Columns: id, title, abstract, authors, published_date, link, markdown
2308.11755
VBMO: Voting-Based Multi-Objective Path Planning
This paper presents VBMO, the Voting-Based Multi-Objective path planning algorithm, that generates optimal single-objective plans, evaluates each of them with respect to the other objectives, and selects one with a voting mechanism. VBMO does not use hand-tuned weights, consider the multiple objectives at every step of search, or use an evolutionary algorithm. Instead, it considers how a plan that is optimal in one objective may perform well with respect to others. VBMO incorporates three voting mechanisms: range, Borda, and combined approval. Extensive evaluation in diverse and complex environments demonstrates the algorithm's ability to efficiently produce plans that satisfy multiple objectives.
Raj Korpan
2023-08-22T19:51:48Z
http://arxiv.org/abs/2308.11755v1
# VBMO: Voting-Based Multi-Objective Path Planning ###### Abstract This paper presents VBMO, the Voting-Based Multi-Objective path planning algorithm, that generates optimal single-objective plans, evaluates each of them with respect to the other objectives, and selects one with a voting mechanism. VBMO does not use hand-tuned weights, consider the multiple objectives at every step of search, or use an evolutionary algorithm. Instead, it considers how a plan that is optimal in one objective may perform well with respect to others. VBMO incorporates three voting mechanisms: range, Borda, and combined approval. Extensive evaluation in diverse and complex environments demonstrates the algorithm's ability to efficiently produce plans that satisfy multiple objectives. ## 1 Introduction Autonomous agents must often simultaneously address multiple competing objectives, such as distance, time, and safety, when path planning. Faced with multiple objectives, previous search-based approaches have either compromised among those objectives at each step in the search process or relied on a mathematical combination of them hand-tuned for a particular environment. The thesis of this work is that voting provides a more efficient way to find consensus among multiple objectives. This paper describes _VBMO_, a voting-based multi-objective path planning approach, where a set of single-objective planners each constructs its own optimal plan with A* and then uses a voting mechanism to select the plan that performs best across all objectives. The algorithm is evaluated in many challenging environments. An optimal graph-search algorithm finds the least cost path from a start vertex to a target vertex. Typically, the algorithm exploits a weighted graph that represents a real-world problem, such as a navigable two-dimensional space. Such a graph \(G=(V,E)\) represents unobstructed locations there as vertices \(V\). Edges \(E\) in \(G\) each connect two vertices, normally only if one can move directly between them. Each edge is associated with a label for the cost to traverse it. For example, if the objective were to minimize path length, edge labels could record the Euclidean distance between pairs of vertices. Without loss of generality, optimization here is assumed to be search for a minimum cost. A single-objective path planner \(SO\) seeks a plan \(P\) in \(G\) that minimizes a single objective \(\beta\), such as distance. A plan \(P\) is _optimal_ with respect to \(\beta\) only if no other plan \(P^{\prime}\) has a lower total cost \(\beta(P)\) for that objective, that is, for every other plan \(P^{\prime},\beta(P)\leq\beta(P^{\prime})\). A _multi-objective_ path planner \(MO\) seeks a plan \(P\) that performs well with respect to a set \(B\) of objectives. It can guide search in that graph with hand-tuned weights that encapsulate its objectives' costs into a single value or it construct a new compromise among its objectives at every plan step. If, for example, \(B=\{\beta_{1},\beta_{2}\}\), where \(\beta_{1}\) is travel distance and \(\beta_{2}\) is proximity to obstacles, \(MO\) would seek a plan \(P\) that scores well on both objectives. Because objectives may conflict, no single plan is likely to be optimal with respect to all of \(B\). Typically, a potential plan will perform better with respect to some \(\beta\)'s and worse with respect to others. 
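To make the graph representation above concrete, here is a minimal Python sketch that builds such a weighted graph from a small occupancy grid, with unobstructed cells as vertices and 8-connected moves labeled by Euclidean length. The grid contents, the `grid_to_graph` helper, and the use of networkx are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): build G = (V, E) from an occupancy
# grid, with unobstructed cells as vertices and Euclidean edge lengths.
import math
import networkx as nx

def grid_to_graph(grid):
    """grid[r][c] == 0 means the cell is unobstructed (a vertex)."""
    G = nx.Graph()
    rows, cols = len(grid), len(grid[0])
    free = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0}
    G.add_nodes_from(free)
    # 8-connected moves: horizontal/vertical edges cost 1, diagonals cost sqrt(2).
    for (r, c) in free:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) == (0, 0):
                    continue
                nbr = (r + dr, c + dc)
                if nbr in free:
                    G.add_edge((r, c), nbr, distance=math.hypot(dr, dc))
    return G

# Hypothetical 4x4 environment: 1 marks an obstacle.
example_grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
]
G = grid_to_graph(example_grid)
print(G.number_of_nodes(), G.number_of_edges())
```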
Let \(B=\{\beta_{1},\beta_{2},\ldots,\beta_{J}\}\) be a set of \(J\) planning objectives with respective costs for plan \(P\) as \(\{\beta_{1}(P),\ldots,\beta_{J}(P)\}\) calculated in a graph labeled with their individual objectives. A plan \(P_{1}\)_dominates_ another plan \(P_{2}\) (\(P_{1}\ll P_{2}\)) when \(\beta(P_{1})\leq\beta(P_{2})\) for every \(\beta\in B\) and \(\beta_{j}(P_{1})<\beta_{j}(P_{2})\) for at least one objective \(\beta_{j}\in B\). Dominance is transitive, that is, if \(P_{1}\ll P_{2}\) and \(P_{2}\ll P_{3}\), then \(P_{1}\ll P_{3}\)(Pardalos _et al._, 2008). Among all possible plans, a non-dominated plan lies on the _Pareto frontier_, the set of all solutions that cannot be improved on one objective without a penalty to another objective (LaValle, 2006). A typical multi-objective planner searches for plans that lie on the Pareto frontier and then an external decision maker chooses among them. A plan is _Pareto optimal_ if and only if no other plan dominates it. We present an algorithm for multi-objective path planning, called _VBMO_, that produces a set of plans efficiently and then uses a voting mechanism to select a plan that is optimal for at least one objective and is guaranteed to be on the Pareto frontier. Unlike other approaches, VBMO does not need to modify the operations of the search algorithm. Instead, it relies on the efficiency of single-objective search and uses post-hoc evaluation and voting to select a non-dominated plan. The next sections provide related work in multi-objective path planning and describe VBMO. The final sections present empirical results and discuss future work. ## 2 Related Work An early approach to multi-objective optimization treated it as a single-objective problem for a simple weighted sum of the objectives [20]. Others addressed individual objectives in a weighted sum with constraints [14], minimum values [15], or ideal values [21]. The weighted sum approach has also been applied to the heuristic function of an optimal search algorithm [16]. All this work, however, required a human expert with knowledge of the relative importance of the objectives to tune the weights [15]. Moreover, small changes in those weights can result in dramatically different plans. Many have used metaheuristics (e.g., evolutionary algorithms) to find non-dominated solutions to multi-objective problems [13]. Those approaches, however, do not guarantee optimality, require tuning many hyperparameters, and are computationally expensive [12]. Furthermore, as the number of objectives increases, the fraction of non-dominated solutions among all solutions approaches one [14] and the size of the Pareto frontier increases exponentially [14]. As a result, methods that seek Pareto dominance break down with more objectives because it becomes more computationally expensive to compare all the potential non-dominated solutions. VBMO avoids this computation on infinitely many points on the surface of the Pareto frontier. Instead, it only ever compares \(|B|\) solutions because it transforms the multi-objective problem into a set of single-objective problems. A*, the traditional optimal search algorithm, requires an admissible heuristic, one that consistently underestimates its objective [15]. Several approaches extend A* to address multi-objective search. Multi-objective A* tracks all the objectives simultaneously as it maintains a queue of search nodes to expand [21]. 
NAMOA* extends multi-objective A* with a queue of partial solution paths instead of search nodes, but it is slow, memory-hungry, and does not scale well [13]. Multi-heuristic A* modifies A* to consider multiple heuristics, some of which can be inadmissible [1]. It interleaves expansion of search nodes selected by an admissible heuristic with expansion of search nodes selected by inadmissible ones. This approach was extended to treat the expansion from inadmissible heuristics as a multi-armed bandit problem [10]. Recent work has sought to identify informative admissible heuristics for an improved version of NAMOA* [11]. Other work has addressed these issues of efficiency and scale but only for two objectives [13, 15]. Also in the bi-objective context, others have focused on finding the extreme supported non-dominated plans on the Pareto frontier [11] or approximate Pareto-optimal solutions [14]. VBMO also produces non-dominated plans on the Pareto frontier but for multiple objectives. Others have sought to extend multi-objective path planning to the multi-agent context with subdimensional expansion [13] and conflict-based search [14].

Some multi-objective approaches draw from social choice theory and voting systems. For example, in multi-attribute utility theory a function evaluates the available choices and selects the one with greatest utility [11]. Given a set of voters that cast a number of votes with respect to a set of _candidates_ (i.e., choices), a _voting method_ selects the winning candidate [21]. The goal of a voting method is to weigh the voters' choices to select a winning candidate that fairly balances all the voters' opinions. Voting methods have incorporated characteristics such as ranking, approval, and scoring [12]. A voter has a _preference_ between two candidates when it selects one over the other according to some criterion [13]. A voter can _rank_ all the candidates according to its preferences [14]. For example, _Borda_ assigns values to the \(c\) candidates based on the voters' rankings: a voter's first choice receives a value of \(c-1\), its second choice a value of \(c-2\), and so on. The candidate with the largest total value is the winner. In other voting methods voters approve or disapprove each candidate rather than create a ranking. For example, in _combined approval voting_ (CAV) voters assign a score of -1, 0, or +1 to indicate disapproval, apathy, or approval, respectively. Lastly, some voting methods allow voters to indicate their level of approval with a score. For example, in _range voting_ each voter gives a score within a given range to each candidate, and the candidate with the highest sum of scores wins.

Figure 1: Each environment is converted to a graph whose topology is shared across all objectives.

Figure 2: Each objective labels edges in the graph differently. Edges are colored in a range from green for low cost to red for high cost. (a) Distance is measured as 1 for vertical and horizontal edges and 1.414 for diagonals. (b) Safety is measured as the maximum degree in the graph (8 in this example) plus 1 minus the average degree of the two vertices joined by the edge. (c) Random costs are generated uniformly from 1 to 20.
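Figure 2's caption specifies the three example objectives precisely enough to reproduce in code. The sketch below labels every edge of a graph like the one built above with distance, safety, and random costs; it assumes the networkx graph `G` from the previous sketch and is an illustration, not the paper's code.

```python
# Minimal sketch: attach one cost per objective to every edge, following the
# descriptions in Figure 2 (distance, safety, uniform-random in [1, 20]).
import random

def label_objectives(G, seed=0):
    rng = random.Random(seed)
    max_degree = max(dict(G.degree()).values())  # 8 in Figure 2's example graph
    for u, v, data in G.edges(data=True):
        # (a) distance: already set to 1 or ~1.414 when the grid graph was built.
        # (b) safety: max degree in the graph plus 1, minus the average degree
        #     of the two endpoints (larger near obstacles and borders).
        data["safety"] = max_degree + 1 - (G.degree(u) + G.degree(v)) / 2
        # (c) random: a uniform cost in [1, 20], e.g. a stand-in for traffic.
        data["random"] = rng.uniform(1, 20)
    return G

# label_objectives(G)  # G from the previous sketch
```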
\begin{table} \begin{tabular}{|l|c c c c c c|r|} \hline
Range & \(\beta_{1}\) & \(\beta_{2}\) & \(\beta_{3}\) & \(\beta_{4}\) & \(\beta_{5}\) & \(\beta_{6}\) & Total Score \\ \hline
\(P_{1}\) & 0 & 0.1 & 0.5 & 0.6 & 0 & 0.7 & 1.9 \\
\(P_{2}\) & 0.1 & 0 & 0.2 & 0.2 & 1 & 0.3 & **1.8** \\
\(P_{3}\) & 0.2 & 0.7 & 0 & 0.1 & 0.7 & 0.2 & 1.9 \\
\(P_{4}\) & 1 & 0.8 & 1 & 0 & 0.3 & 1 & 4.1 \\
\(P_{5}\) & 0.5 & 1 & 0.2 & 1 & 0 & 0.6 & 3.3 \\
\(P_{6}\) & 0.5 & 1 & 0.2 & 0.1 & 1 & 0 & 2.8 \\ \hline
Borda & \(\beta_{1}\) & \(\beta_{2}\) & \(\beta_{3}\) & \(\beta_{4}\) & \(\beta_{5}\) & \(\beta_{6}\) & Total Points \\ \hline
\(P_{1}\) & 6 & 5 & 4 & 3 & 6 & 2 & 26 \\
\(P_{2}\) & 5 & 6 & 5 & 4 & 3 & 4 & 27 \\
\(P_{3}\) & 4 & 4 & 6 & 5 & 4 & 5 & **28** \\
\(P_{4}\) & 2 & 3 & 3 & 6 & 5 & 1 & 20 \\
\(P_{5}\) & 3 & 2 & 5 & 2 & 6 & 3 & 21 \\
\(P_{6}\) & 3 & 2 & 5 & 5 & 3 & 6 & 24 \\ \hline
CAV & \(\beta_{1}\) & \(\beta_{2}\) & \(\beta_{3}\) & \(\beta_{4}\) & \(\beta_{5}\) & \(\beta_{6}\) & Total Value \\ \hline
\(P_{1}\) & 1 & 0 & 0 & 0 & 1 & 0 & **2** \\
\(P_{2}\) & 0 & 1 & 0 & 0 & -1 & 0 & 0 \\
\(P_{3}\) & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\
\(P_{4}\) & -1 & 0 & -1 & 1 & 0 & -1 & -2 \\
\(P_{5}\) & 0 & -1 & 0 & -1 & 1 & 0 & -1 \\
\(P_{6}\) & 0 & -1 & 0 & 0 & -1 & 1 & -1 \\ \hline
\end{tabular} \end{table} Table 1: Each voting mechanism considers the scores \(C_{ij}\) for six plans \(P_{i}\) given six objectives \(\beta_{j}\). Normalization ensures that each plan has minimum score with respect to its own objective. Range voting selects the plan with minimum total score, here \(P_{2}\), Borda voting selects the plan with maximum total points, here \(P_{3}\), and combined approval voting selects the plan with maximum total value, here \(P_{1}\). Although all three methods start with the same set of scores, the differences among the voting mechanisms result in a different plan being selected as \(P_{best}\).

The approach closest to ours formulated multi-objective path planning as a reinforcement learning problem, and voted to select among the actions available at a state based on the expected reward under each objective [11]. It compared ranking, approval, and scoring voting methods. That approach, however, required hundreds of episodes of training and only considered an artificial \(10\times 20\) grid environment with four obstacles. In previous work, we compared performance on a robot navigator's selected plan's objective with a fixed alternative objective to generate natural language explanations [15]. That approach was then used in a cognitively-based robot navigation system that learned three spatial models and compared plans that focused on each model to select and explain the plan that best exploited the robot's models [1]. VBMO was first introduced as a way to generally provide contrastive natural language explanations of multi-objective path planning [16]. VBMO uses a topologically identical graph but with a set of edge labels that represents different objectives. It constructs an optimal plan for each objective, evaluates each plan with every objective, and then selects the plan with voting (i.e., range voting would select the plan with lowest total cost across all of them). This avoids the limitations of other approaches because it addresses each objective independently and then evaluates the resultant plans from the perspective of each planner. In [16], VBMO was only evaluated as a mechanism to produce natural language explanations.
It was examined in a single environment with eight objectives, six of which relied on different aspects of a spatial model learned as the robot navigated through the environment. The result was a set of objectives that were highly correlated and context specific. Here, we generalize to many more environments with diverse configurations and complexities, with different types of objectives and voting mechanisms, and compare performance against a simple weighted sum of the objectives.

## 3 Voting-based Multi-objective Path Planning

VBMO constructs multiple plans, each of which optimizes a single objective, and then uses voting to select the plan that maximally satisfies the most objectives. It uses A* for its graph search algorithm. Pseudocode for VBMO appears in Algorithm 1. First, a graph is created to reflect the topology of the environment such that each vertex is unobstructed and edges are traversable. Figure 1 shows an example of an environment along with its graph representation. Next, the edges are labeled with a cost vector that reflects the cost with respect to each objective. Figure 2 shows an example of the edge costs based on three different objectives in Figure 1's graph. Then VBMO constructs an optimal plan \(P\) with respect to each objective in that graph. In this way, each plan is guaranteed to be optimal for at least one objective. Figure 3 provides an example of four plans generated by VBMO based on four objectives.

Figure 3: An example of four plans from VBMO in the environment of Figure 1. Each plan is optimal in its own objective. When VBMO used range voting, it selected the Safety-based plan because it has the lowest total cost.

Once it assembles the set of plans \(\mathcal{P}\), VBMO uses each objective to evaluate all of them. Because the underlying graph has the same topological structure (vertices and edges), every edge \(e\in E\) in any plan is labeled by all the objectives. To evaluate planner \(SO_{i}\)'s plan \(P_{i}=\langle v_{1},v_{2},\ldots,v_{m}\rangle\) from the perspective of objective \(\beta_{j}\), VBMO sums \(\beta_{j}\)'s edge costs over the sequence of vertices in \(P_{i}\). In this way, each objective \(\beta_{j}\) calculates a _score_ \(C_{ij}\) for each stored plan \(P_{i}\). To avoid any biases that would be introduced by the magnitude of an objective's values, all scores from any \(\beta_{j}\) are normalized to \([0,1]\). Because VBMO seeks to minimize its objectives, a score \(C_{ij}\) near 0 indicates that plan \(P_{i}\) closely conforms to objective \(\beta_{j}\), while a score near 1 indicates that \(P_{i}\) strongly violates \(\beta_{j}\). Once every objective scores every plan, the plan \(P_{best}\) is selected with voting. VBMO has three voting mechanisms available to it. Range voting selects the plan with the lowest total score from all \(J\) planners: \[P_{best}=\underset{P_{i}\in\mathcal{P}}{argmin}\ \sum_{j=1}^{J}C_{ij} \tag{1}\] Borda voting first assigns a rank \(r_{ij}\) to each plan's score \(C_{ij}\) for each objective \(\beta_{j}\) and then assigns points to each plan as \((J+1)-r_{ij}\).
It selects the plan with maximum total points: \[P_{best}=\underset{P_{i}\in\mathcal{P}}{argmax}\ \sum_{j=1}^{J}(J+1)-r_{ij} \tag{2}\] Combined approval voting assigns values \(v_{ij}\) to scores as \[v_{ij}=\begin{cases}-1,&\text{if }C_{ij}=1\\ 1,&\text{if }C_{ij}=0\\ 0,&\text{otherwise.}\end{cases}\] It selects the plan with maximum total value: \[P_{best}=\underset{P_{i}\in\mathcal{P}}{argmax}\ \sum_{j=1}^{J}v_{ij} \tag{3}\] An example with all three voting methods appears in Table 1. It demonstrates that VBMO produces a plan that is optimal for each objective, and that voting methods can balance performance among all the objectives differently, which can change the plan that is selected. VBMO is guaranteed to construct at least one plan on the Pareto frontier (proof shown in [13]). Furthermore, VBMO's voting mechanisms are guaranteed to always select a non-dominated plan. **Theorem 1**.: _Range voting always selects a non-dominated plan._ Proof.: by contradiction. Assume range voting selected a plan \(P_{2}\) that is dominated by plan \(P_{1}\). By the definition of dominance this means that \(\beta(P_{1})\leq\beta(P_{2})\) for every \(\beta\in B\) and \(\beta_{j}(P_{1})<\beta_{j}(P_{2})\) for at least one objective \(\beta_{j}\in B\). Because range voting selects a plan with minimal score, we must minimize \(P_{2}\)'s score while also maintaining the dominance assumption. Suppose that \(\beta(P_{1})=\beta(P_{2})\) for \(\beta_{1},...,\beta_{J-1}\) and is \(\beta_{J}(P_{1})<\beta_{J}(P_{2})\) for only one objective \(\beta_{J}\). Range voting selects the plan with minimum total score \(Score_{i}=\sum_{j=1}^{J}\beta_{j}(P_{i})\). This is equivalent to \(Score_{i}=(\sum_{j=1}^{J-1}\beta_{j}(P_{i}))+\beta_{J}(P_{i})\). When comparing \(Score_{1}\) with \(Score_{2}\), the first term \((\sum_{j=1}^{J-1}\beta_{j}(P_{i}))\) is equal in both, so the only difference is the second term \(\beta_{J}(P_{i})\). As shown earlier, \(\beta_{J}(P_{1})<\beta_{J}(P_{2})\), so the total scores will be \(Score_{1}<Score_{2}\). For range voting to select \(P_{2}\) over \(P_{1}\), however, \(Score_{2}\) must be less than \(Score_{1}\) because it selects the plan with minimum score. Our assumption that range voting selected a plan \(P_{2}\) that is dominated by plan \(P_{1}\) must then be false and since dominance is transitive, no dominated plan could be selected, which proves the theorem must be true. \(\blacksquare\) Similar proofs can trivially show that Borda voting and combined approval voting also always select a non-dominated plan. In summary, VBMO is an efficient multi-objective path planning approach that always identifies and then selects a plan on the Pareto frontier, without reliance on finely-tuned weights. Theorem 1 proves that, given a set of dominated and non-dominated plans, VBMO's range voting mechanism will always select a non-dominated plan because non-dominated plans score lower with respect to at least one objective and therefore have a lower total score. Given \(J\) objectives, VBMO has complexity \(\mathcal{O}(J^{2})\) because each plan is evaluated under each objective. ## 4 Experimental Results We compare VBMO with _Weighted_, which uses A* search on a simple equally-weighted sum of the objectives. These were implemented in Python and evaluated on 156 Dragon Age: Origins (DAO) benchmark grid environments [16] and the NY road network from the 9th DIMACS Implementation Challenge: Shortest Path 1. Table 2 summarizes details about these environments. 
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline
Environment & \# & \begin{tabular}{c} Num. of \\ Nodes \\ \end{tabular} & \begin{tabular}{c} Num. of \\ Edges \\ \end{tabular} & \begin{tabular}{c} Average \\ Degree \\ \end{tabular} \\ \hline
Dragon Age: & 156 & 168 to & 552 to & 6.5 to \\
Origins Grids & & 137,375 & 530,551 & 7.9 \\ \hline
DIMACS NY & 1 & 365,050 & 264,346 & 2.8 \\
Road Network & & & & \\ \hline
\end{tabular} \end{table} Table 2: Test environments

We use Euclidean distance for A*'s heuristic in the DAO environments and Haversine distance for the DIMACS NY road network. Performance is measured by the average total score for the selected plan and the time to compute the plan. For each environment, performance is averaged over 50 randomly selected start and target vertices. All significant differences are at \(p<0.05\).

The first experiment compared performance on the DAO environments with three objectives: distance and random as described in Figure 2, and a uniform cost of 1 on all edges. These three objectives would minimize distance travelled, minimize the number of steps taken, and minimize a random uniform value (e.g., traffic in an environment). With three objectives, the total score of a plan can range from 0 to 2, with 0 indicating that the plan optimally adheres to all three objectives, and 2 indicating that the plan performed worst on the other two objectives. (A plan always receives a score of 0 from its own objective because it optimally minimizes that objective.) VBMO's score and time were significantly better than Weighted's for all the environments and with all three voting mechanisms. Figure 4 compares the distributions of the scores and times. Table 3 shows how frequently each plan was selected by VBMO.

One challenge with using a weighted sum is to weight the objectives so that search is not biased toward any one objective because of the magnitude of its values. In the first experiment the random objective could have a value much larger than the other two objectives, so the simple weighted sum would be biased toward plans that most satisfy that objective. VBMO does not have this problem because it normalizes the scores before calculating the total scores. In the second experiment, the objectives were modified so that their values would be approximately equal. Distance remained the same, uniform was changed to a cost of 1.5 on all edges, and random would select a cost of 1 or 2 for each edge. In this case all three objectives are relatively similar in magnitude. The result is that VBMO with range voting performs significantly worse in terms of average score in 151 of the DAO environments, with no significant difference in the other 5 environments. The average score for VBMO with range voting across the 156 environments is 0.855 compared to 0.536 for Weighted. In this situation, it is possible VBMO performed worse because it is designed to find the plans at the extremes of the Pareto frontier, whereas Weighted can find a plan somewhere in the middle, which could end up having a smaller total score because it does equally well on all three objectives without being optimal in any single one. Despite having worse scores, VBMO was still significantly faster than Weighted, with an average time of 0.396s compared to 0.831s.

The first two experiments used a random objective, which introduces uncertainty. To examine whether uncertainty had an effect on performance, the third experiment used distance, a uniform cost of 1.5, and the safety objective described in Figure 2.
The results showed that of the 156 DAO environments, VBMO with range voting scored significantly better in 118, significantly worse in 2, and no differently in 36. In the 118 environments with better scores, VBMO scored 0.469 on average compared to 0.803 for Weighted. For the two worse environments, VBMO scored 0.328 compared with 0.18 for Weighted. Finally, for the 36 where there was no statistically significant difference, VBMO scored lower on average (0.463) than Weighted (0.539). An examination of the environments' structures, graph density, and average distance to the target did not reveal a readily apparent reason for the environments where VBMO performed better or worse. Current evaluation seeks to identify the environment characteristics that affect VBMO's performance. VBMO was significantly faster than Weighted for all these environments, however, with an average time of 0.121s compared to 0.732s.

Figure 4: The distributions of the scores and times for the first experiment. (a) The average scores for VBMO-Range, VBMO-Borda, VBMO-CAV, and Weighted are 0.904, 0.920, 0.924, and 1.235, respectively. (b) The average times for VBMO-Range, VBMO-Borda, VBMO-CAV, and Weighted are 0.175s, 0.198s, 0.186s, and 0.717s, respectively. Both scores and times are significantly less than Weighted's.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline
 & Distance & Uniform & Random \\ \hline
Range & 76.9\% & 22.8\% & 0.3\% \\ \hline
Borda & 88.5\% & 11.1\% & 0.4\% \\ \hline
CAV & 88.6\% & 11.1\% & 0.3\% \\ \hline
\end{tabular} \end{table} Table 3: In the first experiment, averaged over the 50 runs in each of the 156 environments, voting selected the distance-based plan more often than the other two plans.

The fourth experiment examined performance in a much larger real-world environment, the DIMACS NY road network. This data comes with two objectives already in the data set: distance and time. Table 4 shows the performance of VBMO with range voting on several configurations of objectives in this environment. The first two instances used distance, time, and a uniform cost. VBMO's score was no different when a small uniform cost was used; however, it was significantly worse with a larger uniform cost that was close to the average value of the distance and time objectives, similar to the result of the second experiment. The third instance added a random objective, which resulted in VBMO having a significantly better score. Given that distance and time are 96% correlated in this data set, the last two instances only used distance along with uniform and random costs, both at different scales. In both of these cases, VBMO had a significantly better score. In terms of time, VBMO was significantly faster with three objectives, and no different with four objectives.

## 5 Discussion

This paper describes a flexible, scalable approach for multi-objective path planning, VBMO, that uses voting to select among single-objective planners. VBMO evaluates each plan with respect to each of the objectives. The plan that VBMO selects is optimal in its own objective and is guaranteed to be non-dominated. Both evolutionary methods and VBMO consider a population of solutions and select among them with a kind of fitness function. VBMO, however, does not require multiple iterations to refine its plan. Instead, it starts with at least one plan already on the Pareto frontier and selects a plan that is generally expected to perform well with respect to all the objectives.
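To illustrate the guarantee of Theorem 1, the following sketch applies range voting to a small matrix of normalized scores and checks that the selected plan is not dominated by any other plan; the score values are invented for the example and the helper names are hypothetical.

```python
# Minimal sketch: range voting over normalized scores C[i][j] (plan i, objective j),
# plus a Pareto-dominance check on the selected plan (cf. Theorem 1).

def dominates(a, b):
    """Plan with scores a dominates plan with scores b (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def range_vote(C):
    """Return the index of the plan with the lowest total normalized score."""
    return min(range(len(C)), key=lambda i: sum(C[i]))

# Hypothetical scores for three plans under three objectives, each normalized to
# [0, 1]; the zero diagonal reflects that each plan is optimal in its own objective.
C = [
    [0.0, 0.6, 0.3],
    [0.4, 0.0, 0.9],
    [0.7, 0.8, 0.0],
]
best = range_vote(C)
assert not any(dominates(C[i], C[best]) for i in range(len(C)) if i != best)
print("range voting selects plan", best)
```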
Future work could consider a hybrid approach, that starts with VBMO's plans and then uses an evolutionary method to merge and transform those plans. In this way the best segments of different plans could be combined or problematic portions eliminated. Here, Borda voting and combined approval voting were added to VBMO. Future work could consider additional voting methods for VBMO, such as a Condorcet method, to select a plan because those methods may have properties that more fairly and efficiently balance competing objectives to select among the plans. Although VBMO does not need hand-tuned weights to balance multiple objectives, they could be easily incorporated into any of the voting mechanisms to change an objective's influence on the sum. VBMO uses A* for its graph search algorithm but another optimal graph search algorithm could easily be substituted for it. Future work could examine how other search algorithms would affect VBMO's performance. In several of the experiments conducted here VBMO did not achieve a significantly better score. A potential reason for this could be that Euclidean distance was used as the heuristic for A* across all objectives in the DAO environments. Euclidean distance may not, however, be admissible for some objectives, such as random and uniform, so it may overestimate the cost to the goal. Current work considers alternative heuristics to ensure A* is optimal. VBMO can encounter a task where its set of objectives \(B\) generate plans that are equally poor on all the other objectives. In that case, total scores for all plans are equal and VBMO selects a plan at random, one that should perform well only on its own objective and poorly on the others. That reduces the solution to a planner with a single, randomly chosen objective. Other multi-objective planning methods avoid this difficulty by compromise among all the objectives rather than focus on strong performance from one. To address this issue, VBMO could incorporate additional planners that introduce weighted sums of different objectives so that the planner is forced to find a plan that compromises between them but would do less well on any single objective. Unless these additions were simple, it would become a problem of finding the best set of weights for the objectives (i.e., searching among the infinitely many points on the Pareto frontier). As shown in our previous work, VBMO easily generates contrastive explanations in natural language. These explanations flexibly compare plans with respect to the objectives under consideration and express the controller's confidence in its selected plan. Current work considers how to use a language model to shorten, simplify, and produce more human-like explanations. Although VBMO is applied here to path planning, it is more generally applicable to any multi-objective problem where a solution based on a single objective can be evaluated from the perspective of the other objectives. VBMO also only considers a single agent, future work could extend VBMO to address multi-agent multi-objective path planning. VBMO is an efficient multi-objective path planning approach that generates plans on the Pareto frontier. The plan it selects with voting is guaranteed to be optimal with respect to at least one objective and likely does well on the others. The results with voting-based multi-objective path planning presented here demonstrate that this planning algorithm is fast, flexible, and scalable. 
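Putting the pieces together, here is a hedged end-to-end sketch of the VBMO loop described in Section 3: one A* plan per objective, cross-evaluation of every plan under every objective, per-objective min-max normalization, and selection by range voting. It assumes the labeled networkx graph from the earlier sketches, and the Euclidean heuristic is only guaranteed admissible for the distance objective (as the discussion above notes); none of this is the author's released implementation.

```python
# Minimal sketch of the VBMO pipeline: one optimal single-objective plan per
# objective, cross-evaluation, normalization, and range voting.
import math
import networkx as nx

OBJECTIVES = ["distance", "safety", "random"]

def euclidean(a, b):
    # Admissible for the distance objective; for the other objectives it is only
    # a convenient stand-in (see the discussion of heuristic admissibility above).
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_cost(G, path, objective):
    return sum(G[u][v][objective] for u, v in zip(path, path[1:]))

def vbmo(G, start, goal, objectives=OBJECTIVES):
    # 1. One optimal plan per objective (A* on that objective's edge labels).
    plans = [nx.astar_path(G, start, goal, heuristic=euclidean, weight=obj)
             for obj in objectives]
    # 2. Score every plan under every objective.
    raw = [[plan_cost(G, p, obj) for obj in objectives] for p in plans]
    # 3. Min-max normalize each objective's column into [0, 1].
    C = [row[:] for row in raw]
    for j in range(len(objectives)):
        col = [row[j] for row in raw]
        lo, hi = min(col), max(col)
        for i in range(len(plans)):
            C[i][j] = 0.0 if hi == lo else (raw[i][j] - lo) / (hi - lo)
    # 4. Range voting: the plan with the lowest total normalized score wins.
    best = min(range(len(plans)), key=lambda i: sum(C[i]))
    return plans[best], C

# Example (assuming the labeled graph G from the earlier sketches):
# best_plan, scores = vbmo(G, start=(0, 0), goal=(3, 3))
```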
\begin{table} \begin{tabular}{|l|r|r|r|r|} \hline \multirow{2}{*}{Objectives Used} & \multicolumn{2}{c|}{Score} & \multicolumn{2}{c|}{Time} \\ & VBMO & Weighted & VBMO & Weighted \\ \hline Distance, Time, Uniform = 10 & 1.12 & 1.05 & 3.24* & 3.68 \\ \hline Distance, Time, Uniform = 2000 & 1.05 & 0.78* & 3.07* & 3.56 \\ \hline Distance, Time, Uniform = 20, Random = [1,20] & 1.27* & 1.43 & 4.64 & 4.33 \\ \hline Distance, Uniform = 20, Random = [1, 20] & 0.93* & 1.81 & 3.02* & 3.52 \\ \hline Distance, Uniform = 1000, Random = [1, 37000] & 0.92* & 1.04 & 2.81* & 3.34 \\ \hline \end{tabular} \end{table} Table 4: Comparison of performance with different objectives. Asterisk indicates significant difference between VBMO and Weighted. ## Acknowledgments The author thanks Dr. Susan L. Epstein for her mentorship on the development of VBMO.
2302.12207
Grading and Ranking Large number of candidates
It is common that a jury must grade a set of candidates in a cardinal scale such as {1,2,3,4,5} or an ordinal scale such as {Great, Good, Average, Bad }. When the number of candidates is very large such as hotels (BOOKING), restaurants (GOOGLE), apartments (AIRBNB), drivers (UBER), or papers (EC), it is unreasonable to assume that each jury member will provide a separate grade for each candidate. Each jury member is more likely to abstain for some candidates, cast a blank vote, or be associated at random, or as a function of its expertise, with only a small subset of the candidates and is asked to grade each of those. Extending the classical theory, we study aggregation methods in which a voter will not be eligible to grade all the candidates, and the candidates are not eligible for the same sets of voters. Moreover, each candidate on which they are eligible, the voter will have the choice between: a blank vote, grade the candidate, or abstain. Assuming single-peaked preferences over the grades, we axiomatically characterise a broad class of strategy-proof grading mechanisms satisfying axioms such as unanimity, anonymity, neutrality, participation or consistency. Finally, when a strict ranking is necessary (to distinguish let say between two borderline papers in a conference), some tie-breaking rules, extending the leximin and majority judgment, are defined and are shown to be equivalent to some strategy-proof grading functions on a richer space of outcome. Our paper will propose new rules, called phantom-proxy mechanisms, to aggregate the votes in the examples above or others, which differ from the usual average mark, that are easily manipulable. Moreover, the phantom-proxy are able to reduce the injustices caused by some candidates juries too generous or severe.
Rida Laraki, Estelle Varloot
2023-02-23T18:01:00Z
http://arxiv.org/abs/2302.12207v1
# Grading and ranking a large number of candidates ###### Abstract It is common that a jury must grade a set of candidates in a cardinal scale such as \(\{1,2,3,4,5\}\) or an ordinal scale such as \(\{\)Great, Good, Average, Bad\(\}\). When the number of candidates is very large such as hotels (BOOKING), restaurants (GOOGLE), apartments (AIRBNB), drivers (UBER), or papers (EC), it is unreasonable to assume that each jury member will provide a separate grade for each candidate. Each jury member is more likely to abstain for some candidates, cast a blank vote, or be associated at random, or as a function of its expertise, with only a small subset of the candidates and is asked to grade each of those. Extending the classical theory, we study aggregation methods in which a voter will not be eligible to grade all the candidates, and the candidates are not eligible for the same sets of voters. Moreover, each candidate on which they are eligible, the voter will have the choice between: a blank vote, grade the candidate, or abstain. Assuming single-peaked preferences over the grades, we axiomatically characterise a broad class of strategy-proof grading mechanisms satisfying axioms such as unanimity, anonymity, neutrality, participation or consistency. Finally, when a strict ranking is necessary (to distinguish let say between two borderline papers in a conference), some tie-breaking rules, extending the leximin and majority judgment, are defined and are shown to be equivalent to some strategy-proof grading functions on a richer space of outcome. Our paper will propose new rules, called phantom-proxy mechanisms, to aggregate the votes in the examples above or others, which differ from the usual average mark, that are easily manipulable. Moreover, the phantom-proxy are able to reduce the injustices caused by some candidates juries too generous or severe. _Manuscript submitted for review to the 24nd ACM Conference on Economics & Computation (EC23)._ ## 1. Introduction Have you ever waken up a week before a political election day to find there are 78 candidates on the ballot but you don't know much about most of them? You may have an opinion about five of them, and think two are good and three are bad, but you just don't have the time and the relevant information to study them all. If the ballot asks you to strictly rank all the 78 candidates, it is probably much too complicated. If the ballot asks you to rank just the one you know, this is not a good idea unless a clear meaning is given that the two top are good and the remaining three are bad. If the ballot specifies that you only rank the candidates you think are good then, you should only rank two of the five, but then you are denied giving your opinion about the three you think are bad! The reader might find this unlikely, but this was precisely the situation voters in Australia faced for the 2004 Senate elections in the state of New South Wales. The ballot asks the voters to rank strictly all the candidates, otherwise, the vote is considered invalid. To solve the issue, voters were allowed to "vote above the line" or "vote below the line." where "Above" means they choose the ranking specified by one party and "Below" means the voter determines their own ranking. This leads most Australians (95%) to vote above the line, which implies that the outcome is decided by a strategic game played by the parties. We propose a family of alternative solutions where each voter is asked to grade as many candidates as they want to. 
A method in this family has been adopted in 2020 by Paris for its participatory budget,1 where the sum of the grades received by a project are normalized to 100% then majority judgment (MJ) ranks them [1]. Footnote 1: Voters gave a grade to as many projects as they want in a scale of four grades: \(*\) Coup de coeur/J’adore, J’aime bien / C’est interessant, Pourquoi pas, Je ne suis pas convaincu \(*\). In 2022, 82 million euros was allocated to 62 projects. About 150.000 voters participated (about 130.000 with paper ballots and 20.000 electronically). All projects received at least 1000 grades (some much more), which is statistically representative, and more informative compared to the previous system where several projects received less than a hundred votes. A participatory democratic initiative was proposed by LaPrimaire.org.2 About 150.000 French voters participated in a process3 to nominate a citizen as a candidate for the 2017 French presidential election. An initial slate of about 200 candidates was whittled down to the 12 who were supported by at least 500 voters. Then each voter was asked to evaluate (with a few days delay) five candidates out of the 12 on the scale Excellent, Very Good, Good, Passable, Insufficient. The assignment of five was done randomly4 and the twelve ranked by MJ after the 100% normalization. The reason for not asking to vote on all the 12 was to incite them to invest time to make a careful comparative study.5 Footnote 2: [https://laprimaire.org/election-presidentiale-2017/](https://laprimaire.org/election-presidentiale-2017/) Footnote 3: [https://laprimaire.org/development/](https://laprimaire.org/development/) Footnote 4: One of the objectives of the random selection process was to guarantee that all the candidates are evaluated by approximately the same number of voters and that this number is large enough for the results to be statistically representative, which was the case: each candidate received in average 4454 voters, with a minimum on 4372 and a maximum of 4513. See [https://articles.laprimaire.org/resultats-du-1er-tour-de-laprimaire-org-c8fe612b64cb](https://articles.laprimaire.org/resultats-du-1er-tour-de-laprimaire-org-c8fe612b64cb) Footnote 5: Each candidate wrote a political program with several documents and videos. See for instance the program of the winning candidate Charlotte Marchandise: [https://laprimaire.org/qualife/charlotte-marchandise-franquet](https://laprimaire.org/qualife/charlotte-marchandise-franquet) Motivated by the above applications, our paper studies mechanisms where voters have different, exogenously, rights to grade subsets of the set of candidates. The optimal process to allocate the rights is not studied in this paper and is delegated to a future work. In practice and depending on the application (large or small electorate), it can be done at random as in LaPrimaire.org, as a function of the expertise or of conflict of interest, or as a combination of the previous as in CS conferences. Our mechanisms will allow the voters to abstain or vote blank, as in Paris participatory budgeting. Following [Moulin, 1980a] and [Balinski and Laraki, 2007, 2011, 2020] framework, our paper study the grading methods that associates a final grade to each candidate, given the choices of all the voters on their eligible candidates, where the choices a voter has on a candidate are: a blank vote, a grade, or to abstain. We also propose tie-breaking-rules, whenever a strict ranking is needed (borderline papers as in a CS conference). 
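As a concrete, hedged illustration of the Paris-style aggregation described above: a project's grade tallies are normalized to shares summing to 100%, and its majority-judgment grade is the grade reached when the cumulative share, swept from the best grade down, first exceeds half (one common convention for the lower median). The grade labels are abbreviated from the footnoted scale and the counts are invented.

```python
# Minimal sketch (illustrative numbers): normalize a project's grade counts to
# percentages and return its majority grade (lower median of the distribution).
GRADES = ["J'adore", "J'aime bien", "Pourquoi pas", "Pas convaincu"]  # best -> worst

def majority_grade(counts):
    total = sum(counts[g] for g in GRADES)
    shares = {g: 100 * counts[g] / total for g in GRADES}
    cumulative = 0.0
    for g in GRADES:
        cumulative += shares[g]
        if cumulative > 50:          # first grade past the 50% mark, best grade first
            return g, shares
    return GRADES[-1], shares

counts = {"J'adore": 420, "J'aime bien": 310, "Pourquoi pas": 180, "Pas convaincu": 90}
print(majority_grade(counts))
```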
Assuming single-peaked preferences over the final grades, we characterize a class of strategy-proof grading functions satisfying familiar axioms such as unanimity, anonymity, neutrality, participation, or (variable electorate) consistency; see [Arrow, 1951, Brandt et al., 2016, Moulin, 1991]. But, contrary to classical theory, there are many ways to define some axioms such as neutrality or anonymity because voters don't have the same rights and candidates are not eligible for the same voters.

**Main contributions and literature**. Technically, our paper uses and extends [Moulin, 1980a], where the inputs from voters are interpreted as elements of an ordered set of grades. Interpreted in our model, [Moulin, 1980a] characterized anonymous and non-anonymous strategy-proof methods when \(n\) voters grade one candidate. Our innovations compared to his work are:

* We deal with \(n\) voters and \(m\) candidates where a candidate is eligible for a subset of voters, and a voter may cast a blank vote or abstain for each candidate.
* We assume the set of input grades to be smaller than the set of outputs.
* We characterize all strategy-proof grading functions combined with other axioms not considered in [Moulin, 1980a] and illustrate our results on the phantom-proxy class.
* We propose several formulations of anonymity and show that the definition has an impact on the characterizations. A similar conclusion holds for other axioms.
* We extend the grading methods to rank, by associating them with tie-breaking rules in the spirit of majority judgment, and show that the ranking function may be interpreted as a strategy-proof grading function with a richer output space.

Moulin's paper inspired a large literature that obtained characterizations for other domains or proved impossibility results (see, among many others, [Border and Jordan, 1983], [Nehring and Puppe, 2007], [Barbera et al., 1993], [Caragiannis et al., 2016], and [Freeman et al., 2019b]). The latter article, for example, introduces the class of phantom moving mechanisms, which are anonymous, neutral, and strategy-proof in the budget aggregation problem. Extensions that cover the case of private consumption or capacity constraints have been studied by [Moulin, 2017] and [Aziz et al., 2019]. [Varloot and Laraki, 2022] extends Moulin's phantom mechanisms to expert aggregation problems where each voter submits a probability distribution (a prediction) over an ordered set. Operationally and conceptually, our paper extends the majority judgment theory of [Balinski and Laraki, 2007, 2011]. They introduced and studied a class of grading and ranking methods that avoid Arrow and Condorcet paradoxes and are resistant to strategic manipulations. In their model, \(n\) voters (equally treated) are requested to grade \(m\) candidates (equally treated), and the output is a final grade for each candidate and a ranking of the candidates. Our paper extends their main results and methods to situations where voters don't necessarily have the same weights or rights to vote, candidates and voters are not necessarily treated equally, and voters can abstain or cast blank votes. We also contribute to the literature on incomplete preferences in voting; see [Boutilier et al., 2016] for a survey. For example, [Bentert and Skowron, 2020] approximate the Borda and minmax rules in a context with a large electorate where each voter is asked to rank a random subset of \(l\geq 2\) candidates, or to provide a ranking of her \(l\) most preferred candidates.
[Konczak and Lang, 2005] introduced the notions of possible and necessary winners (PW, and NW) for a voting function \(f\) when we have access to partial information (a partial ranking for example) and computed the complexity of determining whether a candidate is PW or NW. [Lu and Boutilier, 2011] studied the minimax regret instead of PW or NW. Our problem is quite different: we search for a good rule to aggregate the grades when our partial information is caused by some candidates being ineligible for some voters, and some voters not having any opinion on the candidates they are eligible for and prefer to abstain or vote blank or they have an opinion but prefer to abstain strategically. **Structure of the paper**: Section 2 contains all the notations needed to follow the paper. Section 3 studies the incentive-compatible methods when a voter think of manipulating by changing his input grade, introduces the class of phantom-proxy mechanisms, and illustrates the general characterization of the proxy class. Section 4 studies the incentives for a voter implied by having, for each candidate, options other than grading: blank votes and abstention. We define and link the following properties: how to count Blank Votes (BV), Silent Ignored (SI), Silent Consent (SC), two forms of Participation (P and FP), Jury Determinism (JD), and Strong strategy-proofness. Section 5 characterizes the SP methods that satisfy additional properties such as Unanimity (U), three forms of Neutrality (N, SN and F), two of Anonymity (A and SA), and two of variable electorate Consistency (OC and IC). Section 6 extends each (OC,F) proxy grading function to a ranking function with almost no ties and shows that it can be interpreted as a strategy-proof grading function in a richer output space. Section 7 discusses some extensions and concludes. An appendix contains the missing proofs. ## 2. Notations There are 3 main outputs we will be interested in. The final (or aggregate) grade a candidate gets, the tie breaking rule to rank any two candidates that get the same grade, and the final ranking among all the candidates. As voters will have different rights, can abstain or vote blank, we will have to deal with too many subsets. To make it easy for the reader, we put all the notations here. * Voters will be described with small cap letters (e.g \(i,j\)) and the set of voters is \(\mathcal{N}\). The set is assumed finite. * Candidates will be described with capital letters (e.g \(I,J\)) and the set of candidates is \(\mathcal{M}\). * For a voter \(i\), let \(\mathcal{C}_{i}\subseteq\mathcal{M}\) be the finite set of candidates on which voter \(i\) is allowed to vote. * \(\mathcal{C}^{\prime}{}_{i}\subseteq\mathcal{C}_{i}\) is the set of candidates voter \(i\) provided a grade for. * For a candidate \(J\), let \(\mathcal{D}_{J}\subseteq\mathcal{N}\) be the set of voters that were asked to grade \(J\). * \(\mathcal{D}^{\prime\prime}{}_{J}(\mathbf{v})\) is the set of voters that gave a grade to candidate \(J\) in the voting profile \(\mathbf{v}\). When the context is clear we will use the shorthand \(\mathcal{D}^{\prime\prime}{}_{J}\). * Grades will use Greek letters (e.g \(\alpha,\beta\)). The set of grades that can be expressed by voters ( inputs) is \(\mathcal{A}\). The set of grades that can be provided by the grading function (outputs) is \(\mathcal{B}\). As it is the case in practice, we assume \(\mathcal{A}\subseteq\mathcal{B}\) and that a total order exists on \(\mathcal{B}\). 
An example is \(\mathcal{A}=\{1,2,3,4,5\}\) and \(\mathcal{B}=[1,5]\). Without loss of generality we consider that \(\{\inf\mathcal{A},\sup\mathcal{A}\}\subseteq\mathcal{B}\).
* The **ballot** of voter \(i\) is \(v_{i}\). The notation \(v_{i}(J)\) corresponds to the opinion voter \(i\) expressed about candidate \(J\). It is the **vote** of voter \(i\) for \(J\). We also use the shorthand \(\mathbf{v}(J)\) to describe the multi-set (bag) of **grades** provided for \(J\).
* We also need notation to describe situations that do not correspond to a voter supplying a grade to a candidate. When \(J\notin\mathcal{C}_{i}\) we denote the vote as \(v_{i}(J)=\emptyset\) (the absence of a possible expressed opinion). For voters who submit a blank vote for a given candidate we use the notation \(v_{i}(J)=\otimes\). We distinguish these from voters who chose to ignore the opportunity to submit a vote; these are referred to as absentees and are denoted with \(v_{i}(J)=\circ\). We denote by \(\mathcal{E}=\{\emptyset,\circ,\otimes\}\) all the possible inputs that are not grades, and by \(\mathcal{O}=\mathcal{A}\cup\mathcal{E}\) all the possible submitted votes together with the absence of a possible vote.
* The voting profile containing all the votes for the election is \(\mathbf{v}\in\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\). We write \(\mathbf{v}_{-i}\) for the voting profile where voter \(i\) was removed.
* The notation \(\mathbf{v}_{-T}\) refers to the profile obtained from \(\mathbf{v}\) where all voters \(i\) that are in \(T\subseteq\mathcal{N}\) have their allowed votes replaced by \(\otimes\).
* For any \(\alpha\in\mathcal{O}\), we also use the notation \(v_{i}[J:=\alpha]\) to denote the ballot of \(i\) where the vote that \(i\) provided for candidate \(J\) is replaced by \(\alpha\), everything else equal. Similarly, \(\mathbf{v}[v_{i}:=w_{i}]\) represents the voting profile where the ballot of \(i\) was replaced by \(w_{i}\), and \(\mathbf{v}[v_{i}(J):=\alpha]\) the voting profile where the vote of \(i\) for \(J\) was replaced by \(\alpha\), everything else equal.
* We are therefore looking for a **grading function** \(\varphi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow\mathcal{B}^{\mathcal{M}}\).

## 3. Strategy-proofness in grading: SP

### The general minmax characterization with the phantom mappings

We will assume that each voter's objective is to try to make the outcome for any candidate (determined by the inputs and the aggregation rule) as close as possible to her (true) grade for that candidate. Hence, strategy-proofness in grading can be described as follows.

Definition 3.1 (Strategy-proof in Grading: SP).: A grading function \(\varphi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is strategy-proof (or incentive compatible) in grading if for any voter \(i\), for any candidate \(J\in\mathcal{C}_{i}\), and for any \(v_{i}\) and \(w_{i}\) such that \(v_{i}(J)\in\mathcal{A}\) and \(w_{i}(J)\in\mathcal{A}\) we have: \[\varphi(\mathbf{v})(J)>v_{i}(J)\Rightarrow\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\geq\varphi(\mathbf{v})(J)\] \[\varphi(\mathbf{v})(J)<v_{i}(J)\Rightarrow\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\leq\varphi(\mathbf{v})(J)\] In other words, a voter who graded a candidate shouldn't be able to move the outcome for that candidate closer to their input grade by lying. It is interesting that this definition implies that a voter does not question their impact on the candidates they are not allowed to vote for.
We will come back to this question later when we study stronger notions of SP. Also, strategy-proofness as defined above does not prevent a vote by \(i\) for a candidate \(I\) from impacting the grade of another candidate \(J\) if \(i\) is not eligible for \(J\). However, \(i\) cannot impact \(J\) in this way if \(i\) is allowed to vote for \(J\), as the following useful lemma shows.

Lemma 3.2.: _SP implies that if \(i\) graded \(I\) and \(J\), its vote about \(I\) does not impact the outcome for \(J\)._

Proof.: Let \(i\) be allowed to vote for both \(I\) and \(J\). Let \(w_{i}(J)=v_{i}(J)\) and \(w_{i}(I)\neq v_{i}(I)\), everything else equal. Suppose that \(\varphi(\mathbf{v})(J)>v_{i}(J)\). \[\varphi(\mathbf{v})(J)>v_{i}(J)\Rightarrow\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\geq\varphi(\mathbf{v})(J).\] Therefore \(\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)>w_{i}(J)\). It follows that \(\varphi(\mathbf{v})(J)\geq\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\geq\varphi(\mathbf{v})(J)\), therefore: \[\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)=\varphi(\mathbf{v})(J).\] The same argument applies to the remaining cases \(\varphi(\mathbf{w})(J)>w_{i}(J)\), \(\varphi(\mathbf{v})(J)<v_{i}(J)\), and \(\varphi(\mathbf{w})(J)<w_{i}(J)\). In every case, \(\varphi(\mathbf{v})(J)=\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\).

We can now characterize all the SP grading functions. The proof is a direct consequence of [Moulin, 1980a] and the above lemma.

**Theorem 3.3** (SP general characterization).: _If a grading function \(\varphi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\to(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is strategy-proof (SP) then for any \(J\) there are \(2^{\sharp\mathcal{D}_{J}}\) functions \(\omega_{J,S}^{T}:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\to\mathcal{B}\) such that for all \(\emptyset\subseteq S\subseteq S^{\prime}\subseteq T\subseteq\mathcal{D}_{J}\) we have \(\omega_{J,S}^{T}\leq\omega_{J,S^{\prime}}^{T}\) and:_ \[\forall\mathbf{v},\quad\varphi(\mathbf{v})(J)=\max_{\emptyset\subseteq S\subseteq\mathcal{D}^{\prime\prime}_{J}}\min\big(\{v_{i}(J):i\in S\}\cup\{\omega_{J,S}^{\mathcal{D}^{\prime\prime}_{J}}(\mathbf{v}_{-\mathcal{D}^{\prime\prime}_{J}})\}\big)\] _and we will call the \(\omega_{J,S}^{T}\) the "phantom mappings"._

Proof.: Let us fix \(J\), and let us fix the ballots of all voters that do not provide a grade for \(J\). Then by Moulin's theorem (Proposition 3 in [Moulin, 1980a]), there exist \(2^{n}\) constants \(\alpha_{S}\) (called phantoms) such that: \[\varphi(\mathbf{v})(J)=\max_{\emptyset\subseteq S\subseteq\mathcal{D}^{\prime\prime}_{J}}\min(\{\alpha_{S}\}\cup\{v_{i}(J):i\in S\}).\] It follows that when we no longer consider the ballots of voters that did not provide a grade for \(J\) as fixed, we replace the \(\alpha_{S}\) by functions \(\omega_{J,S}^{T}\). According to Lemma 3.2, the \(\omega_{J,S}^{T}\) functions only depend on the ballots of voters that did not grade \(J\). As clearly any \(\varphi\) of this form is (SP), we have characterised the class of methods.

The proof looks easy, but only because we are using the non-trivial result of [Moulin, 1980a]. In the sequel we will refine and simplify the maxmin formula by adding axioms until we obtain something similar to the famous median formula of [Moulin, 1980a] for strategy-proof rules in the anonymous case. But before that, we introduce the SP family of phantom-proxy mechanisms, which can be described using order statistics and which will be our leading example.
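As a numerical aside on the structure behind Theorem 3.3, the sketch below brute-forces the max-min formula in the anonymous, one-candidate special case, where the phantom mappings reduce to constants \(\alpha_{0}\leq\dots\leq\alpha_{n}\) depending only on the coalition size, and compares it with the median-with-phantoms formula of [Moulin, 1980a] mentioned above. The random test harness is purely an illustration, not part of the paper.

```python
# Minimal sketch: in the anonymous one-candidate case, the max-min form of a
# strategy-proof rule with phantoms a[0] <= a[1] <= ... <= a[n] (a[k] used for
# coalitions of size k) coincides with the median of the grades and phantoms.
from itertools import combinations
from statistics import median
import random

def maxmin_rule(grades, phantoms):
    n = len(grades)
    best = phantoms[0]                      # S = empty set contributes a[0]
    for k in range(1, n + 1):
        for S in combinations(grades, k):
            best = max(best, min((phantoms[k],) + S))
    return best

rng = random.Random(1)
for _ in range(200):
    n = rng.randint(1, 4)
    grades = [rng.randint(0, 10) for _ in range(n)]
    phantoms = sorted(rng.randint(0, 10) for _ in range(n + 1))
    assert maxmin_rule(grades, phantoms) == median(grades + phantoms)
print("max-min with ordered phantoms matches the median with phantoms on all trials")
```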
When \(\mathcal{A}=\{0,1,2,3,4,5\}\) and \(\mathcal{B}=[0,5]\), the restriction in the input space is likely due to a desire to keep the process simple for voters, even if in theory the regulator wouldn't have minded giving the voters more freedom. It makes sense to consider that the regulator chose a mechanism defined on the full grade space \(\mathcal{B}\) based on its good properties and then restricted the possible input votes to \(\mathcal{A}\). It is interesting to note that (SP) mechanisms can be extended to an (SP) mechanism \(\Psi:(\mathcal{B}\cup\mathcal{E})^{\mathcal{M}\times\mathcal{N}}\to(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\): any arbitrary extension of the \(\omega_{J,S}^{T}\) phantom mappings will do. However, we provide most of the characterizations for \(\mathcal{A}\subset\mathcal{B}\), unless the characterization is more elegant with \(\mathcal{A}=\mathcal{B}\).

### The phantom-proxy grading functions

Let us now consider a situation where a group of voters (the jury) is expected to grade a large number of candidates, for example the grading of the GCSE. The regulator may learn from that jury's votes what their criteria are regarding the candidates they grade, and therefore use that to predict which grade they would give to all the candidates. This can be used to reduce the bias caused by the diversity of the juries and the fact that some of them are more generous than others. In this subsection, we introduce a class of methods that are based on this idea: **the phantom-proxy mechanisms**.

The concept behind these mechanisms is simple. For a given candidate \(J\), if a voter \(i\) submitted a blank vote or was not allowed to vote (\(v_{i}(J)\in\{\otimes,\emptyset\}\)), then a proxy vote, based on what we know about the voter and the candidate, will be designated to replace their vote. Absentee votes are either provided another proxy or removed from the process entirely. We therefore have for each candidate \(J\) a multi-set of grades that each correspond to a different voter, either because they graded \(J\) or because we have a proxy vote that represents them. We call this multi-set the **voting pool** for \(J\). Once we have this pool, we can simply select one of its elements in an unbiased way. To do this we use the order functions.

**Definition 3.4** (Order functions): _The order function \(\mu_{k}:\mathcal{B}^{N}\to\mathcal{B}\) is the function that takes a multi-set (also known as a bag) as its input and returns the \(k\)-th smallest element of that multi-set._

**Remark 1**: _The order functions are the only single-peaked aggregation functions that always output one of their inputs and whose output doesn't change when the inputs are permuted ([Balinski and Laraki, 2011], chapter 11), but this is also immediate from the anonymous median characterization of [Moulin, 1980a]._

This characterization tells us that the order functions are the only ones guaranteed to select a vote in the available voting pool without any bias with regard to which voter is associated with which vote.
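A minimal sketch of Definition 3.4: the order function \(\mu_{k}\) simply returns the \(k\)-th smallest element of a multiset, and the lower median is the order statistic most relevant to the majority grade discussed below. The sample pool values are invented.

```python
# Minimal sketch: the order function mu_k returns the k-th smallest element of
# a multiset; the lower median is one natural order statistic to select.
def mu(k, bag):
    return sorted(bag)[k - 1]          # 1-indexed k-th smallest

pool = [3, 5, 2, 4, 2, 5]              # hypothetical voting pool for one candidate
k_lower_median = (len(pool) + 1) // 2  # 3rd smallest of 6 values
print(mu(1, pool), mu(len(pool), pool), mu(k_lower_median, pool))  # 2 5 3
```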
**Definition 3.5** (Phantom-proxy mechanisms): _A mechanism \(\psi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\to(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is phantom-proxy if for each voter \(i\) and each candidate \(J\) there is a function \(f_{i,J}:\mathcal{O}^{\mathcal{M}}\to(\mathcal{B}\cup\{\emptyset\})\) s.t._ \[v_{i}(J)\in\mathcal{A}\ \vee\ \big(\forall I,\,v_{i}(I)\in\{\emptyset,\otimes\}\big)\ \Rightarrow\ f_{i,J}(v_{i})=\emptyset\] _and a function \(g_{J}:\mathbf{N}\to\mathbf{N}\), where for all \(k\), \(0<g_{J}(k)\leq k\), such that:_ \[\psi(\mathbf{v})(J)=\mu_{g_{J}(\#\mathcal{D}''_{J}+\#\mathcal{F}_{J}(\mathbf{v}))}\big(\mathbf{v}(J)\cup\mathcal{F}_{J}(\mathbf{v})\big)\] _where \(\mathcal{F}_{J}(\mathbf{v})=\{f_{i,J}(v_{i})\,|\,f_{i,J}(v_{i})\neq\emptyset\}\) is the multi-set containing the proxy votes for \(J\)._ We use the notation \(\mathcal{F}_{J}\) to represent the function that takes a voting profile \(\mathbf{v}\) and returns \(\mathcal{F}_{J}(\mathbf{v})\). With an ordinal scale, a well-known phantom-proxy grading function is the so-called majority grade of [Balinski and Laraki, 2007, 2011]. This is a grading method where each candidate's grade is the smallest median value of the votes it received. It corresponds to the phantom-proxy mechanism where all the \(f_{i,J}\) functions are constants equal to \(\emptyset\) and the \(g_{J}\) function is \(g_{J}(k)=\lceil k/2\rceil\). With a cardinal scale, an intuitive phantom-proxy grading function \(\psi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\to(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) would be for every voter to be replaced by the average of the grades he gave, and not to have a proxy if he never gave a grade: \[\forall v_{i},\ \exists I,\,v_{i}(I)\in\mathcal{A}\Rightarrow f_{i,J}(v_{i})=\frac{1}{\#\mathcal{C}'_{i}}\sum_{I\in\mathcal{C}'_{i}}v_{i}(I),\] \[\forall v_{i},\ \forall I,\,v_{i}(I)\in\mathcal{E}\Rightarrow f_{i,J}(v_{i})=\emptyset.\] For example, with \(\mathcal{N}=\{x,y,z\}\) and \(\mathcal{M}=\{I,J\}\), for \(v_{x}=(1,\emptyset),v_{y}=(\emptyset,3),v_{z}=(2,2)\), with \(g_{I}\) the min order function and \(g_{J}\) the max order function, we obtain \(\psi(\mathbf{v})=(1,3)\). **Proposition 3.6**: _All phantom-proxy mechanisms \(\psi\) are SP._ The proof of the proposition is in the appendix (B.1). Since all phantom-proxy mechanisms are SP, it follows that we can characterize them using the phantom-mapping functions. **Proposition 3.7** (Characterization of phantom-proxy): _For any \(S\) and \(T\) and \(\mathbf{v}\), let \(p=g_{J}(\#T+\#\mathcal{F}_{J}(\mathbf{v}))\). Let \(k=\#S-\#T+p\)._ * _If_ \(k\leq 0\) _then_ \(\omega_{J,S}^{T}(\mathbf{v}_{-T})=\inf\mathcal{B}\)_._ * _If_ \(k>\#\mathcal{F}_{J}(\mathbf{v})\) _then_ \(\omega_{J,S}^{T}(\mathbf{v}_{-T})=\sup\mathcal{B}\)_._ * _Else_ \(\omega_{J,S}^{T}(\mathbf{v}_{-T})=\mu_{k}(\mathcal{F}_{J}(\mathbf{v}))\)_._ _(Proof B.2.)_ Intuitively, the only values that a grading function can select aside from its inputs are the phantom-mappings, and the only such values in a phantom-proxy mechanism are the proxy votes; hence every phantom-mapping outcome must either coincide with one of the proxy votes or never be selected by the mechanism. (The sketch below reproduces the numerical example above.)
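The following Python sketch (an illustration, not part of the formal model) implements the average-proxy mechanism and reproduces the three-voter example above, with `None` standing in for the non-grade symbols:

```python
def mu(k, bag):
    """Order function: the k-th smallest element of a multiset (k starts at 1)."""
    return sorted(bag)[k - 1]

# Ballots: None stands in for "no grade" (ineligible, blank, or absent).
votes = {"x": {"I": 1, "J": None},
         "y": {"I": None, "J": 3},
         "z": {"I": 2, "J": 2}}

def average_proxy(ballot):
    """Proxy vote: the average of the grades the voter actually gave, if any."""
    graded = [g for g in ballot.values() if g is not None]
    return sum(graded) / len(graded) if graded else None

def grade(candidate, g):
    """Phantom-proxy grade: apply the order statistic chosen by g to the voting pool."""
    pool = []
    for ballot in votes.values():
        given = ballot[candidate]
        pool.append(given if given is not None else average_proxy(ballot))
    pool = [x for x in pool if x is not None]
    return mu(g(len(pool)), pool)

print(grade("I", lambda n: 1))  # min order function -> 1
print(grade("J", lambda n: n))  # max order function -> 3
```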
Take an example: if \(g_{J}\) is such that we must select the smallest element in the voting pool, then, if all grades in \(\mathbf{v}(J)\) are larger than or equal to the smallest proxy vote, that proxy vote is selected. Therefore, we must have \(\omega_{J,T}^{T}(\mathbf{v}_{-T})=\min\mathcal{F}_{J}(\mathbf{v})\). Otherwise, if there is at least one grade from \(\mathbf{v}(J)\) that is less than the smallest proxy vote, then no proxy vote can be selected, so \(\omega_{J,S}^{T}(\mathbf{v}_{-T})<\inf\mathcal{A}\). Since we need to be certain that \(\omega_{J,S}^{T}\leq\omega_{J,T}^{T}\), we can select \(\omega_{J,S}^{T}(\mathbf{v}_{-T})=\inf\mathcal{B}\). ## 4. Incentives and Impact of Non-Graders There are numerous ways to treat blank votes and absentees. This paper makes the suggestions we believe make sense when the two are to be treated differently. It is important to note that the regulator may choose to count them in the same way, as is often done in practice, where blank votes and absentees are ignored and thus have the same (no) impact on the outcome of the election. In political elections, a blank vote symbolically represents a protest vote. In a grading system, however, if you want to protest against a candidate, you can just give them a dreadful grade such as Terrible. As such, in a grading model, a blank vote will be interpreted as a conscious decision to ask to be removed from the decision process. Think of the referees who grade the papers submitted to EC. A referee might not feel competent to express herself on a paper out of her field, or might have a conflict of interest. In both cases, the referee might want to be excluded from grading that paper. On the other hand, an absentee decided to ignore the chance to vote, and so in a sense entrusted the result to the rest of the voters. As such they can be considered to have voted for the outcome, since they agree with what the other voters decided. Hence, absentees will contribute to strengthening our trust in the election (because they may be counted as if they voted for the outcome, hence as if they follow the judgment of the colleagues who voted), while blank votes will weaken that trust (because they represent the desire to be removed from the election). In real life, examples where the distinction is made exist. In France the "Parti du vote blanc" (blank vote party) was created for those who wished protest votes to count: if it receives most of the votes, the election is redone with new candidates. In other words, Blank is considered as a candidate who, when elected, causes all other candidates to be rejected, and the debate may continue with a new set of candidates. ### Blank Votes : BV Recall that blank votes are represented by \(\otimes\) and absentees by \(\circ\). In order to better describe the notion of blank vote, let us define the notion of a completely ineligible voter. A voter is considered completely ineligible if there does not exist a candidate for which they may vote. Such a voter can be removed from the electorate without any impact on the outcome. Axiom 1 (Removing completely ineligible voters). _A grading function \(\varphi\) verifies the axiom if for any \(i\) such that \(\mathcal{C}_{i}=\emptyset\), we do not distinguish between \(\mathcal{N}\) and \(\mathcal{N}\setminus\{i\}\)._ Many of the situations our model represents assume that proper grading of the candidates is time consuming. As such, we can easily imagine that a lazy voter would wish to be considered completely ineligible. Similarly, in certain situations a voter \(i\) may wish to be considered ineligible to vote for a single candidate \(J\). 
For example, if the voter is aware of a conflict of interest (as in peer review), it would make sense that they can submit a blank vote to inform the regulator that there has been a mistake and that they should not have been granted the right to vote for the candidate. The regulator should be able to adapt by determining what the outcome would have been if he had not granted \(i\) the right to vote for \(J\). It is immediate that a lazy voter can therefore request to be considered completely ineligible; in other words, they can ask to be removed from the process. Definition 4.1 (Blank votes : BV).: A function \(\lambda:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow\left(\mathcal{B}\cup\{\emptyset\}\right)^{\mathcal{M}}\) respects blank votes (BV) if for all candidates \(J\), for all voters \(i\), for all \(\mathbf{v}\) where \(v_{i}(J)=\otimes\), if \(\mathbf{w}=\mathbf{v}[v_{i}(J)\coloneqq\emptyset]\) then: \[\lambda(\mathbf{v})=\lambda(\mathbf{w}).\] Interestingly enough, not being allowed to vote for a candidate is not the same as not being able to affect the candidate. As such, unless a voter uses (BV) to make himself ineligible, then depending on the situation he may still have an impact on the final outcome. In our current setting, it is perfectly possible for a voter that was not allowed to vote for a candidate to affect that candidate, thanks to the votes she provided for other candidates. This is somewhat counter-intuitive, as some of the versions we suggest for dealing with absentees will, however, completely remove their ability to affect the candidate. [Blank vote characterization] _An SP grading function \(\varphi\) respects blank votes (BV) iff the phantom-mappings \(\omega_{J,S}^{T}\) associated to \(\varphi\) respect blank votes. (Proof C.1.)_ It is important to realize that the intuitive implication is one direction only. Being able to replace a blank vote \(\otimes\) with an ineligibility notification \(\emptyset\) does not mean we can replace an ineligibility notification with a blank vote. A voter can demand to be removed from the election but he cannot request permission to join. In other words, the notion of blank vote implies that the regulator has to prepare the mechanism in such a way that he considers the possibility of removing additional voting rights. In our setting, when a voter provides a (BV) for candidate \(J\), it usually represents his desire to let the regulator act as if he had not been given eligibility to vote for \(J\). In terms of phantom-proxies, this therefore describes sufficient trust in the ability of the proxy vote to properly represent him. _A phantom-proxy mechanism \(\psi\) respects blank votes (BV) iff the \(f_{i,J}\) proxy functions associated to \(\psi\) respect blank votes. (Proof C.2.)_ An example of a phantom-proxy mechanism that does not verify (BV) would be if, for a given \(i\) and \(J\), we had \(f_{i,J}(v_{i})=med(\{v_{i}(I):I\in\mathcal{C}'_{i}\})\) when \(v_{i}(J)=\otimes\) and \(f_{i,J}(v_{i})=\emptyset\) when \(v_{i}(J)=\emptyset\). ### Ignoring the absentees : SI One of our options for dealing with absentee voters is simply to ignore them completely. In other words, if a voter \(i\) abstained for \(J\), the regulator proceeds for \(J\) as if the voter \(i\) never existed from the perspective of \(J\). This is quite different from (BV), where the outcome for \(J\) can still take into account the way \(i\) voted elsewhere. 
In particular, when considering a phantom-proxy mechanism, since the voter did not provide a blank vote we cannot trust that he is willing to allow a proxy to represent him, and as such it makes sense not to use one. Therefore, since we do not know what to do with the absentee voter, we might as well just remove him from the process. [Who is silent is ignored : SI] A grading function \(\varphi\) verifies the "who is silent is ignored" rule if for any candidate \(J\) and any voter \(i\) we have: \[\forall\mathbf{v}\in\mathcal{O}^{\mathcal{M}\times\mathcal{N}},\ v_{i}(J)=\circ\Rightarrow\varphi(\mathbf{v})(J)=\varphi(\mathbf{v}[\forall I,\,v_{i}(I)\coloneqq\emptyset])(J).\] [Ignoring absentees characterization] _An SP grading function \(\varphi\) verifies the "who is silent is ignored" (SI) property iff we can select the phantom-mappings \(\omega_{J,S}^{T}\) such that they verify (SI). (Proof C.3.)_ _A phantom-proxy mechanism \(\psi\) verifies the "who is silent is ignored" (SI) property iff for all \(i,J\) we have:_ \[\forall v_{i},\ v_{i}(J)=\circ\Rightarrow f_{i,J}(v_{i})=\emptyset.\] _(Proof C.4.)_ ### Silent consent rule : SC Our basic notion for an absentee is based on the expression "who stays silent consents" and depicts the fact that a voter who abstained might as well have voted for the outcome. In other words, we should be able to act as if the outcome for \(J\) when \(i\) abstains for \(J\) is the same as if \(i\) had voted that outcome. While the intuition for the "silent consent rule" is relatively straightforward, it also requires that voter \(i\) can vote for the outcome he would have obtained, and this is not necessarily the case. As such, let us start with the awkward definition, in which one can only consent when the outcome is in \(\mathcal{A}\), and its characterization, before moving on to the more intuitive approach where we extend the set of possible grades from \(\mathcal{A}\) to \(\mathcal{B}\) so that our absentee voter can vote for any possible outcome. **Definition 4.7** (Who is silent consents rule : SC).: A grading function \(\varphi\) verifies the "who is silent consents" rule if for any candidate \(J\), any voter \(i\) and any \(\alpha\in\mathcal{A}\) we have: \[\forall\mathbf{v}\in\mathcal{O}^{\mathcal{M}\times\mathcal{N}},\ v_{i}(J)=\circ\wedge\varphi(\mathbf{v})(J)=\alpha\Rightarrow\varphi(\mathbf{v}[v_{i}(J):=\alpha])(J)=\varphi(\mathbf{v})(J).\] Lemma 4.8 (SC characterization).: _An SP function \(\varphi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow\mathcal{B}^{\mathcal{M}}\) verifies the SC property if for all candidates \(J\), all sets \(\emptyset\subseteq S\subseteq T\subseteq\mathcal{D}_{J}\), all voters \(i\notin T\) and all profiles \(\mathbf{v}\), the phantom-mappings verify, for every \(\alpha\in\mathcal{A}\):_ \[\omega_{J,S}^{T}(\mathbf{v}_{-T})\leq\alpha\leq\omega_{J,T}^{T}(\mathbf{v}_{-T})\Rightarrow\omega_{J,S}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}})\leq\alpha\] \[\omega_{J,\emptyset}^{T}(\mathbf{v}_{-T})\leq\alpha\leq\omega_{J,S}^{T}(\mathbf{v}_{-T})\Rightarrow\alpha\leq\omega_{J,S\cup\{i\}}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}})\] _(Proof C.5.)_ As observed above, an SP mechanism can always be extended so that \(\mathcal{A}=\mathcal{B}\). In that case we have the following. 
Theorem 4.9 (Silent consent characterization).: _When \(\mathcal{A}=\mathcal{B}\), an SP grading function \(\varphi\) verifies the silent consent rule (SC) if for all candidates \(J\), all sets \(\emptyset\subset S\subset T\subseteq\mathcal{D}_{J}\), all voters \(i\notin T\) and all profiles \(\mathbf{v}\), the phantom-mappings verify:_ \[\omega_{J,S}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}})\leq\omega_{J,S}^{T}((\mathbf{v}[v_{i}(J):=\circ])_{-T})\leq\omega_{J,S\cup\{i\}}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}}).\] _(Proof C.6.)_ Corollary 4.10.: _When \(\mathcal{A}=\mathcal{B}\), an SP grading function \(\varphi\) verifies SI and SC iff its associated phantom-mappings \(\omega_{J,S}^{T}\) verify:_ \[\omega_{J,S}^{T\cup\{i\}}(\mathbf{v}_{-T})\leq\omega_{J,S}^{T}((\mathbf{v}_{-T})_{-i})=\omega_{J,S}^{T}((\mathbf{v}_{-T})_{-i},v_{i}[J:=\circ])\leq\omega_{J,S\cup\{i\}}^{T\cup\{i\}}(\mathbf{v}_{-T}).\] Proposition 4.11.: _A phantom-proxy mechanism \(\psi\) verifies the SC rule iff for all \(p\in\mathbf{N}\) we have \(g_{J}(p+1)\in\{g_{J}(p),g_{J}(p)+1\}\). (Proof C.7.)_ ### Participation : P and FP Since our objective is to get voters to grade as many candidates as possible, we need to be sure that they never benefit from an absentee vote. That is to say, if a voter has an ideal grade for a candidate, then he should not be able to get closer to that grade by abstaining for that candidate. This is the Participation property (also known as the no no-show paradox). We define two notions of Participation, one stronger than the other. #### 4.4.1. Participation : P. Definition 4.12 (Participation : P).: A grading function \(\varphi\) verifies the Participation property (P) if for any candidate \(J\) and any voter \(i\) with \(v_{i}(J)\in\mathcal{A}\): \[\varphi(\mathbf{v})(J)>v_{i}(J) \Rightarrow\varphi(\mathbf{v}[v_{i}(J)\coloneqq\circ])(J)\geq\varphi(\mathbf{v})(J)\] \[\varphi(\mathbf{v})(J)<v_{i}(J) \Rightarrow\varphi(\mathbf{v}[v_{i}(J)\coloneqq\circ])(J)\leq\varphi(\mathbf{v})(J)\] Remark 2.: _Even if \(\varphi\) is not SP, participation (P) implies the "silent consent" rule (SC). (Proof C.8.)_ Theorem 4.13 (Participation characterization).: _An SP grading function \(\varphi\) verifies participation (P) if for all candidates \(J\), all sets \(\emptyset\subset S\subset T\subseteq\mathcal{D}_{J}\), all voters \(i\notin T\), and all profiles \(\mathbf{v}\), the phantom-mappings verify:_ \[\omega_{J,S}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}})\leq\omega_{J,S}^{T}((\mathbf{v}[v_{i}(J)\coloneqq\circ])_{-T})\leq\omega_{J,S\cup\{i\}}^{T\cup\{i\}}(\mathbf{v}_{-T\cup\{i\}}).\] _(Proof C.9.)_ Theorem 4.14 (Relationship between P and SC).: _When \(\mathcal{A}=\mathcal{B}\), in the (SP) context, the participation (P) and "silent consent" (SC) properties are equivalent._ Proposition 4.15.: _A phantom-proxy mechanism verifies participation (P) iff it verifies the silent consent rule (SC). (Proof C.10.)_ #### 4.4.2. Full Participation : FP. In full participation (FP), voters cannot benefit from blank votes \(\otimes\) either. 
Definition 4.16 (Full Participation : FP).: A grading function \(\varphi\) verifies full participation (FP) if for any candidate \(J\), any voter \(i\) with \(v_{i}(J)\in\mathcal{A}\) and any \(\epsilon\in\{\circ,\otimes\}\): \[\varphi(\mathbf{v})(J)\geq v_{i}(J) \Rightarrow\varphi(\mathbf{v}[v_{i}(J)\coloneqq\epsilon])(J)\geq\varphi(\mathbf{v})(J)\] \[\varphi(\mathbf{v})(J)\leq v_{i}(J) \Rightarrow\varphi(\mathbf{v}[v_{i}(J)\coloneqq\epsilon])(J)\leq\varphi(\mathbf{v})(J)\] Theorem 4.17 (Full participation characterization).: _An (SP) grading function \(\varphi\) verifies full participation (FP) if for all candidates \(J\), all sets \(S\subseteq T\subseteq\mathcal{D}_{J}\), all voters \(i\notin T\) and all voting profiles \(\mathbf{v}\), we have \(\omega_{J,S}^{T}(\mathbf{v}_{-T})\leq\omega_{J,S}^{T\cup\{i\}}((\mathbf{v}[v_{i}(J)\coloneqq\otimes])_{-T})\leq\omega_{J,S}^{T}(\mathbf{v}_{-T}).\)_ Proposition 4.18.: _Any phantom-proxy mechanism that verifies (P) verifies (FP). (Proof C.10.)_ ### Jury Determinism : JD We may also wish for the aggregated grade of a candidate not to be affected by the other candidates, but only to depend on the votes of his assigned jury. Definition 4.19 (Jury Determinism : JD).: A grading function \(\varphi\) is jury determined (JD) iff for all \(\mathbf{v}\) and \(\mathbf{w}\): \[\varphi(\mathbf{w}[w_{i}(J)\coloneqq v_{i}(J):\forall i\in\mathcal{N}])(J)=\varphi(\mathbf{v})(J)\] Lemma 4.20.: _An (SP) grading function \(\varphi\) is jury determined (JD) iff all phantom-mappings \(\omega_{J,S}^{T}\) only depend on the identity of the absentees for \(J\) and on \(\mathcal{D}_{J}\). (Proof C.11.)_ It is unsurprising to find that JD and phantom-proxy mechanisms do not combine into a large class of mechanisms. The purpose of JD is to reduce the impact of non-eligible voters on the outcome, whereas the class of phantom-proxy mechanisms is intended as a means to provide grades that represent the non-eligible voters. Intuitively, the two conflict, as the following proposition shows: it shows how limited phantom-proxy mechanisms become in a JD setting, where there is almost no information available for the proxy to learn how to imitate its voter. **Proposition 4.21**: _A phantom-proxy mechanism \(\psi\) verifies jury determinism (JD) iff for all \(f_{i,J}\) functions there is a function \(\tilde{f}_{i,J}:\mathcal{E}\rightarrow\mathcal{B}\) such that:_ \[\forall v_{i},\ f_{i,J}(v_{i})=\tilde{f}_{i,J}(v_{i}(J)).\] _(Proof C.12.)_ ### Strong strategy-proofness : Strong SP This is SP except that we now consider that you might have an opinion on a candidate you were not asked to vote for. If this happens, then you should not be able to impact its outcome in a way that makes its grade closer to your opinion. **Definition 4.22** (Strong strategy-proofness : Strong SP).: A grading function \(\varphi\) is strongly strategy-proof if for any voter \(i\), any candidate \(J\), any \(\alpha\in\mathcal{A}\) and any \(\mathbf{v}\) and \(w_{i}\) such that if \(v_{i}(J)\in\mathcal{A}\) then \(v_{i}(J)=\alpha\), and \(C_{i}(\mathbf{w})=C_{i}(\mathbf{v})\): \[\varphi(\mathbf{v})(J)>\alpha\Rightarrow\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\geq\varphi(\mathbf{v})(J)\] \[\varphi(\mathbf{v})(J)<\alpha\Rightarrow\varphi(\mathbf{v}[v_{i}:=w_{i}])(J)\leq\varphi(\mathbf{v})(J).\] Strong SP states that even an expert who did not give a grade cannot manipulate the outcome, including when they were not offered the option to grade the candidate. This almost implies jury determinism. **Lemma 4.23**: _Strong SP is equivalent to (SP, FP, JD)._
## 5. Additional properties and their characterisations In the following sections we describe different properties (or axioms) that are considered desirable. ### Unanimity : U Unanimity usually corresponds to the situation where all voters agree on something. It is quite natural that in such an instance we want the outcome to agree as well. In our setting, we desire a stronger notion of unanimity: when all voters that graded a candidate agree on the grade, then the outcome for that candidate should be that grade. **Definition 5.1** (Unanimity : U).: A grading function \(\varphi\) is unanimous iff: \[\forall\mathbf{v},\forall J,\forall\alpha\in\mathcal{A},\ \big(\forall i,\,v_{i}(J)\in\{\alpha\}\cup\mathcal{E}\big)\wedge\big(\exists i,\,v_{i}(J)=\alpha\big)\Rightarrow\varphi(\mathbf{v})(J)=\alpha.\] **Theorem 5.2**: _An SP grading function \(\varphi\) is unanimous iff the phantom-mappings satisfy:_ \[\forall J,\ \forall\,\emptyset\subset T\subseteq\mathcal{D}''_{J},\ \omega_{J,\emptyset}^{T}\leq\inf\mathcal{A}\wedge\omega_{J,T}^{T}\geq\sup\mathcal{A}.\] _(Proof D.1.)_ Interestingly enough, we can see that if \(\varphi\) is unanimous then, so long as \(J\) received at least one grade, the outcome for \(J\) is always between \(\inf\mathcal{A}\) and \(\sup\mathcal{A}\) included. It is known that in single-peaked settings, Unanimity is equivalent to Pareto Optimality (Weymark, 2011). It is also true in our setting (see appendix F). This means that (SP,U) is equivalent to: for a fixed candidate \(J\), no grade \(\beta\in\mathcal{B}\) could be closer to the grade a voter provided without being further away from the grade another voter provided. Unanimity is not a natural property to obtain when considering phantom-proxy mechanisms. In order to have a unanimous phantom-proxy mechanism, we would need to be sure that the number of proxy votes is never greater than the number of grades provided; in the instance where only one grade was provided, we therefore cannot have any proxy votes (unless the \(g_{J}\) function always happens to select the right outcome). **Proposition 5.3**.: _A phantom-proxy mechanism \(\psi\) is unanimous (U) iff \(\mathcal{F}_{J}=\emptyset\)._ ### Neutrality : N, SN and F A decision maker is neutral if it is considered a fair independent judge that treats all candidates equally. The decisions it makes should only depend on the information provided for the decision process and not on any personal preferences regarding the candidates. We therefore expect that if two candidates swapped their names (without telling the decision maker), everything else being equal, their positions would be swapped in the outcome. In order to promote fairness we therefore wish to introduce a notion of neutrality. However, the notion of neutrality conflicts with restricted voting: how can we expect the regulator to treat candidates the same when he has already created unequal treatment through the voting rights? Our first notion of neutrality (N) is therefore very restrictive: the regulator is only required to be undiscriminating between two candidates if he gave them the same set of voters. The stronger version of neutrality (SN) is more permissive, as voting rights are not taken into account. We also introduce fairness (F), a weak form of neutrality that is specific to phantom-proxy mechanisms. #### 5.2.1. Neutrality : N In the case of restricted neutrality (or simply neutrality), we can only swap the votes of two candidates if they have the same rights to vote. 
We therefore expect the notion of restricted neutrality to be linked to the ability to bind a candidate to the set of voters that may vote for it. **Definition 5.4** (Neutrality: N).: A function \(\lambda:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is neutral (N) if for any two candidates \(I\) and \(J\) such that \(\mathcal{D}_{I}=\mathcal{D}_{J}\): \[\varphi(\mathbf{v}[v_{i}[I:=v_{i}(J);J:=v_{i}(I)]:\forall i\in\mathcal{D}_{I}])=\varphi(\mathbf{v})[I:=\varphi(\mathbf{v})(J);J:=\varphi(\mathbf{v})(I)]\] The following theorem is therefore relatively expected. **Theorem 5.5**.: _An SP grading function \(\varphi\) verifies N if there exist neutral functions \(\omega_{U,S}^{T}\), where \(T\subseteq U\subseteq\mathcal{N}\), such that if \(\mathcal{D}_{J}=U\) then \(\omega_{J,S}^{\mathcal{D}''_{J}}(\mathbf{v}_{-\mathcal{D}''_{J}})=\omega_{U,S}^{\mathcal{D}''_{J}}(\mathbf{v}_{-\mathcal{D}''_{J}})\). (Proof D.2.)_ **Proposition 5.6**.: _Let \(\bigcup_{S}\mathcal{M}_{S}=\mathcal{M}\) be the partition of \(\mathcal{M}\) defined by \(J\in\mathcal{M}_{S}\) if \(\mathcal{D}_{J}=S\). A phantom-proxy mechanism \(\psi\) is N iff for all \(i\) and all \(S\subseteq\mathcal{N}\) we have a neutral function \(f_{i,S}\) and a function \(g_{S}\) such that \(f_{i,J}=f_{i,S}\) and \(g_{J}=g_{S}\) if \(J\in\mathcal{M}_{S}\). (Proof D.3.)_ #### 5.2.2. Strong Neutrality : SN In strong neutrality we can also swap the rights to vote. It implies that the regulator should not care who has which rights. For example, the allocation of rights to vote may be random, or once the rights to vote were allocated the regulator forgot about them. Alternatively, the regulator may simply create the mechanism without defining the rights to vote yet, intending to use the (BV) property to determine all values once he has decided on the rights to vote. **Definition 5.7** (Strong neutrality : SN).: A method \(\varphi\) is strongly neutral (SN) if for any two candidates \(I\) and \(J\): \[\varphi(\mathbf{v}[v_{i}[I:=v_{i}(J);J:=v_{i}(I)]:\forall i])=\varphi(\mathbf{v})[I:=\varphi(\mathbf{v})(J);J:=\varphi(\mathbf{v})(I)]\] **Theorem 5.8** (Strong Neutrality characterization).: _An (SP) grading method \(\varphi\) verifies (SN) iff there exist strongly neutral functions \(\omega_{S}^{T}\) such that for all \(J\) we have \(\omega_{J,S}^{T}=\omega_{S}^{T}\). (Proof D.4.)_ In other words, a grading function is strongly neutral if we use the same phantom-mappings \(\omega_{S}^{T}\) for all the candidates. This was to be expected: the easiest way to ensure that the outcome does not depend on the candidates' identities is to ensure that no step in the process depends on their identities. **Proposition 5.9**.: _A (BV) phantom-proxy mechanism \(\psi\) verifies (SN) iff there are strongly neutral functions \(f_{i}\) such that \(f_{i,J}=f_{i}\) for all \(J\), and there is a function \(g\) such that for all \(J\) we have \(g_{J}=g\). (Proof D.5.)_ #### 5.2.3. Fairness : F. In the specific case of phantom-proxy mechanisms we have an additional notion of neutrality. The intuition behind the phantom-proxy mechanisms is that voters can be represented by the grade they submitted or by some proxy vote. Once the voting pool is selected, the mechanism decides who the winner is. A phantom-proxy mechanism is considered fair if two different candidates that produced the same voting pool get the same outcome. 
**Definition 5.10** (Fairness : F).: A phantom-proxy mechanism \(\psi\) is fair iff for any two candidates \(I\) and \(J\): \[\mathbf{v}(I)\cup\mathcal{F}_{I}(\mathbf{v})=\mathbf{v}(J)\cup\mathcal{F}_{J}(\mathbf{v})\Rightarrow\psi(\mathbf{v})(I)=\psi(\mathbf{v})(J)\] **Theorem 5.11**.: _A phantom-proxy mechanism \(\psi\) is fair iff there exists \(g\) such that for all \(J\), \(g_{J}=g\)._ ### Anonymity : A and SA In the classical setting, a voting method is anonymous if the identity of a voter does not impact the outcome of the election; in other words, any two voters can swap their ballots without impacting the outcome. In our model with rights, this notion is troublesome: the identity of a voter determines which candidates he has the right to vote for. It would therefore be natural that two voters without the same rights to vote cannot swap their ballots. We are thus left with two different anonymity notions: the one where we are restricted and can only swap ballots if the voters have the same rights to vote, and the one where any two ballots can be swapped, as if we were ignoring the rights to vote. #### 5.3.1. Anonymity : A. Just like with neutrality, we consider that two voters can only swap their ballots if they have the same rights to vote. **Definition 5.12** (Anonymity).: A function \(\lambda:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is anonymous (A) if for any \(\mathbf{v}\) and any \(\mathbf{w}\) obtained from \(\mathbf{v}\) by switching the ballots of two voters that have the same rights to vote we have: \[\varphi(\mathbf{v})=\varphi(\mathbf{w}).\] Let \(\bigcup_{M\subseteq\mathcal{M}}\mathcal{N}_{M}\) be the partition of \(\mathcal{N}\) such that if \(\mathcal{C}_{i}=M\) then \(i\in\mathcal{N}_{M}\). This partition provides us with the sets of voters that can swap their votes. We say that \(T\) and \(U\) have the same partition-cardinal if \(\forall M\subseteq\mathcal{M},\ \#(\mathcal{N}_{M}\cap T)=\#(\mathcal{N}_{M}\cap U)\). **Theorem 5.13** (Anonymous characterization).: _An SP grading function \(\varphi\) is anonymous iff all phantom-mappings are anonymous and, for all \(J\), if \(T\) and \(T^{\prime}\) have the same partition-cardinal, and if \(S\subseteq T\) and \(S^{\prime}\subseteq T^{\prime}\) have the same partition-cardinal, then \(\omega_{S}^{T}=\omega_{S^{\prime}}^{T^{\prime}}\). (Proof D.6.)_ **Proposition 5.14**.: _A (BV) phantom-proxy mechanism \(\psi\) verifies anonymity (A) iff for all \(J\) and all \(M\subseteq\mathcal{M}\) there is an \(f_{M,J}\) such that if \(\mathcal{C}_{i}=M\) then \(f_{i,J}=f_{M,J}\). (Proof D.7.)_ Here is an example with 3 voters \(\{x,y,z\}\) and 2 candidates \(\{I,J\}\). Let \(\mathcal{C}_{x}=\{I,J\}\) and \(\mathcal{C}_{y}=\mathcal{C}_{z}=\{I\}\). Then if \(f_{y,I}=f_{z,I}\) and \(f_{y,J}=f_{z,J}\), the mechanism is anonymous; otherwise it isn't. #### 5.3.2. Strong Anonymity : SA. An intuitive anonymity situation is strong anonymity. This represents situations where the regulator treats all voters the same regardless of their voting rights (for example if there are many voters and they were assigned at random, like in LaPrimaire.org). **Theorem 5.15** (Strong Anonymous characterization).: _An (SP) grading function \(\varphi:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow(\mathcal{B}\cup\{\emptyset\})^{\mathcal{M}}\) is strongly anonymous iff for each \(J\) there exist \(\frac{\#\mathcal{D}_{J}(\#\mathcal{D}_{J}+1)}{2}\) strongly anonymous phantom-mappings \(\omega_{J,k}^{d}:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\to\mathcal{B}\) such that for all \(0\leq k<d\leq\#\mathcal{D}_{J}\) we have \(\omega_{J,k}^{d}\leq\omega_{J,k+1}^{d}\) and:_ \[\forall\mathbf{v},\ \varphi(\mathbf{v})(J)=\mathrm{med}\big(\mathbf{v}(J),\omega_{J,0}^{\#\mathcal{D}''_{J}}(\mathbf{v}_{-\mathcal{D}''_{J}}),\dots,\omega_{J,\#\mathcal{D}''_{J}}^{\#\mathcal{D}''_{J}}(\mathbf{v}_{-\mathcal{D}''_{J}})\big).\] _(Proof D.8.)_ This is perhaps the closest formula we have to the (Moulin, 1980a) median in the anonymous case. This is largely due to the fact that, just like with the proof of theorem 3.3, we can use the (Moulin, 1980b) formula as the first step of the proof of this theorem. **Remark 3**.: _As in (Moulin, 1980a), to obtain a unanimous rule we just need to remove \(\omega_{J,0}^{d}\) and \(\omega_{J,d}^{d}\)._ **Proposition 5.16**.: _A (BV) phantom-proxy mechanism \(\psi\) verifies strong anonymity (SA) iff for all \(J\) there is a function \(f_{J}\) such that for all \(i\), \(f_{i,J}=f_{J}\). (Proof D.9.)_
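As an illustration of the median formula in Theorem 5.15, here is a minimal Python sketch of a strongly anonymous SP rule for one candidate; the phantom values are explicit toy numbers (an assumption of the example, not values fixed by the theory). With \(d\) grades and \(d+1\) phantoms, the combined multiset has odd size, so the median is unambiguous.

```python
def lower_median(values):
    """Smallest median of a multiset (the left middle value when the size is even)."""
    s = sorted(values)
    return s[(len(s) - 1) // 2]

def strongly_anonymous_grade(grades, phantoms):
    """grades: the d grades actually given to the candidate.
    phantoms: d + 1 non-decreasing phantom values omega_0 <= ... <= omega_d."""
    assert len(phantoms) == len(grades) + 1
    return lower_median(list(grades) + list(phantoms))

# Same toy data as in the earlier max-min sketch: grades 2 and 4, phantoms 0, 3, 5.
print(strongly_anonymous_grade([2, 4], [0, 3, 5]))  # 3
```

With these numbers it returns the same grade as the max-min sketch after Theorem 3.3, as expected.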
### Consistency : OC and IC Let us consider a situation where the regulator performed his survey in two different towns and obtained the same grade for a candidate in both. It is natural to consider that, had the regulator performed a single survey on the total population of both towns, the candidate should have gotten that same grade. This is the idea of consistency (in grading). We omit "grading" as we will not discuss consistency in ranking, for lack of space. Consistency was first introduced and studied in the context of social choice (resp. welfare) functions by (Young, 1975) and (Smith, 1973). In this section we consider two versions of consistency. In the first, every voter is assigned to one of the two sets we wish to merge; this is the most common version of consistency in the literature. In the second, we only require the two groups of voters that provided grades to be disjoint. #### 5.4.1. Outer Consistency : OC In outer consistency, the two sets that we wish to merge are considered completely disjoint. As such, a voter is considered ineligible when the other set is being studied and fully represented when it is part of the set under study. **Definition 5.17** (Outer consistency : OC).: A grading function \(\varphi\) is outer-consistent (OC) iff for all partitions \(\mathcal{N}_{1}\cup\mathcal{N}_{2}=\mathcal{N}\) of the set of voters, all voting profiles \(\mathbf{t}\) and all candidates \(J\), if \(\mathbf{v}\) and \(\mathbf{w}\) are the voting profiles defined as \(\mathbf{v}=\mathbf{t}_{-\mathcal{N}_{1}}\) and \(\mathbf{w}=\mathbf{t}_{-\mathcal{N}_{2}}\) then we have: \[\varphi(\mathbf{v})(J)=\varphi(\mathbf{w})(J)\Rightarrow\varphi(\mathbf{v})(J)=\varphi(\mathbf{t})(J)\] Note that the previous definition almost implies (BV), to a certain extent. 
**Theorem 5.18**.: _An (SP,BV) grading function \(\varphi\) verifies outer consistency (OC) iff for all \(J\), all \(\mathbf{t}\) and all partitions \(\mathcal{N}_{1}\cup\mathcal{N}_{2}=\mathcal{N}\), if \(\mathbf{v}=\mathbf{t}_{-\mathcal{N}_{1}}\) and \(\mathbf{w}=\mathbf{t}_{-\mathcal{N}_{2}}\), then for all \(S\subseteq\mathcal{D}''_{J}(\mathbf{v})=T\) and \(S^{\prime}\subseteq\mathcal{D}''_{J}(\mathbf{w})=T^{\prime}\) we must verify:_ 1. \[\omega_{J,S}^{T}(\mathbf{v}_{-T})=\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}})\Rightarrow\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})=\omega_{J,S}^{T}(\mathbf{v}_{-T})\] 2. \[\max(\omega_{J,\emptyset}^{T}(\mathbf{v}_{-T}),\omega_{J,\emptyset}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\alpha\leq\min(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] \[\Rightarrow\ \omega_{J,\emptyset}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})\leq\alpha\leq\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})\] 3. \[\max(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\alpha\leq\min(\omega_{J,T}^{T}(\mathbf{v}_{-T}),\omega_{J,T^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] \[\Rightarrow\ \omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})\leq\alpha\leq\omega_{J,T\cup T^{\prime}}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})\] _(Proof D.10.)_ Corollary 5.19.: _When \(\mathcal{A}=\mathcal{B}\), an (SP,BV) grading function \(\varphi\) verifies outer consistency (OC) iff for all \(J\), all \(\mathbf{t}\) and all partitions \(\mathcal{N}_{1}\cup\mathcal{N}_{2}=\mathcal{N}\), if \(\mathbf{v}=\mathbf{t}_{-\mathcal{N}_{1}}\) and \(\mathbf{w}=\mathbf{t}_{-\mathcal{N}_{2}}\), then for all \(S\subseteq\mathcal{D}''_{J}(\mathbf{v})=T\) and \(S^{\prime}\subseteq\mathcal{D}''_{J}(\mathbf{w})=T^{\prime}\) we must verify:_ \[\min(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{t}_{-(T\cup T^{\prime})})\leq\max(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] Proposition 5.20.: _A (BV) phantom-proxy mechanism \(\psi\) is outer-consistent iff for all \(J\) and all \(k,k^{\prime}\) we have \(g_{J}(k+k^{\prime})\in\{g_{J}(k)+g_{J}(k^{\prime})-1;\ g_{J}(k)+g_{J}(k^{\prime})\}\). (Proof D.11.)_ Remark 4.: _A (BV,OC) phantom-proxy mechanism \(\psi\) satisfies participation (P)._ #### 5.4.2. Inner Consistency : IC Inner consistency is a stronger notion of consistency where, if we have two incomplete voting profiles for the same set of voters and both voting profiles agree on the grade for \(J\), then we would like the outcome of the merged profile to agree as well. **Definition 5.21** (Inner consistency : IC).: Let \(\mathbf{t}\) be the voting profile where all voters were allowed to vote. Let \(\mathbf{v}\) and \(\mathbf{w}\) be voting profiles obtained from \(\mathbf{t}\) by removing the right to vote of some experts for some candidates. 
A grading function \(\varphi\) is inner-consistent (IC) iff for any such \(\mathbf{t},\mathbf{v},\mathbf{w}\), if for a candidate \(J\) the profiles \(\mathbf{v}\) and \(\mathbf{w}\) have disjoint sets of voters with the right to vote for \(J\), then: \[\varphi(\mathbf{v})(J)=\varphi(\mathbf{w})(J)\Rightarrow\varphi(\mathbf{v})(J)=\varphi(merge(\mathbf{v},\mathbf{w}))(J)\] where merge is the function defined by: for any \((i,J)\), \(v_{i}(J)\neq\emptyset\Rightarrow merge(\mathbf{v},\mathbf{w})_{i}(J)=v_{i}(J)\). Theorem 5.22.: _An (SP,BV) grading function \(\varphi\) verifies inner consistency (IC) if for all \(J\) and all voting profiles \(\mathbf{v}\) and \(\mathbf{w}\) obtained from a same \(\mathbf{t}\) by removing votes such that \(\mathcal{D}''_{J}(\mathbf{v})\) and \(\mathcal{D}''_{J}(\mathbf{w})\) are disjoint, we have that for all \(S\subseteq\mathcal{D}''_{J}(\mathbf{v})=T\) and \(S^{\prime}\subseteq\mathcal{D}''_{J}(\mathbf{w})=T^{\prime}\), \(\mathbf{z}=merge(\mathbf{v},\mathbf{w})\) must verify:_ 1. \[\omega_{J,S}^{T}(\mathbf{v}_{-T})=\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}})\Rightarrow\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})=\omega_{J,S}^{T}(\mathbf{v}_{-T})\] 2. \[\max(\omega_{J,\emptyset}^{T}(\mathbf{v}_{-T}),\omega_{J,\emptyset}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\alpha\leq\min(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] \[\Rightarrow\ \omega_{J,\emptyset}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})\leq\alpha\leq\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})\] 3. \[\max(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\alpha\leq\min(\omega_{J,T}^{T}(\mathbf{v}_{-T}),\omega_{J,T^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] \[\Rightarrow\ \omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})\leq\alpha\leq\omega_{J,T\cup T^{\prime}}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})\] _(Proof D.12.)_ **Corollary 5.23**.: _When \(\mathcal{A}=\mathcal{B}\), an (SP,BV) grading function \(\varphi\) verifies inner consistency (IC) if for all \(J\) and all voting profiles \(\mathbf{v}\) and \(\mathbf{w}\) obtained from a same \(\mathbf{t}\) by removing votes such that \(\mathcal{D}''_{J}(\mathbf{v})\) and \(\mathcal{D}''_{J}(\mathbf{w})\) are disjoint, we have that for all \(S\subseteq\mathcal{D}''_{J}(\mathbf{v})=T\) and \(S^{\prime}\subseteq\mathcal{D}''_{J}(\mathbf{w})=T^{\prime}\), \(\mathbf{z}=merge(\mathbf{v},\mathbf{w})\) must verify:_ \[\min(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\leq\omega_{J,S\cup S^{\prime}}^{T\cup T^{\prime}}(\mathbf{z}_{-(T\cup T^{\prime})})\leq\max(\omega_{J,S}^{T}(\mathbf{v}_{-T}),\omega_{J,S^{\prime}}^{T^{\prime}}(\mathbf{w}_{-T^{\prime}}))\] ## 6. Ranking: An iterative tie-breaking for the proxy-mechanisms The mechanisms that we have considered so far allow us to produce grades which we can use to rank. The main issue with this grading approach is that ties are expected and frequent, especially if we use a grading function whose output is one of its inputs and the input space \(\mathcal{A}\) is small. 
For example, if we have only six possible input grades, such as {Excellent, Good, Acceptable, Weak, Bad, Terrible}, then since the proxy-mechanisms return one of these grades, with seven or more candidates two will be in a tie. It is thus quite natural that, if the aggregate grades have any meaning such as merit, two candidates that do not have the same final grade must be ranked according to the order induced by their respective final grades. Our only issue is therefore how to break the ties. In this section we introduce an intuitive tie-breaking rule for the phantom-proxy mechanisms, extending the one used in (Balinski and Laraki, 2007, 2011) for majority judgment. Importantly, as the (Gibbard, 1973) and (Satterthwaite, 1975) theorems forbid the existence of strategy-proof mechanisms when the objective is to rank, one cannot expect our ranking methods to be strategy-proof in a ranking context. However, like majority judgment, proxy ranking methods can be proved to be partially strategy-proof in ranking (see (Balinski and Laraki, 2011), section 13.1). ### Ranking when the voting pools have the same size Let \(\psi\) be an (F) phantom-proxy grading function, which is, by definition, defined for a variable electorate. For any \(J\), let \(\mathcal{V}(J)\coloneqq\mathbf{v}(J)\cup\mathcal{F}_{J}(\mathbf{v})\) be the voting pool for \(J\). We therefore have that \(\psi(\mathbf{v})(J)=\mu_{g(\#\mathcal{V}(J))}(\mathcal{V}(J))\). Let \(I\) and \(J\) be two candidates we wish to rank using a phantom-proxy \(\psi\) (not necessarily (BV) or (OC)) and suppose in this section (to facilitate the understanding) that: \[\forall I,J\in\mathcal{M},\ \#(\mathbf{v}(I)\cup\mathcal{F}_{I}(\mathbf{v}))=\#(\mathbf{v}(J)\cup\mathcal{F}_{J}(\mathbf{v}))\] and \[\forall i\in\mathcal{N},\ v_{i}(J)\in\mathcal{A}\lor f_{i,J}(v_{i})\in\mathcal{B}\Rightarrow\forall I,\ v_{i}(I)\in\mathcal{A}\lor f_{i,I}(v_{i})\in\mathcal{B}\] Having made these assumptions (to be relaxed in the next section), let us explain how the tie-breaking rule associated to a proxy \(\psi\) ranks candidate \(J\) compared to \(I\); call this order \(<_{\psi}\). It is natural to assume that if \(\psi(\mathbf{v})(J)<\psi(\mathbf{v})(I)\) (the final grade of \(J\) according to \(\psi\) is strictly smaller than that of \(I\)), then \(J<_{\psi}I\). Therefore, if we are unable to distinguish \(I\) and \(J\) according to \(\psi\), that means that there is an \(\alpha\) such that \(\psi(\mathbf{v})(J)=\psi(\mathbf{v})(I)=\alpha\) and: \[\alpha\in\mathcal{V}(I)\cap\mathcal{V}(J).\] If there is a voter \(i\) that voted \(\alpha\) for both candidates, he is satisfied by the outcome (both have \(i\)'s grade as final grade), and \(i\) is satisfied with either tie-breaking choice between \(I\) and \(J\). It therefore makes sense to remove \(i\) (or equivalently \(\alpha\)) from the ranking process to allow the other voters (in particular those with an incentive to participate in the tie-breaking process) to decide how the tie must be broken. If we accept this logic, to compare \(I\) and \(J\) we should compare \(\psi(\mathbf{v}[\forall K,v_{i}(K):=\emptyset])(I)\) and \(\psi(\mathbf{v}[\forall K,v_{i}(K):=\emptyset])(J)\). If they can distinguish the two candidates we are done; if not, we repeat the process iteratively until we can distinguish the two candidates or otherwise declare them equally competent (which happens only if they have identical voting pools, very unlikely in practice). Following this logic, we generate a ranking function \(<_{\psi}\), better described if we associate to each candidate \(J\) a voting range \(R_{J}^{\psi}:\mathcal{O}^{\mathcal{M}\times\mathcal{N}}\rightarrow(\mathcal{B})^{\mathcal{N}}\) generated by the following algorithm: 1. \(R_{J}^{\psi}:=[]\); \(S:=\emptyset\); 2. If \(\psi(\mathbf{v}[\forall K,\forall i\in S,\ v_{i}(K):=\emptyset])(J)=\emptyset\): return \(R_{J}^{\psi}\), END; 3. Else \(\alpha:=\psi(\mathbf{v}[\forall K,\forall i\in S,\ v_{i}(K):=\emptyset])(J)\); \(R_{J}^{\psi}:=(R_{J}^{\psi},\alpha)\); 4. Find \(i\in\mathcal{N}\setminus S\) such that \(v_{i}(J)=\alpha\) or \(f_{i,J}(v_{i})=\alpha\); \(S:=S\cup\{i\}\); 5. Go back to (2). 
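The following Python sketch illustrates this iterative tie-breaking for a single candidate, under the simplifying assumption of a fair mechanism whose output depends only on the voting pool; the pools and the smallest-median order statistic are toy choices of the example, not data from the paper.

```python
def voting_range(pool, order_pick):
    """Iteratively compute a candidate's voting range.

    pool:       grades in the candidate's voting pool (own grade or proxy, one per voter).
    order_pick: for a pool of size n, the 1-based order statistic to select.
    Each round, the selected grade alpha is appended to the range and one voter
    achieving alpha is removed from the pool.
    """
    pool = sorted(pool)
    R = []
    while pool:
        alpha = pool[order_pick(len(pool)) - 1]
        R.append(alpha)
        pool.remove(alpha)
    return R

# Smallest-median order statistic, as in majority judgment.
smallest_median = lambda n: (n + 1) // 2

# Two candidates with the same final grade (3) but different voting ranges:
R_I = voting_range([3, 5, 2, 5, 1], smallest_median)  # [3, 2, 5, 1, 5]
R_J = voting_range([3, 4, 2, 4, 1], smallest_median)  # [3, 2, 4, 1, 4]
print(R_I, R_J, R_J < R_I)  # lexicographically R_J < R_I, hence J <_psi I
```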
As \(\psi\) is fixed, we will drop its index in the sequel and just use the notation \(R_{J}\). Because the vote associated to each voter in the voting pool for a candidate \(J\) is independent from the ballots of all the other voters, we conclude that: Proposition 6.1 ().: _The voting range associated to \(J\) is well defined. (Proof E.1.)_ Definition 6.2 (\(\psi\)-Ordering).: We declare \(I<_{\psi}J\) at \(\mathbf{v}\) if and only if \(R_{I}(\mathbf{v})<R_{J}(\mathbf{v})\) for the lexicographic order. When all voters are eligible and every voter inputs a grade for every candidate, two well-known ranking functions are the leximin and leximax orderings (Moulin, 1980b). Majority judgment (Balinski and Laraki, 2007), which ranks iteratively according to the smallest median, is another. Corollary 6.3 ().: _The ranking function \(<_{\psi}\) defines a total (hence complete and transitive) order where a tie between two candidates happens only when their voting pools are identical. (Proof 6.3.)_ Observe that, to rank \(I\) and \(J\) using the voting ranges defined by \(\psi\), the only requirement is that the voting pool sizes for \(I\) and \(J\) are equal. The next section extends the construction to non-equal sizes, which is very important in many applications. ### Ranking when voting pools have different sizes (under OC) When the voting pool sizes of two candidates are not initially equal, we will duplicate their respective voting pools until we get the same size. Operationally, this is equivalent to passing to a continuum electorate and normalizing so that each voting pool's size is 100%. This is how LaPrimaire.org and the Paris participatory budget use MJ to compare candidates with juries of different sizes. Definition 6.4 (Duplicate voters).: For a phantom-proxy \(\psi\), a voter \(j\) is the duplicate of \(i\) at the voting profile \(\mathbf{v}\) iff \(v_{i}=v_{j}\) and \(f_{i,J}=f_{j,J}\) for all \(J\). Now let us make the link with the range function defined in the previous section. When \(\psi\) is a proxy and \(\mathbf{v}\) is a voting profile, \(R_{J}(\mathbf{v})\) is a vector \((R_{J}(\mathbf{v})(1),...,R_{J}(\mathbf{v})(d))\) for some \(d\). What happens to this vector if we duplicate the electorate \(k\) times? In general, anything can happen. However, if \(\psi\) satisfies the outer consistency (OC) property, the next proposition provides a nice answer. Assuming (OC) makes sense because (OC) is a desirable property for variable electorates. Recall that it implies in particular the Participation (P), the Blank Vote (BV) and the Silent Ignored (SI) properties. 
Proposition 6.5 ().: _When a phantom-proxy mechanism \(\psi\) verifies (OC), for every voting profile \(\mathbf{v}\), if \(\mathbf{w}\) is obtained by duplicating all the voters \(k\) times, then for \(\mathbf{w}\) the order in which we select for the first time a duplicate representing a new voter is the same as the order in which the voters were selected for \(\mathbf{v}\). (Proof E.4.)_ **Corollary 6.6**: _When a phantom-proxy mechanism \(\psi\) verifies (OC,F) and the voting profile \(\mathbf{v}\) gives us \(I<_{\psi}J\), then any voting profile \(\mathbf{w}\) obtained by duplicating \(k\) times all the voters of \(\mathbf{v}\) also verifies \(I<_{\psi}J\). (Proof E.5.)_ Consequently, all candidates that could be ranked (using the previous section) because their voting pools have equal size can still be ranked after duplication, and will keep the same relative rank as before. We now show how two candidates with non-equal pools can be ranked under (OC). If, for a given \(\mathbf{v}\), \(I\) and \(J\) cannot be ranked by using \(\psi(\mathbf{v})(I)<\psi(\mathbf{v})(J)\), then we have that \(\psi(\mathbf{v})(I)=\psi(\mathbf{v})(J)\) and that the voting pool sizes for \(I\) and \(J\) are non-equal. Let \(n_{I}\) and \(n_{J}\) denote these pool sizes, and let \(\mathcal{N}_{I}\) and \(\mathcal{N}_{J}\) be the sets of voters obtained by duplicating \(\mathcal{N}\) respectively \(n_{J}\) and \(n_{I}\) times. Let \(\mathbf{v}\) be the initial voting profile and let \(\mathbf{v}^{I}\) and \(\mathbf{v}^{J}\) be the voting profiles for \(\mathcal{N}_{I}\) and \(\mathcal{N}_{J}\) respectively. Consequently, the voting pool of \(I\) for \(\mathbf{v}^{I}\) now has the same size as the voting pool of \(J\) for \(\mathbf{v}^{J}\). It follows that the voting range \(R_{I}(\mathbf{v}^{I})\) can be compared to the voting range \(R_{J}(\mathbf{v}^{J})\); as such we can order \(I\) and \(J\) according to \(\psi\). Since \(R_{I}(\mathbf{v}^{I})\) represents \(R_{I}(\mathbf{v})\) and \(R_{J}(\mathbf{v}^{J})\) represents \(R_{J}(\mathbf{v})\), we can therefore, without loss of generality, claim that \(R_{I}(\mathbf{v})<R_{J}(\mathbf{v})\) iff \(R_{I}(\mathbf{v}^{I})<R_{J}(\mathbf{v}^{J})\). **Proposition 6.7**: _The ranking method just described, which we denote \(<_{\psi}\), induces a well-defined total order._ Interestingly, we can view our ranking function as a grading function where the output space is the range space (typically \(\cup_{n}\mathcal{B}^{n}\)), which is a much richer output space than \(\mathcal{B}\). In this richer space, ties are extremely rare. Also, if we assume that the preferred range outcome on a candidate \(J\) for a voter \(i\) that graded \(J\) is \((v_{i}(J))\), and that a voter has a single-peaked preference over that totally ordered space (for any two ranges \(R_{1}\) and \(R_{2}\) of the same size, if \(R_{1}\) and \(R_{2}\) have the same prefix then use single-peakedness on the first differing value, with \(v_{i}(J)\) as the peak), then, thanks to our construction, our richer grading function is strategy-proof. **Theorem 6.8**: _The grading function induced by \(<_{\psi}\) on the richer output space is strategy-proof. (Proof E.3.)_
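To close the section, here is a small Python sketch of the comparison of two candidates with unequal pool sizes via duplication, reusing `voting_range` and `smallest_median` from the sketch above; the pools are again toy values, and the construction assumes, as in the text, a fair (OC) phantom-proxy mechanism.

```python
def compare_by_duplication(pool_I, pool_J, order_pick):
    """Duplicate I's pool n_J times and J's pool n_I times so both pools have
    the same size, then compare the resulting voting ranges lexicographically."""
    dup_I = pool_I * len(pool_J)
    dup_J = pool_J * len(pool_I)
    R_I, R_J = voting_range(dup_I, order_pick), voting_range(dup_J, order_pick)
    if R_I < R_J:
        return "I <_psi J"
    if R_J < R_I:
        return "J <_psi I"
    return "tie (identical normalized pools)"

print(compare_by_duplication([2, 4, 5], [2, 4], smallest_median))  # J <_psi I
```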
## 7. Extensions and Conclusion The model is so rich that what is left to do exceeds what we did. For example: we only characterized the two most natural anonymity properties, while there are many other interesting permutations that transform a \(\mathbf{v}\) into a \(\mathbf{w}\), everything else being equal, such as: * For any \(i,j\in\mathcal{D}''_{J}\): \(w_{i}(J)=v_{j}(J)\) and \(w_{j}(J)=v_{i}(J)\). * For any \(i,j\in\mathcal{D}_{J}\): \(w_{i}(J)=v_{j}(J)\) and \(w_{j}(J)=v_{i}(J)\). We didn't give a full characterization of all the phantom-proxy grading functions. We could have studied a richer proxy class (one that may include the uniform median (Caragiannis et al., 2016; Freeman et al., 2019), excluded by our proxies), but we didn't in this paper because the characterizations (which we have) are much more involved. We didn't discuss all the axioms we want for ranking functions (such as IIA, partial SP, consistency in ranking, etc.). Finally, we could have tried to characterize the methods where voters' weights differ across voters and/or candidates, etc. We have taken the voting rights as given. It is important to understand the optimal or approximately optimal way to determine those rights, given a method. When we have a large set of equally qualified voters who just don't have the time to grade all the candidates, proposing a small number of candidates to each at random seems to be the right solution. (Bentert and Skowron, 2020) computed the optimal way to approximate Borda and minmax rules; LaPrimaire.org, implemented
2304.10946
CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained Language Models
Large pre-trained language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology, has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Our proposed few-shot learning approach uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrated that the LLM-based prediction model achieved significant accuracy with very few or zero samples. Our proposed model, the CancerGPT (with $\sim$ 124M parameters), was even comparable to the larger fine-tuned GPT-3 model (with $\sim$ 175B parameters). Our research is the first to tackle drug pair synergy prediction in rare tissues with limited data. We are also the first to utilize an LLM-based prediction model for biological reaction prediction tasks.
Tianhao Li, Sandesh Shetty, Advaith Kamath, Ajay Jaiswal, Xianqian Jiang, Ying Ding, Yejin Kim
2023-04-18T02:49:53Z
http://arxiv.org/abs/2304.10946v1
# CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained Language Models ###### Abstract Large pre-trained language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology, has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Our proposed few-shot learning approach uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrated that the LLM-based prediction model achieved significant accuracy with very few or zero samples. Our proposed model, the CancerGPT (with \(\sim 124\)M parameters), was even comparable to the larger fine-tuned GPT-3 model (with \(\sim 175\)B parameters). Our research is the first to tackle drug pair synergy prediction in rare tissues with limited data. We are also the first to utilize an LLM-based prediction model for biological reaction prediction tasks. ## 1 Introduction Foundation models have become the latest generation of artificial intelligence (AI) (Moor et al. (2023)). Instead of designing AI models that solve specific tasks one at a time, such foundation models or "generalist" model can be applied to many downstream tasks without specific training. For example, large pre-trained language model (LLM), such as GPT-3 (Brown et al. (2020)) and GPT-4 (OpenAI (2023)), has been a game changer in foundation AI model (Mitchell and Krakauer (2023)). LLM can apply its skills to unfamiliar tasks that it has never been trained for, which is few-shot learning or zero-shot learning. This is due in part to multitask learning, which enables LLM to unintentionally gain knowledge from implicit tasks in its training corpus (Radford et al. (2018)). Although LLM has shown its proficiency in few-shot learning in various fields (Brown et al. (2020)), including natural language processing, robotics, and computer vision (Veit et al. (2017); Brown et al. (2020); Wertheimer and Hariharan (2019)), the generalizability of LLM to unseen tasks in more complex fields such as biology has yet to be fully tested. In order to infer unseen biological reactions, knowledge of participating entities (e.g., genes, cells) and underlying biological mechanisms (e.g., pathways, genetic background, cellular environment) is required. While structured databases encode only a small portion of this knowledge, the vast majority is stored in free-text literature which could be used to train LLMs. Thus, we envision that, when there are limited structured data and limited sample sizes, LLMs can serve as an innovative approach for biological prediction tasks, by extracting prior knowledge from unstructured literature. One of such few-shot biological prediction tasks with a pressing need is a drug pair synergy prediction in understudied cancer types. Drug combination therapy has become a widely accepted strategy for treating complex diseases such as cancer, infectious diseases, and neurological disorders. In many cases, combination therapy can provide better treatment outcomes than single-drug therapy. Predicting drug pair synergy has become an important area of research in drug discovery and development. 
Drug pair synergy refers to the enhancement of the therapeutic effects of two (or more) drugs when used together compared to when each drug is used alone. The prediction of drug pair synergy can be challenging due to a large number of possible combinations and the complexity of the underlying biological mechanisms (Zagidullin et al. (2019)). Several computational methods have been developed to predict drug pair synergy, particularly using machine learning. Machine learning algorithms can be trained on large datasets of in vitro experiment results of drug pairs to identify patterns and predict the likelihood of synergy for a new drug pair. However, most of the data available comes from common cancer types in certain tissues, such as breast and lung cancer; very limited experiment data are available on certain types of tissues, such as bone and soft tissues (Fig. 1). Obtaining cell lines from these tissues can be physically difficult and expensive, which limits the number of training data available for drug pair synergy prediction. This can make it challenging to train machine learning models that rely on large datasets. Early studies in this area have relied on relational information or contextual information to extrapolate the synergy score to cell lines in other tissues, (Chen and Li (2018); Sun et al. (2020); Li et al. (2018); Kuru et al. (2022); Liu et al. (2022)), ignoring the biological and cellular differences in these tissues. Another line of studies has sought to overcome the discrepancy between tissues by utilizing diverse and high-dimensional features, including genomic (e.g., gene expression of cell lines) or chemical profiles (e.g., drug structure) (Preuer et al. (2018); Liu and Xie (2021); Kuru et al. (2022); Hosseini and Zhou (2023); Kim et al. (2021)). Despite the promising results in some tissues (with abundant data), these approaches cannot be applied to tissues with too limited data to adapt its model with the large number of parameters for those high-dimensional features. In this work, we aim to overcome the above challenge by LLMs. We hypothesize that cancer types with limited structured data and discrepant features still have good information in scientific literature. Manually extracting predictive information on such biological entities from literature is a complex task. Our innovative approach is to leverage prior knowledge in scientific literature encoded in LLMs. We built a few-shot drug pair synergy prediction model that transforms the prediction task into a natural language inference task and generate answers based on prior knowledge encoded in LLMs. Our experimental results demonstrate that our LLM-based few-shot prediction model achieved significant accuracy even in zero shot setting (i.e., no training data) and outperformed strong tabular prediction models in most cases. This remarkable few-shot prediction performance in one of the most challenging biological prediction tasks has a critical and timely implication to a broad community of biomedicine because it shows a strong promise in the "generalist" biomedical artificial intelligence (Moor et al. (2023)). ## 2 Related Work ### Drug pair synergy prediction Lots of methods have been proposed to predict drug pair synergy in recent years. Based on the data type to use, these methods can be classified either as a multi-way relational method or as a context-aware method. Multi-way relational methods (Chen and Li (2018); Sun et al. (2020); Liu et al. 
(2022)) use drug and cell line's relational information without any further chemical or gene information as input and predict drug pair's synergy. Context-aware methods (Preuer et al. (2018); Liu and Xie (2021); Kuru et al. (2022); Hosseini and Zhou (2023)) further utilized chemical and gene information from drugs and cell lines to predict drug pair's synergy, which usually contains drug-drug, drug-gene, gene-gene interactions, and cellular environment. These methods usually achieve good performance with rich features on common tissues. However, both approaches do not apply to the cell lines in rare tissues with the limited size of data and cellular information. Kim et al. (2021) uses transfer learning to extend the prediction model trained in common tissues to some of the rare tissues with relatively rich data and cellular features. However, it cannot be utilized for rare tissues with extremely limited data and cellular information. ### Few-shot learning on tabular data Traditional supervised learning algorithms can struggle due to the difficulty in obtaining enough labeled data for classification. Few-shot learning is an emerging field that aims to address this issue by enabling machines to learn from a few examples rather than requiring a large size of labeled data. Meta-learning (Finn et al. (2017); Wang et al. (2023); Gao et al. (2023)) is one technique for few-shot learning. It trains a model on a set of tasks in a way that allows it to quickly learn to solve new, unseen tasks with a few examples. Another technique is data augmentation (Nam et al. (2023); Yang et al. (2022)), which generates new examples by transforming existing data. One promising but less explored direction is to leverage LLMs, particularly when prior knowledge encoded in a corpus of text can be served as a predictive feature. TabLLM (Hegselmann et al. (2023)) is one such framework. It serializes the tabular input into a natural language text and prompts LLM to generate predictions. Leveraging TabLLM, we investigated the effectiveness of LLMs in few-shot learning tasks in biology. ### Language models for biomolecular sequence analysis There has been a growing interest in using language models for biomolecular sequence analysis, and one approach involves the training of language models with biomolecular data Figure 1: Few-shot prediction in biology. A. Different from task-specific approach, large pre-trained language model can perform new tasks which are not been explicitly trained for. B. Drug pair synergy prediction in rare tissues is an important examples of numerous few-shot prediction tasks in biology. C. Large pre-trained language model can be an innovative approach for few-shot prediction in biology thanks to its prior knowledge encoded in its weight. (Madani et al. (2023); NVIDIA (2023)). These models learn the language of biomolecules, such as DNA, RNA, and protein sequences, similar to how GPT-2 (Radford et al. (2018)) or GPT-3 (Brown et al. (2020)) learns human language. However, our study takes a different approach. Rather than training a language model specifically for biomolecular data, we use a language model that has been pre-trained on a corpus of human language text. This pre-trained model is used as a few-shot prediction model for drug pair synergy data, allowing us to make accurate predictions with minimal training data. 
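To make the TabLLM-style serialization mentioned in Section 2.2 concrete, the sketch below shows one way a tabular drug-pair record could be turned into a natural-language prompt. The Python field names are our own assumptions; the sentence template and task instruction follow the prompt described later in Section 5.2, and the snippet is illustrative rather than the authors' implementation.

```python
# Illustrative sketch (not the paper's code) of serializing a tabular drug-pair
# record into a natural-language prompt for an LLM classifier.

def serialize_row(row: dict) -> str:
    """Turn one tabular record into a sentence-style description."""
    return (
        f"The first drug is {row['drug1']}. "
        f"The second drug is {row['drug2']}. "
        f"The cell line is {row['cell_line']}. "
        f"Tissue is {row['tissue']}. "
        f"The first drug's sensitivity using relative inhibition is {row['ri1']}. "
        f"The second drug's sensitivity using relative inhibition is {row['ri2']}."
    )

def build_prompt(row: dict) -> str:
    """Wrap the serialized record in a binary-classification instruction."""
    return (
        "Decide in a single word if the synergy of the drug combination "
        "in the cell line is positive or not. "
        + serialize_row(row)
        + " Synergy:"
    )

if __name__ == "__main__":
    example = {"drug1": "lonidamine", "drug2": "717906-29-1",
               "cell_line": "A-673", "tissue": "bone",
               "ri1": 0.568, "ri2": 28.871}
    print(build_prompt(example))
```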
By leveraging the power of pre-trained language models, we are able to make use of existing resources and obtain generalizability to diverse biological prediction tasks beyond biomolecular sequence analysis.

## 3 Results

We developed CancerGPT, a few-shot drug pair synergy prediction model for rare tissues. Leveraging an LLM-based tabular data prediction model (Hegselmann et al. (2023)), we first converted the prediction task into a natural language inference task and generated answers using prior knowledge from the scientific literature encoded in the LLM's pre-trained weight matrices (Section 5.3, Fig. 2). We presented our strategy to adapt the LLM to our task with only a few shots of training data in each rare tissue in Section 5.5 and Fig. 3. To evaluate the performance of our proposed CancerGPT model and other LLM-based models, we conducted a series of experiments in which we compared them with other tabular models (Section 6). We measured accuracy using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC) under different settings. We considered different few-shot learning scenarios, where the model is provided with a limited number \(k\) of training samples to learn from (\(k\)=0 to 128). By varying the number of shots, we can examine the model's ability to adapt and generalize with minimal training data. Next, we investigated the performance of CancerGPT and other LLM-based models across different tissue types. Since cancer is a highly heterogeneous disease with various subtypes, it is crucial for the model to be able to accurately predict outcomes in diverse tissue contexts. We then investigated whether the LLM's reasoning for its prediction is valid by checking its arguments against scientific literature.

Figure 2: Study workflow. (1) Drug pair synergy data; (2) convert tabular input and prediction task to natural text; (3) predict drug pair synergy. We first converted the tabular input to natural text and created a task-specific prompt (Section 5.2). The prompt was designed to generate binary class predictions (e.g., _“Positive”, “Not positive”_). We fine-tuned the LLMs (GPT-2 and GPT-3) with \(k\) shots of data in rare tissues (Section 5.5). We further tailored GPT-2 by fine-tuning it with a large amount of common tissue data, in order to adapt it to the context of drug pair synergy prediction (CancerGPT, Section 5.4). We evaluated and compared the prediction models with different numbers of shots and tissues (Section 6). We investigated the LLM's reasoning based on factual evidence.
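As a concrete illustration of how these metrics are computed, the sketch below (our example, not the authors' evaluation script) binarizes Loewe synergy scores at the \(>\)5 threshold used in this paper and scores predicted probabilities with scikit-learn's AUPRC and AUROC.

```python
# Illustrative sketch of the evaluation metrics described above: binarize Loewe
# synergy scores at the paper's threshold (>5 means "positive") and score
# predicted probabilities with AUPRC and AUROC via scikit-learn.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def binarize_loewe(scores, threshold=5.0):
    """Return 1 where the Loewe synergy score exceeds the threshold, else 0."""
    return (np.asarray(scores) > threshold).astype(int)

def evaluate(loewe_scores, predicted_prob):
    """Return (AUPRC, AUROC) for predicted positive-synergy probabilities."""
    y_true = binarize_loewe(loewe_scores)
    return (average_precision_score(y_true, predicted_prob),
            roc_auc_score(y_true, predicted_prob))

if __name__ == "__main__":
    # Toy numbers only, not results from the paper.
    loewe = [46.8, -3.1, 7.2, 0.4, 12.9, -15.0]
    prob = [0.91, 0.22, 0.35, 0.40, 0.77, 0.10]
    auprc, auroc = evaluate(loewe, prob)
    print(f"AUPRC={auprc:.3f}  AUROC={auroc:.3f}")
```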
### Accuracy \begin{table} \begin{tabular}{r r r r r r r r r r} \hline \hline & \multicolumn{1}{c}{Methods} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{64} & \multicolumn{1}{c}{128} \\ \hline & XGBoost & 0.026 & - & - & - & - & - & - & - & - \\ Pancreas & TabTransformer & 0.056 & - & - & - & - & - & - & - \\ (\(n_{0}\)=38, \(n_{1}\)=1) & CancerGPT & 0.033 & - & - & - & - & - & - & - \\ & GPT-2 & 0.032 & - & - & - & - & - & - & - \\ & GPT-3 & **0.111** & - & - & - & - & - & - & - \\ & XGBoost & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & - & - \\ Endometrium & TabTransformer & 0.674 & 0.889 & 0.903 & **0.948** & **0.938** & **0.962** & - & - \\ (\(n_{0}\)=36, \(n_{1}\)=32) & CancerGPT & 0.564 & 0.668 & 0.676 & 0.831 & 0.686 & 0.737 & - & - \\ & GPT-2 & 0.408 & 0.808 & 0.395 & 0.383 & 0.389 & 0.717 & - & - \\ & GPT-3 & **0.869** & **1** & **0.947** & 0.859 & 0.799 & 0.859 & - & - \\ & XGBoost & 0.132 & 0.132 & 0.132 & 0.132 & 0.132 & 0.132 & 0.12 & 0.12 \\ Liver & TabTransformer & 0.13 & **0.128** & 0.147 & 0.189 & 0.265 & 0.168 & 0.169 & 0.234 \\ (\(n_{0}\)=192, \(n_{1}\)=21) & CancerGPT & 0.136 & 0.102 & 0.13 & 0.147 & 0.252 & 0.21 & 0.197 & 0.187 \\ & GPT-2 & **0.5** & 0.099 & **0.151** & **0.383** & **0.429** & **0.401** & **0.483** & 0.398 \\ & GPT-3 & 0.185 & 0.086 & 0.096 & 0.125 & 0.124 & 0.314 & 0.362 & **0.519** \\ & XGBoost & 0.243 & 0.243 & 0.243 & 0.243 & 0.235 & 0.235 & 0.264 & 0.271 \\ Soft tissue & TabTransformer & 0.273 & 0.287 & **0.462** & **0.422** & **0.526** & 0.571 & 0.561 & 0.64 \\ (\(n_{0}\)=269, \(n_{1}\)=83) & CancerGPT & **0.314** & **0.315** & 0.338 & 0.383 & 0.403 & 0.464 & 0.469 \\ & GPT-2 & 0.259 & 0.298 & 0.254 & 0.262 & 0.235 & 0.297 & 0.254 & 0.206 \\ & GPT-3 & 0.263 & 0.194 & 0.28 & 0.228 & 0.363 & **0.618** & **0.638** & **0.734** \\ & XGBoost & 0.104 & 0.104 & 0.104 & 0.104 & 0.104 & 0.104 & 0.09 & 0.094 \\ Stomach & TabTransformer & 0.261 & **0.371** & **0.396** & **0.383** & **0.294** & **0.402** & **0.45** & **0.465** \\ Stomach & CancerGPT & **0.3** & 0.297 & 0.316 & 0.325 & 0.269 & 0.308 & 0.297 & 0.312 \\ (\(n_{0}\)=1081, \(n_{1}\)=109) & GPT-2 & 0.116 & 0.124 & 0.099 & 0.172 & 0.165 & 0.107 & 0.152 & 0.131 \\ & GPT-3 & 0.078 & 0.106 & 0.17 & 0.37 & 0.1 & 0.19 & 0.219 & 0.181 \\ & XGBoost & 0.186 & 0.186 & 0.186 & 0.186 & 0.186 & 0.197 & 0.199 & 0.209 \\ & TabTransformer & 0.248 & **0.264** & **0.25** & **0.278** & **0.274** & 0.249 & **0.293** & **0.291** \\ Urinary tract (\(n_{0}\)=1996, \(n_{1}\)=462) & CancerGPT & 0.241 & 0.226 & 0.246 & 0.239 & 0.256 & **0.271** & 0.266 & 0.269 \\ & GPT-2 & 0.191 & 0.192 & 0.188 & 0.156 & 0.193 & 0.185 & 0.183 & 0.185 \\ & GPT-3 & **0.27** & 0.228 & 0.222 & 0.201 & 0.206 & 0.2 & 0.24 & 0.272 \\ & XGBoost & 0.064 & 0.064 & 0.064 & 0.064 & 0.064 & 0.064 & 0.064 & 0.064 & 0.064 \\ Bone & TabTransformer & **0.123** & 0.12 & 0.121 & 0.115 & 0.102 & **0.13** & **0.129** & 0.121 \\ & CancerGPT & 0.119 & **0.115** & **0.125** & **0.116** & **0.115** & 0.11 & 0.114 & 0.125 \\ (\(n_{0}\)=3732, \(n_{1}\)=253) & GPT-2 & 0.063 & 0.094 & 0.057 & 0.081 & 0.052 & 0.071 & 0.057 & 0.065 \\ & GPT-3 & 0.064 & 0.051 & 0.045 & 0.058 & 0.068 & 0.087 & 0.101 & **0.181** \\ \hline \hline \end{tabular} \end{table} Table 1: AUPRC of \(k\)-shot learning on seven tissue sets. \(n_{0}\):=total number of non-synergistic samples (not positive), \(n_{1}\):=total number of synergistic samples (positive). 
We used 20% data as a test set in each rare tissue, while ensuring the binary labels were equally represented. We evaluated the accuracy of our synergy prediction models. We calculated the AUPRC and AUROC of the LLM-based models (CancerGPT, GPT-2, GPT-3) and baseline models (XGBoost, TabTransformer) (Table 1, 2). Due to an imbalance in positive and non-positive labels, we reported both AUPRC and AUROC. Details on the classification task and threshold of synergy are discussed in Section 6.3. Number of training data and accuracyOverall, the LLM-based models (CancerGPT, GPT-2, GPT-3) achieved comparable or better accuracy in most of the cases compared to baselines. In the zero-shot scenario, the LLM-based models generally had higher accuracy than the baseline models in all experiments except stomach and bone. As the number of shots increased, we observed mixed patterns across various tissues and models. TabTransformer consistently exhibited an increase in accuracy with more shots. CancerGPT showed \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & \multicolumn{8}{c}{Number of shots} \\ \cline{3-10} & Methods & 0 & 2 & 4 & 8 & 16 & 32 & 64 & 128 \\ \hline \multirow{10}{*}{Pancreas} & XGBoost & 0.5 & & & & & - & - & - \\ & TabTransformer & 0.553 & & & & & - & - & - \\ & CancerGPT & 0.237 & & & & & & - & - \\ & GPT-2 & 0.211 & & & & & - & - & - \\ & GPT-3 & **0.789** & & & & & & & \\ & XGBoost & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & - & - \\ & TabTransformer & 0.694 & 0.857 & 0.878 & **0.939** & **0.939** & **0.959** & - & - \\ Endometrium & CancerGPT & 0.489 & 0.693 & 0.714 & 0.735 & 0.612 & 0.612 & - & - \\ & GPT-2 & 0.265 & 0.816 & 0.224 & 0.184 & 0.204 & 0.612 & - & - \\ & GPT-3 & **0.837** & **1** & **0.949** & 0.898 & 0.878 & 0.898 & - & - \\ & XGBoost & 0.587 & 0.587 & 0.587 & 0.587 & 0.587 & 0.574 & 0.574 \\ & TabTransformer & 0.535 & 0.506 & 0.526 & 0.535 & 0.609 & 0.647 & 0.702 & 0.804 \\ Liver & CancerGPT & 0.615 & 0.468 & 0.59 & 0.641 & **0.782** & **0.776** & **0.737** & 0.737 \\ & GPT-2 & **0.731** & 0.449 & 0.558 & **0.66** & 0.679 & 0.763 & 0.731 & 0.731 \\ & GPT-3 & 0.615 & 0.49 & 0.542 & 0.583 & 0.474 & 0.731 & **0.737** & **0.91** \\ & XGBoost & 0.491 & 0.491 & 0.491 & 0.491 & 0.454 & 0.476 & 0.542 & 0.552 \\ & TabTransformer & 0.557 & 0.566 & **0.709** & 0.727 & **0.788** & **0.802** & 0.83 & 0.835 \\ Soft tissue & CancerGPT & **0.656** & 0.646 & 0.68 & **0.734** & 0.725 & 0.754 & 0.8 & 0.795 \\ & GPT-2 & 0.546 & 0.535 & 0.519 & 0.56 & 0.427 & 0.577 & 0.456 & 0.384 \\ & GPT-3 & 0.517 & 0.406 & 0.6 & 0.444 & 0.607 & 0.82 & **0.866** & **0.889** \\ & XGBoost & 0.529 & 0.529 & 0.529 & 0.529 & 0.529 & 0.529 & 0.476 & 0.508 \\ & TabTransformer & **0.804** & **0.863** & **0.855** & **0.853** & **0.812** & **0.855** & **0.885** & **0.869** \\ Stomach & CancerGPT & 0.794 & 0.792 & 0.796 & 0.794 & 0.785 & 0.787 & 0.824 & 0.808 \\ & GPT-2 & 0.551 & 0.569 & 0.521 & 0.516 & 0.589 & 0.538 & 0.469 & 0.566 \\ & GPT-3 & 0.419 & 0.575 & 0.724 & 0.769 & 0.534 & 0.69 & 0.742 & 0.724 \\ & XGBoost & 0.494 & 0.494 & 0.494 & 0.494 & 0.494 & 0.526 & 0.53 & 0.544 \\ & TabTransformer & 0.599 & 0.612 & 0.604 & 0.625 & 0.601 & 0.587 & 0.623 & 0.622 \\ Urinary tract & CancerGPT & 0.578 & 0.561 & 0.579 & 0.577 & 0.589 & 0.569 & 0.593 & 0.609 \\ & GPT-2 & 0.526 & 0.528 & 0.532 & 0.397 & 0.515 & 0.452 & 0.469 & 0.566 \\ & GPT-3 & **0.645** & 0.57 & 0.556 & 0.496 & 0.508 & 0.516 & 0.531 & 0.572 \\ & XGBoost & 0.499 & 0.499 & 0.499 & 0.499 & 0.499 & 0.499 & 0.499 & 0.499 & 0.499 \\ & 
TabTransformer & **0.706** & **0.705** & **0.724** & **0.697** & **0.655** & **0.689** & **0.708** & 0.696 \\ Bone & CancerGPT & 0.625 & 0.648 & 0.693 & 0.653 & 0.636 & 0.658 & 0.681 & 0.68 \\ & GPT-2 & 0.507 & 0.616 & 0.471 & 0.579 & 0.421 & 0.552 & 0.476 & 0.518 \\ & GPT-3 & 0.498 & 0.415 & 0.341 & 0.429 & 0.485 & 0.605 & 0.62 & **0.794** \\ \hline \hline \end{tabular} \end{table} Table 2: AUROC of \(k\)-shot learning on seven tissues sets. higher accuracy with more shots in the endometrium and soft tissue, and GPT-3 showed higher accuracy with more shots in the liver, soft tissues, and bone, indicating that the information gained from a few shots of data complements the prior knowledge encoded in CancerGPT and GPT-3. However, the LLM-based models sometimes did not show significant improvements in accuracy in certain tissues, such as the stomach and urinary tract, suggesting that the additional training data do not always improve the LLM-based models' performance. With the maximum number of shots (\(k\)=128), the LLM-based model, specifically GPT-3, was on par with TabTransformer, achieving the highest accuracy with the pancreas, liver, soft tissue, and bone, while TabTransformer achieved the best accuracy with endometrium, stomach, and urinary tract. Tissue types and accuracyThe accuracy of the models varied depending on the tissue types, as each tissue possessed unique characteristics and had different data size. In pancreas and endometrium tissues, GPT-3 showed high accuracy with only a few shots (\(k\)=0 or 2). Generally, the cell lines from the two tissues are difficult to obtain and have a limited number of well-established cell lines, which makes them less investigated. For example, the pancreas is located deep within the abdomen, making it difficult to access and isolate cells without damaging them. The endometrium is a complex tissue that undergoes cyclic changes during the menstrual cycle, and this dynamic process complicates the cell culturing process. Due to this limited training data, few-shot drug pair synergy prediction in these tissues required even higher generalizability. In the liver, soft tissue, and bone, GPT-3 again achieved the highest accuracy than any other models, including one that trained with common tissues (TabTransformer, CancerGPT). This may be because these tissues have unique cellular characteristics specific to their tissue of origin that training with common tissues may not help predict accurately. For example, hepatic cell lines (originated from liver tissue) are often used in research on drug metabolism and toxicity and have unique drug response characteristics due to high expression of drug-metabolizing enzymes such as cytochrome P450s (Guo et al. (2011)). Bone cell lines have bone-specific signaling pathways that can affect drug responses, and the extracellular matrix composition and structure in bone tissue can also impact drug delivery and efficacy (Lin et al. (2020)). On the other hand, models trained with common tissues (TabTransformer, CancerGPT) achieved the best accuracy in the stomach and urinary tract tissues of all \(k\), indicating that the prediction learned from common tissues can be extrapolated to these tissues. Particularly, CancerGPT achieved the highest accuracy with no training sample (\(k\)=0) in the stomach. Comparing LLM-based modelsWhen comparing LLM-based models, CancerGPT and GPT-3 demonstrated superior accuracy compared to GPT-2 in most tissues. 
GPT-3 exhibited higher accuracy than CancerGPT in tissues with limited data or unique characteristics, while CancerGPT performed better than GPT-3 in tissues with less distinctive characteristics, such as the stomach and urinary tract. The higher accuracy of CancerGPT compared to GPT-2 highlights that a well-balanced adjustment to specific tasks can increase accuracy while maintaining generalizability. However, the benefits of such adjustments may diminish with larger LLMs, such as GPT-3 (175B parameters), in situations where more generalizability is required. The fact that CancerGPT, with far fewer parameters (124M), achieved accuracy comparable to GPT-3 (175B parameters) implies that further fine-tuning of GPT-3 could achieve even higher accuracy.

### Fact check of the LLM's reasoning

We evaluated whether the LLM can provide the biological reasoning behind its prediction. In this experiment, we used zero-shot GPT-3 because the other fine-tuned LLM-based models compromised their language generation ability during fine-tuning and were not able to provide coherent responses. To do this, we randomly selected one true positive prediction and examined whether its biological rationale was based on factual evidence or mere hallucination. Our example was the drug pair AZD4877 and AZD1208 at cell line T24 for urinary tract tissue. We prompted the LLM with _"Could you provide details why are the drug1 and drug2 synergistic in the cell line for a given cancer type?"_. Details on prompt generation are discussed in Supplementary 1. We evaluated the generated answer by comparing it with existing scientific literature. We found that the LLM provided mostly accurate arguments, except for two cases (Table 3) for which no scientific literature exists. By combining these individual scientific facts, the LLM inferred the unseen synergistic effect. Generally, drugs targeting non-overlapping proteins in similar pathways are more likely to be synergistic (Cheng et al. (2019); Tang and Gottlieb (2022)). In this case, both AZD4877 and AZD1208 target similar pathways that inhibit tumor cell division without overlapping protein targets. The Loewe synergy score of this pair at T24 was 46.82, indicating a strong positive synergistic effect.

### Example of prediction results

As an example, we listed predicted synergistic drug pairs for stomach and soft tissue using CancerGPT (Tables S3.1, S3.2) and for bone and liver tissue using GPT-3 (Tables S3.3, S3.4). We randomly selected two true positive, false positive, true negative, and false negative prediction examples. We found that the Loewe synergy scores of the true negative and false negative examples were close to the threshold we used to categorize the label (i.e., Loewe score \(>\)5). This suggests that accuracy may vary significantly with the threshold used to determine positive synergy. Setting more extreme thresholds (e.g., \(>\)10, \(>\)30), as in previous models (Kim et al. (2021); Kuru et al. (2022); Hosseini and Zhou (2023)), may increase the prediction accuracy.

## 4 Discussion

**Summary.** Our study investigates the potential of LLMs as a widely applicable few-shot prediction model in the field of biology. Specifically, we propose a new few-shot model for predicting drug pair synergy, which can be used in rare tissues with few or no training samples available. We transformed tabular data prediction into natural language inference tasks and fine-tuned LLMs (GPT-2, GPT-3) with very few samples in each tissue.
The CancerGPT model, which was further tuned with a large amount of common tissue data, showed comparable accuracy to the few-shot tuned GPT-3 model, indicating that tailoring \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline \multicolumn{2}{|c|}{Excerpt of the generated answer} & Fact check and reference \\ \hline _“The combination of AZD-4877 and AZD1208 has been studied in T24 cells...to be synergistic in reducing bladder cancer cell growth and metastasis”_ & False. No study conducted on this drug pair \\ \hline _“The combination was also found to target multiple pathways involved in the growth and spread of bladder cancer cells.”_ & True. AZD1208 is a PIM1 inhibitor. PIM1 is overexpressed in bladder cancer initiation and progression (Guo et al. (2010)). AZD4877 is a drug designed to target bladder cancer (Jones et al. (2013)). \\ \hline _“...Specifically, AZD-4877 was found to inhibit the activation of proteins involved in the promotion of tumor growth...”_ & True. AZD4877 is a drug designed to target bladder cancer (Jones et al. (2013)). \\ \hline _“...AZD1208 was found to inhibit proteins associated with the inhibition of tumor growth.”_ & True. AZD1208 inhibits the cell growth by suppressing p70S6K, 4EBP1 phosphorylation, and messenger RNA translation (in acute myeloid leukemia) (Cortes et al. (2018)). \\ \hline _“This combination was also effective at reducing the production of inflammatory mediators such as cytokines, which are known to contribute to tumor progression.”_ & False. AZD1208 is a pan-PIM kinase inhibitor, and PIM kinases are downstream effectors of cytokine (National Cancer Institute (2011)). However, AZD4877 has no evidence in reducing inflammatory mediators. \\ \hline _“...these two drugs have been shown to reduce levels of apoptosis inhibitors, which can also play a role in tumor progression.”_ & True. AZD1208 induce cell apoptosis (Cervantes-Gomez et al. (2019)). AZD4877 is a inhibitor of Eg5, which promotes cell apoptosis (Borthakur et al. (2009)). \\ \hline \end{tabular} \end{table} Table 3: Example of generated answer when the LLM was asked to provide its reasoning for its prediction GPT-3 to specific tasks could further improve prediction accuracy. The LLM's reasoning for its prediction revealed that it implicitly infers unseen synergistic effects by combining several independent scientific facts. Why drug pair synergy prediction to evaluate LLMsThe prediction of drug pair synergy in uncommon tissues serves as an excellent benchmark task for evaluating LLMs in few-shot learning within the field of biology. This prediction requires incorporating multiple pieces of information, such as drug and cell line, as well as the sensitivity of drugs to the cell lines, in order to infer the synergistic effects. While detailed information on these entities can be found in scientific papers, the interaction effect, or synergistic effect, is primarily available through biological experiments. To effectively assess LLMs' inference capabilities, one must employ a prediction task where the ground truth is not explicitly available in text format but can be determined through alternative sources for model evaluation. Typically, drug pair synergy scores are obtained through high-throughput testing facilities involving robot arms (He et al. (2018)). Therefore, individual records of the experiments are seldom recorded in academic literature, decreasing the likelihood of their use as training data for LLMs. 
Additionally, few studies have been conducted on synergy prediction models for rare tissues, and their synergy prediction outcomes are not explicitly stated in text format. Another similar task is predicting the sensitivity of a single drug in a cell line; however, since the sensitivity of individual drugs is extensively researched and well documented in publications, the LLM may merely recall the answer from text rather than infer an unseen task.

**Comparison to existing drug pair synergy prediction models.** It should be noted that it was not possible to compare our LLM-based models with previous drug pair synergy prediction models. The majority of those models require high-dimensional features of drugs and cells (e.g., genomic or chemical profiles), along with a substantial amount of training data, even the one specifically designed for rare tissues (Kim et al. (2021)). This kind of data is not easily accessible for rare tissues, which makes it challenging to carry out a meaningful comparison. Our model is designed to address a common but often overlooked situation in which we have limited features and data. Thus, we compared the LLM-based models with other tabular models that share the same set of inputs.

**Contribution.** The contribution of our study can be summarized as follows. In the area of drug pair synergy prediction in rare tissues, our study is the first to predict drug pair synergy for tissues with very limited data and features, which previous prediction models have neglected. This breakthrough in drug pair synergy prediction could have significant implications for drug development in these cancer types. By accurately predicting which drug pairs will have a synergistic effect in these tissues, in which cell lines are expensive to obtain, biologists can zoom in directly on the most probable drug pairs and perform in vitro experiments in a cost-effective manner. Our study also delivers generalizable insights about LLMs in the broader context of biology. To the best of our knowledge, our study is the first to investigate the use of LLMs as a few-shot inference tool based on prior knowledge in the field of biology, where much of the latest information is presented in unstructured free text (such as scientific literature). This innovative approach could have significant implications for advancing computational biology where obtaining abundant training data is not readily possible. By leveraging the vast amounts of unstructured data available in the field, LLMs can help researchers bypass the challenge of limited training data when building data-driven computational models. Furthermore, this LLM-based few-shot prediction approach could be applied to a wide range of diseases beyond cancer whose study is currently limited by the scarcity of available data. For instance, this approach could be used in infectious diseases, where the prompt identification of new treatments and diagnostic tools is crucial. LLMs could help researchers quickly identify potential drug targets and biomarkers for these diseases, resulting in faster and more effective treatment development.

**Limitations.** The present study, while aiming to showcase the potential of LLMs as a few-shot prediction model in the field of biology, is not without its limitations. To fully establish the generalizability of LLMs as a "generalist" artificial intelligence, a wider range of biological prediction tasks must be undertaken for validation.
Additionally, it is crucial to investigate how the information gleaned from LLMs complements the existing genomic or chemical features that have traditionally been the primary source of predictive information. In future research, we plan to delve deeper into this aspect and develop an ensemble method that effectively utilizes both existing structured features and new prior knowledge encoded in LLMs. Furthermore, while we observed that GPT-3's reasoning was similar to our own when fact-checking its argument with scientific literature in one example, it is important to note that the accuracy of its arguments cannot always be verified and may be susceptible to hallucination. It is reported that LLMs can also contain biases that humans have (Schramowski et al. (2022)). Therefore, further research is necessary to ensure that the LLM's reasoning is grounded in factual evidence. Despite these limitations, our study provides valuable insights into the potential of LLMs as a few-shot prediction model in biology and lays the groundwork for future research in this area. ## 5 Method ### Problem Formulation ObjectiveOur objective is to predict whether a drug pair in a certain cell line has a synergistic effect, particularly focusing on rare tissues with limited training samples. Given an input \[x=\{d_{1},d_{2},c,t,ri_{1},ri_{2}\}\] of drug pair \((d_{1},d_{2})\), cell line \(c\), tissue \(t\), and the sensitivity of the two drugs using relative inhibition, the prediction model is \[y\approx f(x)\] where \(y\) is the binary synergy class (1 if positive synergy; 0 otherwise). Prior research (Liu et al. (2022); Hosseini and Zhou (2023)) has employed three different scenarios for predicting drug pair synergy (random split, stratified by cell lines, stratified by drug combinations). Our task is to predict synergy when the data are stratified by tissue, which is a subset of cell lines. Why tabular inputAs discussed in Section 2, relationships learned in a tissue cannot be well generalizable to other tissues that have different cellular environments. This biological difference poses a challenge in predicting drug pair's synergy in tissues with a limited number of samples. The limited sample size makes it even more difficult to incorporate typical cell line features, such as gene expression level, which has large dimensionality (e.g., \(\sim\) 20,000 genes). Due to this data challenge, the drug pair synergy prediction model is then reduced to build a prediction model with limited samples (few or zero-shot learning) with only limited tabular input feature types. Specific input features were described in Section 6. ### Synergy prediction models based on Large pre-trained language models Converting tabular input to natural textTo use an LLM for tabular data, the tabular input and prediction task must be transformed into a natural text. For each instance of tabular data (Fig. 2), we converted the structured features into text. For example, given the feature string (e.g., "drug1", "drug 2", "cell line", "tissue", "sensitivity1", "sensitivity2") and its value (e.g., "lonidamine", "717906-29-1", "A-673", "bone", "0.568", "28.871"), we converted the instance as _"The first drug is AZD1775. The second drug is AZACTICIDINE. The cell line is SF-295. Tissue is bone. The first drug's sensitivity using relative inhibition is 0.568. The second drug's sensitivity using relative inhibition is 28.871."_ Other alternative ways to convert the tabular instance into the natural text are discussed in previous papers (Li et al. 
(2020); Narayan et al. (2022)). Converting prediction task into natural textWe created a prompt that specifies our tasks and guides the LLM to generate a label of our interest. We experimented with multiple prompts. One example of the prompts we created was _"Determine cancer drug combination synergy for the following drugs. Allowed synergies: Positive, Not positive. Tabular Input. Synergy:"_. As our task is a binary classification, we created the prompt to only generate binary answers (_"Positive", "Not positive"_). Comparing these multiple prompts (Supplementary 1), the final prompt we used in this work was _"Decide in a single word if the synergy of the drug combination in the cell line is positive or not._ {{_Tabular Input_ }}_. Synergy:"_. ### LLM-based prediction model Large pre-trained language modelsWe built our prediction models by tuning GPT-2 and GPT-3 into our tasks (Fig. 2). GPT-2 is a Transformer-based large language model which was pre-trained on a very large corpus of English data without human supervision. It achieved state-of-the-art results on several language modeling datasets in a zero-shot setting when it was released, and it is the predecessor of GPT-3 and GPT-4. GPT-2 (Radford et al. (2018)) has several versions with different sizes of parameters, GPT-2, GPT-Medium, GPT-Large, and GPT-XL. We used GPT-2 with the smallest number of parameters (regular GPT-2, 124 million) in this work to make the model trainable on our server. To adjust the model for a binary classification task, we added a linear layer as a sequence classification head on top of GPT-2, which uses the last token of the output of GPT-2 to classify the input. The cross-entropy loss was used to optimize the model during the fine-tuning process (discussed below). GPT-3 (Brown et al. (2020)) is a Transformer-based autoregressive language model with 175 billion parameters, which achieved state-of-the-art performance on many zero-shot and few-shot tasks when it was released. GPT-3.5, including ChatGPT (OpenAI (2022)), a famous fine-tuned model from GPT-3.5, is an improved version of GPT-3. However, the GPT-3 model and its parameters are not publicly available. Although the weight of the GPT-3 model is undisclosed, OpenAI offers an API (OpenAI (2021)) to fine-tune the model and evaluate its performance. We utilized this API to build drug pair synergy prediction models through \(k\)-shot fine-tuning. There are four models provided by OpenAI for fine-tuning, Davinci, Curie, Babbage, and Ada, of which Ada is the fastest model and has comparable performance with larger models for classification tasks. For that reason, we use GPT-3 Ada as our classification model. After uploading the train data, the API adjusted the learning rate, which is 0.05, 0.1, or 0.2 multiplied by the original learning rate based on the size of the data, and fine-tuned the model for four epochs. A model of the last epoch was provided for further evaluation. ### CancerGPT We further tailored GPT-2 by fine-tuning it with a large amount of common tissue data, in order to adjust GPT-2 in the context of drug pair synergy prediction. We named this model CancerGPT. CancerGPT used the same structure as the modified GPT-2 mentioned above. A linear layer was added to the top of GPT-2, which uses the last token of the GPT-2 output to predict the label. To use the pre-trained GPT-2 model, the same tokenizer was used as GPT-2. Left padding was used to ensure the last token was from the prompt sentence. 
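The following is a minimal sketch, using the Hugging Face Transformers library, of this kind of setup: a GPT-2 backbone with a linear sequence classification head, left padding so the last token comes from the prompt, and a cross-entropy loss. It is illustrative only and omits the fine-tuning loop and the hyperparameters reported later.

```python
# Illustrative sketch of a GPT-2 binary classifier for serialized drug-pair text,
# mirroring the setup described above: a linear classification head on top of
# GPT-2 that reads the last token, with left padding and cross-entropy loss.
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.padding_side = "left"          # keep the last token inside the prompt
tokenizer.pad_token = tokenizer.eos_token

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

prompts = [
    "Decide in a single word if the synergy of the drug combination in the "
    "cell line is positive or not. The first drug is lonidamine. ... Synergy:",
]
labels = torch.tensor([1])               # 1 = positive synergy, 0 = not positive

batch = tokenizer(prompts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()                  # gradients for one illustrative step
print(float(outputs.loss))
```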
The cross-entropy loss was used to optimize the model. CancerGPT was first fine-tuned to learn the relational information between drug pairs from common tissues, similar to collaborative filtering (Suhavilai et al. (2018)) (Fig. 3). This approach was based on the assumption that certain drug pairs exhibit synergy regardless of the cellular context, and therefore, the relational information between drug pairs in common tissues can be used to predict synergy in new cell lines in different tissues (Hosseini and Zhou (2023)). Additionally, we incorporated information on the sensitivity of each individual drug to the given cell line, using relative inhibition score as a measure of sensitivity (Zheng et al. (2021)). By doing so, we were able to gather a more detailed and nuanced understanding of the relationship between drugs and cell lines. Subsequently, we utilized CancerGPT as one of the pre-trained LLMs and fine-tuned to \(k\) shots of data in each rare tissue (as discussed in the following section). All the LLM models use the tabular input that was converted to natural text and share the same prompt. ### \(k\)-shot fine-tuning strategy The LLM-based models had different training and fine-tuning strategies (Fig. 3). Samples of common tissues were split into 80% train data and 20% validation data for CancerGPT. The models were trained using train data and evaluated by validation data to determine the models with specific hyperparameters to be used for further fine-tuning on rare tissues. For the GPT-2 and GPT-3 based prediction models, we directly used pre-trained parameters from GPT-2 (Radford et al. (2018)) using Huggingface's Transformers library (Wolf et al. (2020)) and GPT-3 Ada from OpenAI (Brown et al. (2020)) respectively. All these models were then fine-tuned with \(k\) shots of data in each of the rare tissues. For bone, urinary tract, stomach, soft tissues, and liver, we performed experiments with \(k\) from \([0,2,4,8,16,32,64,128]\). For endometrium and pancreas, because of the limited number of data, we implemented experiments with \(k\) from \([0,2,4,8,16,32]\) from the endometrium, and only zero shot (\(k\) = 0) for the pancreas. With the limited number of shots, a careful balance of binary labels in the train and test set was critical. We partitioned the data into 80% for training and 20% for testing in each rare tissue, while ensuring the binary labels were equally represented in both sets. We randomly selected \(k\) shots from the training for fine-tuning, while maintaining consistency with previously selected shots and adding new ones. Specifically, we maintained the previously selected \(k\) shots in the training set and incremented additional \(k\) shots to create \(2\times k\) shots. The binary label distribution in each \(k\) shot set followed that of the original data, with at least one positive and one negative sample included in each set. For evaluation stability, the test data was consistent across different shots for each tissue. Figure 3: Training strategy of baseline and proposed LLM-based models. General tabular models and CancerGPT were first trained with samples from common tissues then \(k\)-shot fine-tuned with each tissue of interest. GPT-2 and GPT-3 are pre-trained models, and we fine-tuned them with \(k\) shots of data in each tissue. ## 6 Experiments ### Dataset We utilized a publicly accessible extensive database of drug synergy from DrugComb Portal (Zagidullin et al. 
(2019)), which is an open-access data portal where the results of drug combination screening studies for a large variety of cancer cell lines are accumulated, standardized, and harmonized. The database contains both drug sensitivity rows and drug pair synergy rows. After filtering the available drug pair synergy rows, the dataset contains 4,226 unique drugs and 288 cell lines, with a total of 718,002 drug pair synergy rows. We employed the Loewe synergy score, which ranges from -100 (antagonistic effect) to 75 (strong synergistic effect), for drug combination synergy (Greco et al. (1995)). The Loewe synergy score quantifies the excess over the expected response if the two drugs were the same compound (Ianevski et al. (2017); Yadav et al. (2015)). In this paper, we focused on cell lines from rare tissues. We defined rare tissues as those with fewer than 4,000 samples, which include the pancreas (n=39), endometrium (n=68), liver (n=213), soft tissues (n=352), stomach (n=1,190), urinary tract (n=2,458), and bone (n=3,985). We tested our models on each of the rare tissues.

### Baseline models

We compared the LLM-based prediction models with two other tabular models that take the same set of inputs. We specifically used XGBoost (Chen and Guestrin (2016)) and TabTransformer (Huang et al. (2020)). XGBoost is a gradient-boosting algorithm for supervised learning on structured or tabular data, based on tree ensembles. It is widely used on large-scale drug synergy data (Sidorov et al. (2019); Celebi et al. (2019)). TabTransformer is a self-attention-based supervised learning model for tabular data. TabTransformer applies a sequence of multi-head attention-based Transformer layers on parametric embeddings to transform them into contextual embeddings, in which highly correlated features will be close to each other in the embedding space. Considering the highly correlated nature of the drugs in our data, TabTransformer can be a very strong baseline in this work. To train the two baseline models, we first converted the drugs and cell lines in the tabular data into indicators using one-hot encoding. Tissue information was not used in training because the models are tested on one specific rare tissue that is not seen during training. Neither XGBoost nor TabTransformer is a pre-trained LLM; thus, no further contextual information can be inferred through the unseen tissue indicator. For XGBoost, all the variables (drugs, cell lines, and sensitivities) were used as input to predict the drug pair synergy. For TabTransformer, we first trained an embedding layer from scratch on the categorical variables (drugs and cell lines) and passed them through stacked multi-headed attention layers, which we then combined with the continuous variables (sensitivities). This combination then passed through feed-forward layers topped with a classification head.

### Hyperparameter Setting

The predicted output was a binary label indicating the presence of a synergistic effect, with a Loewe score greater than 5 indicating a positive result. We used AUROC and AUPRC to evaluate the accuracy of classification. Regression was not possible with our LLM-based models because they can only generate text-based answers (_"positive"_ or _"not positive"_) and cannot precisely quantify the synergy value. XGBoost was used with a boosting learning rate of 0.3. The number of gradient-boosted trees was set to 1000, with a maximum tree depth of 20 for base learners.
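A rough sketch of this XGBoost baseline with the hyperparameters stated above might look as follows; the toy rows and column names are our own assumptions and do not reflect the DrugComb schema.

```python
# Illustrative sketch of the XGBoost baseline described above: drugs and cell
# lines are one-hot encoded, concatenated with the two sensitivity values, and
# fed to a gradient-boosted tree classifier (learning rate 0.3, 1000 trees,
# max depth 20, as stated in the hyperparameter section).
import pandas as pd
from xgboost import XGBClassifier

# Toy rows; column names are illustrative only.
df = pd.DataFrame({
    "drug1": ["lonidamine", "AZD4877", "AZD1208"],
    "drug2": ["717906-29-1", "AZD1208", "AZD4877"],
    "cell_line": ["A-673", "T24", "T24"],
    "ri1": [0.568, 12.3, 8.1],
    "ri2": [28.871, 4.2, 6.6],
    "positive_synergy": [0, 1, 1],
})

X = pd.get_dummies(df[["drug1", "drug2", "cell_line"]]).astype(int)  # one-hot indicators
X["ri1"] = df["ri1"]                                                 # continuous features
X["ri2"] = df["ri2"]
y = df["positive_synergy"]

clf = XGBClassifier(learning_rate=0.3, n_estimators=1000, max_depth=20)
clf.fit(X, y)
print(clf.predict_proba(X)[:, 1])  # predicted probability of positive synergy
```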
TabTransformer was used with a learning rate of 0.0001 and a weight decay of 0.01. The model was trained for 50 epochs on common tissues. During the training, the model with the best validation performance was selected for further fine-tuned on rare tissues. For each \(k\) shot in each tissue, the model was fine-tuned using the same learning rate and weight decay for 1 epoch and tested with AUPRC and AUROC. Details in the hyperparameter setting are discussed in Supplementary 2. CancerGPT was first fine-tuned with pre-trained regular GPT-2 for 4 epochs on common tissues. The learning rate was set to be 5e-5 and weight decay was set to be 0.01. Then the model was fine-tuned for \(k\) shots in rare tissues. The same hyperparameters are used in training. The model was finally tested with AUPRC and AUROC. GPT-2 and GPT-3 are directly fine-tuned on rare tissues with pre-trained parameters from regular GPT-2 and GPT-3 Ada. For each \(k\) shot in each tissue, GPT-2 is fine-tuned for 4 epochs using a learning rate of 5e-5 and a weight decay of 0.01. The hyperparameters of GPT-3 are adjusted by OpenAI API based on the data size. The model was also fine-tuned for 4 epochs. GPT-2 and GPT-3 fine-tuned models were finally tested with AUPRC and AUROC. ## Acknowledgments XJ is CPRIT Scholar in Cancer Research (RR180012), and he was supported in part by Christopher Sarofim Family Professorship, UT Stars award, UTHealth startup, the National Institute of Health (NIH) under award number R01AG066749, R01AG066749-03S1, R01LM013712 and U01TR002062, and the National Science Foundation (NSF) #2124789. This work is supported by NSF AI Center Grant (NSF 2019844), NSF-CSIRO (NSF 2303038), NIH Bridge2AI (OTA-21-008), and Bill & Lewis Suit Professorship at the University of Texas at Austin. YK was supported in part by the NIH under award number R01AG066749-03S1 and CPRIT RR180012.
2304.01693
Performance of 802.11be Wi-Fi 7 with Multi-Link Operation on AR Applications
Since its first release in the late 1990s, Wi-Fi has been updated to keep up with evolving user needs. Recently, Wi-Fi and other radio access technologies have been pushed to their edge when serving Augmented Reality (AR) applications. AR applications require high throughput, low latency, and high reliability to ensure a high-quality user experience. The 802.11be amendment, which will be marketed as Wi-Fi 7, introduces several features that aim to enhance its capabilities to support challenging applications like AR. One of the main features introduced in this amendment is Multi-Link Operation (MLO) which allows nodes to transmit and receive over multiple links concurrently. When using MLO, traffic is distributed among links using an implementation-specific traffic-to-link allocation policy. This paper aims to evaluate the performance of MLO, using different policies, in serving AR applications compared to Single-Link (SL). Experimental simulations using an event-based Wi-Fi simulator have been conducted. Our results show the general superiority of MLO when serving AR applications. MLO achieves lower latency and serves a higher number of AR users compared to SL with the same frequency resources. In addition, increasing the number of links can improve the performance of MLO. Regarding traffic-to-link allocation policies, we found that policies can be more susceptible to channel blocking, resulting in possible performance degradation.
Molham Alsakati, Charlie Pettersson, Sebastian Max, Vishnu Narayanan Moothedath, James Gross
2023-04-04T10:44:58Z
http://arxiv.org/abs/2304.01693v1
# Performance of 802.11be Wi-Fi 7 with Multi-Link Operation on AR Applications ###### Abstract Since its first release in the late 1990s, Wi-Fi has been updated to keep up with evolving user needs. Recently, Wi-Fi and other radio access technologies have been pushed to their edge when serving Augmented Reality (AR) applications. AR applications require high throughput, low latency, and high reliability to ensure a high-quality user experience. The 802.11be amendment - which will be marketed as Wi-Fi 7 - introduces several features that aim to enhance its capabilities to support challenging applications like AR. One of the main features introduced in this amendment is Multi-Link Operation (MLO) which allows nodes to transmit and receive over multiple links concurrently. When using MLO, traffic is distributed among links using an implementation-specific traffic-to-link allocation policy. This paper aims to evaluate the performance of MLO, using different policies, in serving AR applications compared to Single-Link (SL). Experimental simulations using an event-based Wi-Fi simulator have been conducted. Our results show the general superiority of MLO when serving AR applications. MLO achieves lower latency and serves a higher number of AR users compared to SL with the same frequency resources. In addition, increasing the number of links can improve the performance of MLO. Regarding traffic-to-link allocation policies, we found that policies can be more susceptible to channel blocking, resulting in possible performance degradation. Wi-Fi 7, IEEE 802.11be, MLO, Multi-link, AR ## I Introduction The IEEE 802.11 standard has been improving by amending new versions to catch up on increasing user requirements. Recently, Wi-Fi and other radio access technologies have been pushed to their limit again when it comes to Extended Reality (XR) applications which is an umbrella term that refers to: Virtual Reality (VR), AR, and Mixed Reality (MR) [1]. Task Group be (TGbe), who is working on the 802.11be amendment, has been developing new features with the primary goal to enhance Wi-Fi with the capabilities to achieve Extremely High Throughput (EHT) to support high-reliability and low-latency applications - like AR. The 11be amendment introduces MLO, which gives Access Points (APs) and Stations (STAs) the capability to simultaneously transfer data belonging to the same traffic flow through multiple radio interfaces [2]. Wi-Fi devices using MLO are expected to achieve higher peak throughput, lower latency, and higher reliability [3]. When using MLO, traffic flows at the transmitter will be distributed to different links using a predefined traffic-to-link allocation policy, which plays an essential role in the performance of MLO [4]. Therefore, evaluating the performance of MLO using different policies is desired. Since the introduction of MLO in Wi-Fi standards, studies have investigated its effect on Wi-Fi performance. Multiple traffic-to-link allocation policies have been evaluated in [4] and [5]. One of the evaluated policies from these works allocates all incoming flows to the less congested link. Another one balanced the load uniformly between multiple links. The last policy implemented in these works allocates traffic according to the congestion level of the links. The authors argued that the performance of MLO mainly depends on the traffic-to-link allocation policy implemented. The results showed that congestion-aware policies were more flexible in adapting to the state of the network. 
They also concluded that allocating traffic to the least congested link, instead of proportionally distributing it among links, has less traffic exposure and simplifies the procedure complexity of the policy. Lopez-Raventos et al. [6] compared the performance of dynamic and non-dynamic traffic-to-link allocation policies that depend on the congestion of channels. In the non-dynamic policies, the load is adjusted according to the congestion level and channel occupancy of channels at the receiving nodes. The congestion level is calculated only upon a data flow arrival. On the other hand, the dynamic policy updates the load adjustment periodically and upon a data flow arrival. Two policies mentioned in [4, 5], and [6] are used in this paper. Namely, the policies balancing the load among links uniformly and according to the link congestion level. However, we calculate the congestion level using a moving average and introduce two new policies, a dynamic and a non-dynamic one. The effect of increasing the number of aggregated links on latency is investigated in [7]. According to the study, using three links instead of one decreased the 90th percentile latency by \(93\%\). However, the authors noted that using five links instead of three reduced the 90th percentile latency by only \(50\%\). A similar investigation of different link configuration effects on MLO is studied in this paper, but we also use different policies and impose AR traffic requirements. This paper evaluates MLO performance, supporting AR, with different link configurations and policies. To the best of our knowledge, this paper is novel regarding evaluating MLO with AR traffic. The contributions of this paper can be summarized as follows: * We evaluate and compare Wi-Fi performance using SL and MLO, with different policies, when serving AR traffic. * We study the effects of different link configurations on the delay values while imposing the AR traffic requirements. * We estimate how well Wi-Fi can serve AR applications by investigating the maximum number of AR STAs that can be supported. * We show and explain the vulnerability of some policies to channel blocking. The remainder of this paper is organized as follows: Section II describes the main aspects of AR applications and MLO, descriptions of traffic-to-link allocation policies are presented in Section III, Section IV describes simulation setup, evaluation methodology, and results. Discussion of those results may be found in Section V. Lastly, Section VI gives conclusion remarks. ## II Background ### _AR Applications_ Descriptive traffic models for AR applications are needed to investigate and simulate scenarios. In this paper, AR traffic models described in the 3GPP technical report for XR applications [8] are used as base models. The report describes different models for AR, which vary in complexity. According to the 3GPP XR traffic models, AR traffic flows can be modeled as a combination of a _Down-Link (DL) video stream_, an _Up-Link (UL) video stream_, and an _UL pose/data stream_, as illustrated in Fig. 1. The traffic streams implemented in this paper, using 3GPP XR traffic models [8] as bases, are described as follows: 1. **Video stream model:** This model describes the DL and UL video stream. Traffic in this model is periodic, and the packet size is modeled as a Truncated Gaussian Distribution. The network jitter is also modeled as a Truncated Gaussian Distribution in the _DL video stream_. 
In the _UL video stream_, on the other hand, network jitter is not considered since the network distance is short. The parameter values of this model are shown in Table I. The data rates used in our simulations for this model were reduced to achieve higher granularity in the results. When the simulations were performed using higher data rates, as in [8], many resulting Key Performance Indicator (KPI) values for different policies were similar, which made the comparison among policies difficult. Decreasing the data rates made the performance differences among policies clearer. 2. **UL pose/control data stream model:** This model describes the traffic for _pose_ data needed to update the positioning and orientation of the AR user and any other _control_ data. Traffic from this model has a constant packet size and a fixed periodicity. The parameter values for this model can be seen in Table II. These models have two primary required limits. The first is the Packet Delay Boundary (PDB), which is the maximum delay allowed for application packets to be received successfully. The packet delay is an application-layer delay value measured from when the packet arrives at the AP or STA to when its destination successfully receives it. The second limit is the packet success rate, which is the percentage of packets arriving within the PDB. The required packet success rate is 99% for all three data streams.

\begin{table} \begin{tabular}{|l|l|} \hline **Parameter** & **Value** \\ \hline Frame rate & \(60\,\mathrm{f/s}\) \\ Data rate & DL: \(10\,\mathrm{Mb/s}\) UL: \(3.3\,\mathrm{Mb/s}\) \\ Periodicity & \(16.667\,\mathrm{ms}\) \\ PDB & DL: \(10\,\mathrm{ms}\) UL: \(30\,\mathrm{ms}\) \\ Packet success rate & \(99\,\%\) \\ \hline \multicolumn{2}{|c|}{**Packet size model**} \\ \hline Mean & DL: \(21\,\mathrm{K}\) UL: \(7\,\mathrm{K}\) \\ STD & \(10.5\,\%\) of Mean \\ Max. & \(150\,\%\) of Mean \\ Min. & \(50\,\%\) of Mean \\ \hline \multicolumn{2}{|c|}{**Jitter model (DL only)**} \\ \hline Mean & \(0\,\mathrm{ms}\) \\ STD & \(2\,\mathrm{ms}\) \\ Max. & \(4\,\mathrm{ms}\) \\ Min. & \(-4\,\mathrm{ms}\) \\ \hline \end{tabular} \end{table} TABLE I: DL/UL video stream parameters.

\begin{table} \begin{tabular}{|l|l|} \hline **Parameter** & **Value** \\ \hline Packet size & \(100\,\mathrm{B}\) \\ Periodicity & \(4\,\mathrm{ms}\) \\ PDB & \(10\,\mathrm{ms}\) \\ Packet success rate & \(99\,\%\) \\ \hline \end{tabular} \end{table} TABLE II: UL pose/control stream parameters.

Fig. 1: Illustration of AR traffic flows.

### _Multi-Link Operation_

The idea of MLO, as introduced in the 802.11be amendment, is to use multiple radio interfaces to transmit and receive data concurrently. These radio interfaces can operate simultaneously on the 2.4, 5, and 6 GHz bands [3][9]. Since MLO uses multiple separate links to transmit and receive, adjustments have been suggested to control the channel access mechanisms for these links. Many transmission modes have been suggested by TGbe. This paper considers the asynchronous transmission mode since it achieves the best MLO performance [10]. The asynchronous transmission mode enables nodes to transmit and receive packets over multiple interfaces simultaneously. The concept of a Multi-Link Device (MLD) is also introduced in the 11be amendment. An MLD is a device with multiple Physical (PHY) interfaces sharing the same interface to the Logical Link Control (LLC) layer. In other words, an MLD is a device that implements MLO.
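As a concrete illustration of the AR video-stream traffic model described above (periodic frames at 60 f/s, truncated-Gaussian packet sizes, and truncated-Gaussian DL jitter from Table I), the following is a small sketch. It is our own approximation, not part of the simulator, and it implements truncation by simple clipping.

```python
# Illustrative sketch of the DL video-stream traffic model: periodic frames at
# 60 f/s, truncated-Gaussian packet sizes, and truncated-Gaussian network
# jitter, using the parameter values listed in Table I.
import numpy as np

RNG = np.random.default_rng(0)

def truncated_gauss(mean, std, lo, hi, size):
    """Draw Gaussian samples and clip them to [lo, hi] (simple truncation)."""
    return np.clip(RNG.normal(mean, std, size), lo, hi)

def dl_video_arrivals(duration_s=1.0, rate_mbps=10.0, fps=60.0):
    """Return (arrival_times_s, packet_sizes_bytes) for one DL video flow."""
    n = int(duration_s * fps)
    period = 1.0 / fps                                   # 16.667 ms frame period
    mean_bytes = rate_mbps * 1e6 / fps / 8               # mean frame size in bytes
    sizes = truncated_gauss(mean_bytes, 0.105 * mean_bytes,
                            0.5 * mean_bytes, 1.5 * mean_bytes, n)
    jitter = truncated_gauss(0.0, 2e-3, -4e-3, 4e-3, n)  # DL jitter in seconds
    arrivals = np.arange(n) * period + jitter
    return arrivals, sizes

if __name__ == "__main__":
    t, s = dl_video_arrivals()
    print(f"{len(t)} frames, mean size {s.mean() / 1e3:.1f} KB")
```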
To implement MLO, 11be divided the Medium Access Control (MAC) layer into two sublayers [11]: A Lower MAC that is unique for each interface and supports link-specific operations like channel access, link adaptation, and sounding. The second sublayer is called the Upper MAC and is shared among interfaces. It supports operations like Aggregated MAC Service Data Unit (AMSDU) aggregation/de-aggregation, packet numbering, and fragmentation/defragmentation. In the Upper MAC, the distribution of MAC Protocol Data Units (MPDUs) among links is conducted according to the applied traffic-to-link allocation policy, which is implementation-specific. A description of such a policy along with implemented policies definitions are presented in the next section. ## III Traffic-to-link allocation policies The traffic-to-link mapping takes place in the Upper MAC using a _traffic-to-link allocation policy_. When data to be transmitted arrives at a MAC buffer, it is containerized in MPDUs, and MLD transmitters are notified to start their independent channel access processes. Depending on the policy used, the Upper MAC layer allocates a specific number of MPDUs from the buffer to be transmitted by each link. Policies presented in this paper are divided into _Informed_ and _Uninformed_ policies. The _Uninformed_ policies are straightforward and independent of the real-time system state. They consist of the _Greedy_ and _Uniform-Load_ policies. In contrast, the _Informed_ policies are dynamic and consider system information when distributing traffic among links. The _Informed_ policies in this paper are the _Congestion-Aware_ and _Condition-Aware_ policies. The implemented policies are defined as follows: 1. **Greedy Policy**: This policy allocates the maximum allowed number of MPDUs from the MAC buffer to the first link that wins access to the medium. 2. **Uniform-Load Policy**: This policy distributes the MPDUs present in the MAC buffer uniformly among radio links. Assume that MLO uses \(i\) links and \(n\) MPDUs are pending in the MAC buffer. An AMPDU consisting of \(\lceil n/i\rceil\) MPDUs is aggregated and allocated for each link. 3. **Congestion-Aware Policy**: This policy aims to balance the traffic load by utilizing estimated congestion levels for each link. The congestion level on a link can be described using a _channel busy-time_ metric, which is the fraction of a particular period that a channel is busy [12]. This particular period is called the _Update Period_. In our simulations, the estimated congestion level depends on the transmission duration for all received packets on a link during the update period. During an update period, each link on each node accumulates the transmission duration of all received packets, including packets that do not belong to the node. The estimated congestion level, measured as _Link busy-time_, is then updated using a moving average. A window containing the ten previous congestion level values is used for the moving average. The choice of update period and moving average window values was decided experimentally. First, the theoretical congestion level is calculated on each link in a test setup. Then, the estimated congestion levels produced by this policy are checked. The chosen values of 0.5 s and 10 samples for the update period and the moving average window, respectively, give a close enough estimation for the congestion level. 
The policy distributes a ratio of packets from the buffer to a link according to the following equation: \[\text{Packet ratio}=\frac{\text{Link free time}}{\text{Total free time}}\] _Total free time_ is the sum of _Link free time_ values from all links. The free time on each link is calculated as shown below: \[\text{Link free time}=\text{Update period}-\text{Link busy time}\] 4. **Condition-Aware Policy**: Since a highly congested link may still support a high transmission data rate, a policy that also considers the data rate could be beneficial. This policy is similar to the Congestion-Aware policy but also considers the effect of the subsequent transmission data rate for each link. To calculate the ratio of packets for each link, the _Information bits per period_ parameter is introduced. This parameter corresponds to the estimated number of information bits a link can transmit in a congestion update period (0.5 s, as in the Congestion-Aware policy). The _Information bits per period_ for each link is calculated as follows: \[\text{Information bits per period}=\text{Link free time}\cdot\text{Data rate}\] The _Data rate_ is computed, for each link, from the link bandwidth and the Modulation and Coding Scheme (MCS) selected for the subsequent transmission. The MCS value is chosen by the link adaptation algorithm in use, namely Minstrel. The ratio of packets allocated to each link is then given by \[\text{Packet ratio}=\frac{\text{Information bits per period}}{\text{Total information bits per period}},\] where the _Total information bits per period_ is the sum of the _Information bits per period_ values from all links. All the aforementioned policies are implemented in our Wi-Fi simulator and used in each simulation setup throughout the paper. Simulation details and results are presented in the next section. ## IV Simulations Details regarding the simulation scenario, Wi-Fi simulator, evaluation methodology, and obtained results are discussed in this section. ### _Methodology_ In this paper, we consider a single-cell scenario that consists of one AP in the center and several STAs that spawn at random positions at most 10 m from the AP, as shown in Fig. 2. A proprietary, well-established, event-based Wi-Fi system simulator was used to implement the designed scenario and run simulations. The simulator has been used in research for many years and is in continuous development. In our simulations, all STAs are activated within the first second of the simulation and remain active throughout the simulation time of 50 seconds. For each configuration, simulations with different seeds are performed to ensure statistical relevance. System parameters used in this scenario are presented in Table III. The main KPI we are interested in evaluating is the maximum number of STAs that Wi-Fi can support, considering the AR traffic requirements. Therefore, we log the delay values for each STA and data stream in each simulation. After gathering all data belonging to a data stream and an STA, an evaluation of whether the system fulfilled the application requirements is conducted. Let the number of STAs in the simulation setup be \(m\). For each data stream, the 99th percentile delay values for each STA are calculated, given that the Packet Success Rate requirement for all data streams is 99%, as mentioned above. Then, the STA with the worst 99th percentile delay value in each data stream is identified and compared with the corresponding delay limit.
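The structure of this check can be sketched as follows (an illustrative fragment only; the PDB values come from Tables I and II, while the function and key names are assumptions of ours rather than part of the simulator):

```python
import numpy as np

# Delay limits (PDB) in milliseconds, taken from Tables I and II.
PDB_MS = {"dl_video": 10.0, "ul_video": 30.0, "ul_pose": 10.0}

def setup_is_successful(delays_ms):
    """Return True if a setup with m STAs meets the AR requirements.

    `delays_ms[stream][sta]` is assumed to hold the logged application-layer
    packet delays (in ms) for one STA and one data stream.
    """
    for stream, per_sta in delays_ms.items():
        # 99th percentile per STA, since the required packet success rate is 99 %.
        worst = max(np.percentile(d, 99) for d in per_sta.values())
        if worst > PDB_MS[stream]:
            return False   # the worst STA violates the delay limit for this stream
    return True
```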
If the worst 99th percentile delay value in each data stream is less than or equal to the corresponding delay limit, one can deduce that the system can successfully support at least \(m\) STAs in the simulated setup. An illustration of this evaluation is shown in Fig. 3, which contains the Complementary Cumulative Distribution Function (CCDF) delay plots for the three data streams when simulating with 6 STAs. Note that the CCDF should be below 0.01, which corresponds to the 99th percentile, for all delay values above the delay limit. Thus, one can infer from this illustration that the Wi-Fi system can support 6 STAs using SL or any of the MLO policies. Furthermore, note from the figure that the DL stream generates worst-case delays that are closer to the limit than those of the two UL streams. As a result, in our simulations, it is sufficient to consider only the data corresponding to the DL stream while checking the KPI of the number of STAs that can be supported. ### _Results_ In Fig. 4, we show results from simulating with different numbers of STAs and the corresponding worst 99th percentile delay values for each data stream. The total bandwidth is chosen to be 80 MHz, that is, 80 MHz for SL and 2\(\times\)40 MHz for MLO. In the plot corresponding to the DL video delay, we can see that all policies support a maximum of 6 STAs except the Greedy policy, which can support 7 STAs. As discussed earlier, note that the delay values are higher for the DL video stream than for the UL streams with the same number of STAs, making the DL stream the limiting one. As a result, even though the UL streams can support 7 STAs (with the Greedy policy, the UL video stream can even support 8 STAs), we do not consider the system successful as the DL stream violates the delay limit. Simulating with other link configurations, namely 4\(\times\)20 and 2\(\times\)80 MHz, resulted in similar behavior, where the DL stream violates its corresponding AR traffic requirements and is the limiting factor regarding the maximum number of supported STAs. Therefore, we choose not to show these figures to avoid redundancy. Fig. 5 shows an overview of the worst 99th percentile delay values, belonging to the DL video stream, when supporting 6 STAs with various link configurations and policies. This figure shows that with a link configuration of 2\(\times\)40 MHz, the Uniform, Congestion, and Condition policies have higher delay values than SL, with the Greedy policy having the lowest delay value. Increasing the number of links using the same bandwidth, namely using 4 links with 20 MHz each, yields lower delay values for MLO in general. A similar effect can result when increasing the bandwidth to 160 MHz. Fig. 2: The AP and STAs deployment in the single-cell scenario. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Parameter** & **Value** \\ \hline Max. AMPDU length & 5.484 ms \\ Max. number of MPDUs in an AMPDU & 64 MPDUs \\ MCS selection algorithm & Minstrel \\ Transmit power & 20 dBm \\ MPDU payload size & 1500 B \\ Wireless channel model & TGn model D [13] \\ \hline \multicolumn{2}{|c|}{**Carrier frequencies**} \\ \hline SL & 5.5 GHz \\ MLO & 5.2, 5.5, 6.1, and 6.5 GHz \\ \hline \multicolumn{2}{|c|}{**Link bandwidth(s)**} \\ \hline SL & 80 and 160 MHz \\ MLO & 2\(\times\)40, 4\(\times\)20, and 2\(\times\)80 MHz \\ \hline \end{tabular} \end{table} TABLE III: System Parameters.
We can conclude from this figure that the Greedy policy achieves the lowest delay values among the policies across the different link configurations. Fig. 6 shows the main evaluation KPI values for SL and MLO. From this figure, we notice that when simulating with a 2\(\times\)40 MHz link configuration, the Greedy policy has the best performance, supporting 7 STAs. In comparison, using SL or MLO with the other policies can support a maximum of 6 STAs with the same frequency resources. Increasing the number of links to four gives MLO, using any policy, a gain of around 17% compared to SL. When simulating with a total of 160 MHz bandwidth, a maximum of 10 STAs can be supported using the Greedy and Condition policies, 9 STAs with the Uniform and Congestion policies, and only 8 STAs with SL. Fig. 4: Worst 99\({}^{\text{th}}\) percentile delay values for each AR data stream with different policies plotted against the number of STAs. Link bandwidth for SL is 80 MHz and 2\(\times\)40 MHz for MLO. Fig. 5: The worst 99\({}^{\text{th}}\) percentile DL delay values when simulating with 6 STAs, using SL and different MLO policies. Fig. 3: Resulting CCDF plots of delay values for each AR data stream when simulating with 6 STAs, using SL and different MLO policies. Fig. 6: The maximum number of AR STAs that can be supported by each policy and link configuration. ## V Discussion Considering the results shown in Fig. 6, we notice, surprisingly, that the informed policies perform either worse than or only as well as the uninformed Greedy policy. We can also see, from Fig. 3 and Fig. 4, that the worst 99th percentile delay values in all streams are lowest when using the Greedy policy. This behavior is counter-intuitive since the informed policies are supposed to perform better given their ability to adjust dynamically to the channel status, while the Greedy policy acts statically. Frame and packet distribution analyses were conducted for each policy to investigate the degradation in the informed policies' performance. We found that these policies, together with the Uniform policy, behave similarly with respect to channel access. The implementation strategy used for these policies assumes that all links will get access to the channel in a reasonable time and transmit their allocated AMPDUs. In other words, these policies require simultaneous access to all links to perform as expected. Therefore, we call them Simultaneous-Access Policies (SAPs). The problem with such an implementation strategy is that there is no guarantee of getting access due to the contention-based channel access in Wi-Fi. We have observed cases in which one of the links does not get access to the medium in a reasonable time; the packets that were supposed to be transmitted on that link then remain in the buffer. When block acknowledgments are received at other links, the policy restarts, and the remaining packets are distributed among the links again. This can repeat, with non-zero probability, until only one or two packets remain in the buffer, which can then be transmitted in the same AMPDU. Under light load, this behavior can still result in acceptable delay values. However, when the environment becomes more congested, e.g., by adding STAs, the probability of collisions and channel blocking increases. Consequently, packets exceed the required delay limits due to higher queuing delays, leading to performance degradation.
With the Greedy policy, only one channel access is needed to transmit the application frame in a single AMPDU. Therefore, this policy does not suffer from unreliable channel access to the same extent. We conclude from the analysis of SAPs that these policies can make transmissions more vulnerable to channel blocking. However, MLO with a wisely chosen policy can serve more AR STAs than SL, as shown in Fig. 6. MLO can achieve lower delay values and is less affected by increasing the number of supported STAs, thanks to parallel access and transmission. ## VI Conclusions From the results obtained, we conclude that MLO can achieve lower latency than SL when serving AR; therefore, using the same frequency resources, MLO serves more AR STAs. However, informed policies do not always perform better than uninformed ones: the Greedy policy performs as well as, or even better than, the informed policies. We also found that the implementation choice of the SAPs made them vulnerable to unreliable channel access, which degraded their performance when simulating AR traffic. Doubling the number of links while using the same frequency resources achieved lower latency values and can yield a higher number of supported STAs. For future work, it might be interesting to study different channel access techniques to alleviate the vulnerability of SAPs to channel blocking.
2307.09947
U-CE: Uncertainty-aware Cross-Entropy for Semantic Segmentation
Deep neural networks have shown exceptional performance in various tasks, but their lack of robustness, reliability, and tendency to be overconfident pose challenges for their deployment in safety-critical applications like autonomous driving. In this regard, quantifying the uncertainty inherent to a model's prediction is a promising endeavour to address these shortcomings. In this work, we present a novel Uncertainty-aware Cross-Entropy loss (U-CE) that incorporates dynamic predictive uncertainties into the training process by pixel-wise weighting of the well-known cross-entropy loss (CE). Through extensive experimentation, we demonstrate the superiority of U-CE over regular CE training on two benchmark datasets, Cityscapes and ACDC, using two common backbone architectures, ResNet-18 and ResNet-101. With U-CE, we manage to train models that not only improve their segmentation performance but also provide meaningful uncertainties after training. Consequently, we contribute to the development of more robust and reliable segmentation models, ultimately advancing the state-of-the-art in safety-critical applications and beyond.
Steven Landgraf, Markus Hillemann, Kira Wursthorn, Markus Ulrich
2023-07-19T12:41:54Z
http://arxiv.org/abs/2307.09947v1
# U-CE: Uncertainty-aware Cross-Entropy for Semantic Segmentation ###### Abstract Deep neural networks have shown exceptional performance in various tasks, but their lack of robustness, reliability, and tendency to be overconfident pose challenges for their deployment in safety-critical applications like autonomous driving. In this regard, quantifying the uncertainty inherent to a model's prediction is a promising endeavour to address these shortcomings. In this work, we present a novel **U**ncertainty-aware **C**ross-**E**ntropy loss (U-CE) that incorporates dynamic predictive uncertainties into the training process by pixel-wise weighting of the well-known cross-entropy loss (CE). Through extensive experimentation, we demonstrate the superiority of U-CE over regular CE training on two benchmark datasets, Cityscapes and ACDC, using two common backbone architectures, ResNet-18 and ResNet-101. With U-CE, we manage to train models that not only improve their segmentation performance but also provide meaningful uncertainties after training. Consequently, we contribute to the development of more robust and reliable segmentation models, ultimately advancing the state-of-the-art in safety-critical applications and beyond. Footnote 1: Code will be made available once the publication process is complete. ## 1 Introduction Humans often make poor decisions and reach erroneous conclusions while overestimating their abilities, a phenomenon known as the Dunning-Kruger effect [22]. Although deep neural networks are highly effective at solving semantic segmentation problems [34], they also suffer from overconfidence [13]. Additionally, neural networks lack interpretability [12] and struggle to distinguish between in-domain and out-of-domain samples [26]. These flaws are particularly relevant in safety-critical applications, such as autonomous driving [32] and medical imaging [27], as well as in computer vision tasks that have high demands on reliability, like industrial inspection [16, 44] and automation [24, 45], where robust predictions are crucial. Misclassifying pixels in these contexts can lead to severe consequences, emphasizing the need for robust and trustworthy segmentation models. Previous work suggests that quantifying the uncertainty inherent to a model's prediction is a promising endeavour to enhance the safety and reliability of such applications [25, 26, 27, 35, 36]. These uncertainties provide additional insights beyond the common softmax probabilities, revealing regions where the model is indecisive and likely to make errors. Surprisingly, the utilization of these uncertainties during the training of segmentation models has not been thoroughly explored. In this work, we present a novel **U**ncertainty-aware **C**ross-**E**ntropy loss, referred to as **U-CE**, that addresses this gap by incorporating dynamic uncertainty estimates into the training process as shown in Figure 1. Figure 1: U-CE introduces an uncertainty-aware cross-entropy loss that dynamically incorporates the predictive uncertainties provided by Monte Carlo Dropout (MC-Dropout) into the training process. As a result, we manage to train models that are naturally capable of predicting meaningful uncertainties after training while also improving their segmentation performance. Through pixel-wise uncertainty weighting of the well-known cross-entropy loss (CE), we harness the valuable insights provided by the uncertainties for more effective training. With U-CE, we manage to train models that are naturally capable of predicting meaningful uncertainties after training while simultaneously improving their segmentation performance. Our contributions can be summarized as follows:
Firstly, we propose the U-CE loss function, which utilizes uncertainty estimates to guide the optimization process, emphasizing regions with high uncertainties. Secondly, we conduct extensive experiments on two benchmark datasets, Cityscapes [8] and ACDC [41], using two common backbones, ResNet-18 and ResNet-101 [15], demonstrating the superiority of U-CE over regular CE training. Lastly, we present additional insights, limitations, and potential improvements for U-CE through multiple ablation studies and a thorough discussion. ## 2 Related Work In this section, we briefly review the related work on uncertainty quantification and uncertainty-aware segmentation. ### Uncertainty Quantification Deep neural networks, with their millions of model parameters and non-linearities, have proven effective in solving complex tasks in natural language processing [38] and computer vision, like semantic segmentation [34]. Unfortunately, due to their complexity, the computation of the exact posterior probability distribution of the network's output is infeasible [4, 30]. Consequently, approximate uncertainty quantification methods are employed to offer a practical solution to tackle the intractability of the exact posterior distribution. The most prominent methods include Bayesian Neural Networks [31], Monte Carlo Dropout [10], and Deep Ensembles [23]. We will refer to these methods as traditional uncertainty quantification techniques throughout the following. A mathematically grounded, though computationally complex, approach to uncertainty quantification is provided by Bayesian Neural Networks, which transform a deterministic network into a stochastic one using probabilistic distributions placed over the activations or the weights [18]. For instance, Bayes by Backprop [4] employs variational inference to learn approximate distributions over the weights. These can be used to create an ensemble of models with differently sampled weights to approximate the posterior distribution of the predictions. Gal and Ghahramani simplify this approximation process by using Monte Carlo Dropout [10]. While dropout is usually applied as a regularization technique [43], Monte Carlo Dropout uses this concept to sample from the posterior distribution of a network's prediction at test time. In its original form, Monte Carlo Dropout only captures the epistemic uncertainty inherent to the model. To obtain a more comprehensive measure of uncertainty that includes the aleatoric uncertainty, which captures the noise inherent in the observations, Monte Carlo Dropout can be combined with learned uncertainty predictions and assumed density filtering [11, 30, 21]. The current state-of-the-art uncertainty quantification method is Deep Ensembles, which consist of an ensemble of trained models that generate diverse predictions at test time [23]. Due to the introduction of randomness through random weight initialization or different data augmentations across ensemble members [9], Deep Ensembles are well-calibrated [23]. Multiple studies demonstrated that Deep Ensembles generally outperform other uncertainty quantification methods across varying tasks [48, 14, 39]. However, this performance gain is associated with high computational cost.
In addition to the aforementioned approximate uncertainty quantification methods, there has been a growing interest in deterministic single forward-pass approaches, which offer advantages in terms of memory usage and inference time. For example, van Amersfoort _et al_. [46] and Liu _et al_. [29] explore the concept of distance-aware output layers. While these methods demonstrate good performance, they are not competitive with the current state-of-the-art and require significant modifications to the training process [36]. Another approach, proposed by Mukhoti _et al_. [36], simplifies the two previous methods by employing Gaussian Discriminant Analysis for feature-space density estimation after training. Although they perform on par with Deep Ensembles in some settings, their approach still necessitates a more sophisticated training approach. Additionally, fitting the feature-space density estimator is only possible after training, which is not suitable for U-CE where meaningful uncertainties are required during training. Overall, uncertainty quantification remains an active and evolving field of research, with various approaches offering their own advantages and disadvantages. For our specific case, Monte Carlo Dropout emerges as the preferred option due to its ease of use, minimal impact on the training process, and computational efficiency compared to Deep Ensembles. Through Monte Carlo Dropout sampling, we can compute the predictive uncertainty to apply pixel-wise weighting of the well-known cross-entropy loss. With predictive uncertainties, we refer to the standard deviation of the softmax probabilities of the predicted class provided by Monte Carlo Dropout sampling. ### Uncertainty-aware Segmentation In the domain of uncertainty-aware segmentation, researchers have explored various techniques to incorporate uncertainty measures into the training process. Surprisingly, traditional uncertainty quantification methods have been largely overlooked or underutilized. We provide an overview of notable works that leverage uncertainty-aware techniques for segmentation tasks in various domains. Additionally, we discuss how U-CE addresses the gap towards full utilization of traditional uncertainty quantification methods during training. Some of the earlier work on more effective training has originally been designed for object detection. For example, Lin [28] introduced the focal loss that down-weights the contribution of easy examples to shift the focus more towards hard examples. Another closely related technique is online hard example mining by Shrivastava [42]. They propose to automatically select hard examples to only learn from them and completely ignore the easy examples. By now, both methods have been successfully adapted for semantic segmentation [17, 47]. Another line of work focuses on the identification and compensation of ambiguities and label noise. Kaiser [19] propose adding a learned bias to a network's logits and introducing a novel uncertainty branch to induce the compensation bias only to relevant regions. However, unlike U-CE, their approach does not utilize uncertainties to make training more robust, rather they aim to avoid new noise during data annotation. More closely related to our work, Bischke [3] and Bressan [5] propose to leverage uncertainties to improve training on imbalanced aerial image datasets. The former use the per-class uncertainty of the model together with the median frequency to balance training [3]. 
We argue that dynamically weighting each pixel individually during training, which is what U-CE does, is even more valuable. The latter utilize pixel-wise weights, but only consider the class and labeling uncertainty [5] instead of the predictive uncertainties like U-CE. In addition to these methods, Chen [6] propose to transform the embeddings of the last layer from Euclidean space into Hyperbolic space to dynamically weight pixels based on the hyperbolic distance, which they interpret as uncertainty. Similarly, Bian [2] propose an uncertainty estimation and segmentation module to estimate uncertainties that they use to improve the segmentation performance. Unlike U-CE, however, these two works do not incorporate traditional uncertainty quantification methods into training. In contrast to existing literature, U-CE fully utilizes predictive uncertainties dynamically during training. By pixel-wise uncertainty weighting of the cross-entropy loss, U-CE harnesses valuable insights from the uncertainties to guide the optimization process. This approach enables more effective training, resulting in models that are naturally capable of predicting meaningful uncertainties after training while also improving their segmentation performance. ## 3 Methodology In the following, we provide an overview of U-CE, explain our novel uncertainty-aware cross-entropy loss and outline the implementation details. ### Overview The central idea of U-CE is to incorporate predictive uncertainties into the training process to enhance segmentation performance. As depicted in Figure 2, we propose two simple yet highly effective adaptions to the regular training process: 1. During training, we sample from the posterior distribution with Monte Carlo Dropout to obtain predictive uncertainties alongside the regular segmentation prediction. 2. We apply pixel-wise weighting to the regular cross-entropy loss based on the collected uncertainties. To compute predictive uncertainties during training, we choose Monte Carlo Dropout. It is straightforward to implement, requires minimal tuning, and is computationally Figure 2: A schematic overview of the training process of U-CE. U-CE integrates the predictive uncertainties of a Monte Carlo Dropout (MC-Dropout) model into the training process to enhance segmentation performance. In comparison to most applications of Monte Carlo Dropout, U-CE utilizes the uncertainties not only at test time but also dynamically during training by applying pixel-wise weighting to the regular cross-entropy loss. more efficient than Deep Ensembles. However, it is worth noting that other uncertainty quantification methods could also be utilized for U-CE. Exploring these alternatives is an interesting avenue for future work, which we will discuss in Section 5. ### Uncertainty-aware Cross-Entropy **Segmentation Sampling.** In contrast to typical usage of Monte Carlo Dropout, U-CE incorporates the sampling process from the posterior distribution not only at test time but also during training. To compute the necessary uncertainties for our uncertainty-aware cross-entropy loss, we perform \(\beta\) sampling iterations at each training step. This generates \(\beta\) segmentation samples in addition to the regular segmentation prediction. Notably, gradient computation is disabled during the sampling process as it is unnecessary for backward propagation, which relies solely on the regular segmentation prediction. 
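As a rough illustration of this sampling step, the sketch below assumes a generic PyTorch segmentation model that returns logits of shape (B, C, H, W) and remains in training mode so that dropout stays active; the function name and tensor handling are illustrative assumptions rather than the exact implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_uncertainty(model, images, beta=10):
    """Per-pixel predictive uncertainty from MC-Dropout sampling (illustrative sketch).

    Runs `beta` stochastic forward passes (dropout is active because the model
    stays in training mode) and returns, for every pixel, the standard deviation
    of the sampled softmax probabilities of the predicted class.
    """
    probs = torch.stack(
        [F.softmax(model(images), dim=1) for _ in range(beta)], dim=0
    )                                              # (beta, B, C, H, W)
    mean_probs = probs.mean(dim=0)                 # (B, C, H, W)
    # Predicted class taken from the mean of the samples.
    pred = mean_probs.argmax(dim=1, keepdim=True)  # (B, 1, H, W)
    idx = pred.unsqueeze(0).expand(beta, -1, -1, -1, -1)
    sigma = probs.gather(2, idx).squeeze(2).std(dim=0)   # (B, H, W)
    return sigma
```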
By disabling gradient computation during sampling, we reduce the additional computational overhead of U-CE in terms of training time and GPU memory usage. **Uncertainty-aware Cross-Entropy Loss.** The final objective function of U-CE builds upon the well-known categorical cross-entropy loss and can be defined as: \[L_{u\text{-}ce}=-\frac{1}{N}\sum_{n=1}^{N}w_{n}\sum_{c=1}^{C}y_{n,c}\cdot\log( p_{n,c}), \tag{1}\] where \(L_{u\text{-}ce}\) is the uncertainty-aware cross-entropy loss for a single image, \(N\) is the number of pixels in the image, \(C\) is the number of classes, \(y_{n,c}\) is the respective ground truth label, \(p_{n,c}\) is the respective predicted softmax probability, and \(w_{n}\) represents the pixel-wise uncertainty weight. It is worth noting that Equation 1 simplifies to the regular cross-entropy loss by setting \(w_{n}\) to one for all pixels. **Pixel-wise Uncertainty Weight.** The pixel-wise uncertainty weight \(w_{n}\) can be formulated as: \[w_{n}=(1+\sigma_{n})^{\alpha}, \tag{2}\] where \(\sigma_{n}\) denotes the predictive uncertainty, and \(\alpha\) controls the influence of the uncertainties in an exponential manner. The predictive uncertainty \(\sigma\) represents the standard deviation of the softmax probabilities of the predicted class of the segmentation samples. **Pseudocode.** Finally, Algorithm 1 shows how to use U-CE in the training step in a simplified way. As mentioned earlier, the two key adaptations to the regular training process are sampling with Monte Carlo Dropout during training and applying pixel-wise uncertainty weighting to the regular cross-entropy loss. ## 4 Experiments In this section, we conduct an extensive range of experiments to demonstrate the value of incorporating predictive uncertainties into the training process. Firstly, we provide quantitative results comparing regular CE to U-CE under diverse settings. Secondly, we analyze qualitative examples. Lastly, we provide multiple ablation studies. ### Setup **Architecture.** For all of our experiments, we employ DeepLabv3+ [7] as the decoder and either a ResNet-18 or ResNet-101 [15] as the encoder. Both backbones are commonly used for semantic segmentation [34, 49], making our work highly comparable and serving as an excellent baseline for future research. **Monte Carlo Dropout.** In order to convert our architectures into Monte Carlo Dropout models, we add a dropout layer after each of the four residual block layers of the ResNets, inspired by Kendall _et al_. [20] and Gustafsson _et al_. [14]. **Training.** For all training processes, we use a Stochastic Gradient Descent (SGD) optimizer [40] with a base learning rate of 0.01, momentum of 0.9, and weight decay of 0.0001. Additionally, we multiply the learning rate of the decoder and segmentation head by ten. Finally, we employ polynomial learning rate scheduling to decay the initial learning rate during the training process, following the formula: \[lr=lr_{base}\cdot(1-\frac{iteration}{total\ iterations})^{0.9}, \tag{3}\] where \(lr\) is the current learning rate, and \(lr_{base}\) is the initial base learning rate. In all training processes, we use a batch size of 16 and train on four NVIDIA A100 GPUs with 40 GB of memory using mixed precision [33]. **Datasets.** All of our experiments are based on either the Cityscapes dataset [8] or the ACDC dataset [41]. Both datasets are publicly available street scene datasets aimed at advancing the current state-of-the-art in autonomous driving. 
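In the spirit of Algorithm 1, a single training step could be sketched roughly as follows, combining the sampling function above with the pixel-wise weighting of Equations (1) and (2); all names, the `ignore_index` handling, and the optimizer wiring are illustrative assumptions rather than the exact implementation:

```python
import torch.nn.functional as F

def u_ce_training_step(model, images, labels, optimizer, alpha=10, beta=10,
                       ignore_index=255):
    """One U-CE training step (illustrative sketch building on `mc_dropout_uncertainty`)."""
    model.train()                                               # keep dropout active
    sigma = mc_dropout_uncertainty(model, images, beta=beta)    # (B, H, W), no gradients
    weights = (1.0 + sigma) ** alpha                            # pixel-wise weight, Eq. (2)

    logits = model(images)                                      # regular prediction, with gradients
    ce = F.cross_entropy(logits, labels, reduction="none",
                         ignore_index=ignore_index)             # per-pixel CE, (B, H, W)
    loss = (weights * ce).mean()                                # Eq. (1): uncertainty-weighted CE

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```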
Cityscapes consists of 2975 training images, 500 validation images, and 1525 test images. ACDC contains 1600 training images, 406 validation images, and 2000 test images. Although both datasets share the same 19 evaluation classes and a void class, the ACDC dataset exclusively focuses on four adverse conditions: fog, nighttime, rain, and snow. **Data Augmentations.** To prevent overfitting, we apply a common data augmentation strategy for all training procedures, regardless of the dataset or architecture used. The strategy includes the following steps: 1. Random scaling with a factor between \(0.5\) and \(2.0\). 2. Random cropping with a crop size of \(768\times 768\) pixels. 3. Random horizontal flipping with a flip chance of \(50\%\). **Evaluation.** Since both test splits are withheld for benchmarking purposes, we utilize the validation images for testing in all our experiments. Unless otherwise specified, we only report single forward pass results based on the original validation images without resizing or sampling for a fair comparison between all of the models. Also, we set the number of segmentation samples \(\beta\) to ten by default. **Metrics.** For quantitative evaluations, we primarily report the mean Intersection over Union (mIoU), also known as the Jaccard Index, to measure the segmentation performance. In addition to the mIoU, we also utilize the Expected Calibration Error (ECE) [37] to evaluate the calibration as well as the mean class-wise predictive uncertainty (mUnc) to quantitatively compare the resulting uncertainties. ### Quantitative Evaluation Tables 1 and 2 outline a quantitative comparison between regular CE and our proposed U-CE loss using two different \(\alpha\) values for various dropout ratios and training lengths on the Cityscapes [8] and ACDC [41] datasets. Remarkably, U-CE\({}_{\alpha=10}\) achieves the highest mIoU across all dropout ratios, even outperforming the baseline models that do not use dropout in most cases. Notably, U-CE\({}_{\alpha=10}\) achieves a maximum improvement of up to \(9.3\%\) over regular CE when training on ACDC [41] for 200 epochs using a ResNet-18 with a dropout ratio of \(40\%\). On average, U-CE\({}_{\alpha=10}\) outperforms CE by \(2.0\%\) on Cityscapes [8] and by \(4.6\%\) on ACDC [41]. Interestingly, U-CE\({}_{\alpha=1}\) also matches or improves upon regular CE training in most cases. On average, U-CE\({}_{\alpha=1}\) outperforms CE by \(0.3\%\) on Cityscapes and by \(1.3\%\) on ACDC. Table 3 provides additional information on the ECE and mUnc for CE and U-CE using a dropout ratio of \(20\%\). In comparison to regular CE and U-CE\({}_{\alpha=1}\), which exhibit similar results, U-CE\({}_{\alpha=10}\) not only improves segmentation performance but also yields slightly better calibrated networks, as measured by the ECE. Moreover, the mUnc is also slightly lower for U-CE\({}_{\alpha=10}\). Overall, Tables 1, 2 and 3 provide strong evidence for the effectiveness of leveraging predictive uncertainties in the training process.
\begin{table} \begin{tabular}{l|c|c c c|c c c} & Encoder & \multicolumn{4}{c|}{200 Epochs} & \multicolumn{4}{c}{500 Epochs} \\ & CE & U-CE\({}_{\alpha=1}\) & U-CE\({}_{\alpha=10}\) & CE & U-CE\({}_{\alpha=1}\) & U-CE\({}_{\alpha=10}\) \\ \hline Dropout (\(\%\)) & RN18 & **70.0** & - & **72.0** & - & - \\ Dropout (\(10\%\)) & RN18 & 69.4 & 69.6 & **71.6** & 72.3 & 72.3 & **74.2** \\ Dropout (\(20\%\)) & RN18 & 69.0 & 69.5 & **71.8** & 71.9 & 72.6 & **73.5** \\ Dropout (\(30\%\)) & RN18 & 68.2 & 69.0 & **71.0** & 71.9 & 72.4 & **74.1** \\ Dropout (\(40\%\)) & RN18 & 66.6 & 67.7 & **70.5** & 71.1 & 71.1 & **73.7** \\ Dropout (\(50\%\)) & RN18 & 64.3 & 65.3 & **69.6** & 69.0 & 69.4 & **72.6** \\ \hline Dropout (\(9\%\)) & RN101 & **74.6** & - & - & **76.1** & - & - \\ Dropout (\(10\%\)) & RN101 & 74.8 & 75.1 & **76.1** & 76.3 & 76.6 & **77.5** \\ Dropout (\(20\%\)) & RN101 & 74.6 & 74.8 & **76.6** & 76.3 & 77.0 & **77.7** \\ Dropout (\(30\%\)) & RN101 & 74.5 & 74.7 & **76.1** & 76.4 & 76.6 & **77.5** \\ Dropout (\(40\%\)) & RN101 & 74.7 & 74.0 & **75.8** & 76.1 & 76.5 & **78.2** \\ Dropout (\(50\%\)) & RN101 & 74.1 & 73.7 & **75.9** & 76.6 & 76.6 & **77.3** \\ \end{tabular} \end{table} Table 1: Quantitative comparison between regular CE and U-CE on the Cityscapes dataset [8] for different dropout ratios. The provided numbers represent the mIoU \(\uparrow\). Best respective results are marked in **bold**. \begin{table} \begin{tabular}{l|c|c c c|c c c} & Encoder & \multicolumn{4}{c|}{200 Epochs} & \multicolumn{4}{c}{500 Epochs} \\ & CE & U-CE\({}_{\alpha=1}\) & U-CE\({}_{\alpha=10}\) & CE & U-CE\({}_{\alpha=1}\) & U-CE\({}_{\alpha=10}\) \\ \hline Dropout (\(\%\)) & RN18 & **56.3** & - & - & **62.2** & - & - \\ Dropout (\(10\%\)) & RN18 & 55.5 & 56.4 & **60.0** & 62.1 & 62.8 & **65.0** \\ Dropout (\(20\%\)) & RN18 & 54.6 & 56.1 & **60.5** & 61.5 & 62.0 & **65.0** \\ Dropout (\(30\%\)) & RN18 & 52.2 & 54.3 & **59.2** & 59.6 & 61.6 & **64.3** \\ Dropout (\(40\%\)) & RN18 & 48.9 & 50.8 & **58.2** & 56.8 & 58.8 & **63.9** \\ Dropout (\(50\%\)) & RN18 & 47.7 & 49.3 & **56.3** & 53.3 & 56.0 & **62.4** \\ \hline Dropout (\(9\%\)) & RN101 & **65.0** & - & - & **68.8** & - & - \\ Dropout (\(10\%\)) & RN101 & 64.5 & 65.3 & **67.0** & 68.4 & 69.3 & **69.3** \\ Dropout (\(20\%\)) & RN101 & 64.1 & 65.0 & **65.8** & 65.5 & 68.7 & **70.2** \\ Dropout (\(30\%\)) & RN101 & 62.7 & 64.3 & **65.3** & 68.4 & 68.5 & **69.9** \\ Dropout (\(40\%\)) & RN101 & 61.1 & 63.1 & **65.4** & 67.8 & 67.8 & **70.0** \\ Dropout (\(50\%\)) & RN101 & 58.0 & 60.2 & **63.7** & 66.0 & 67.4 & **70.2** \\ \end{tabular} \end{table} Table 2: Quantitative comparison between regular CE and U-CE on the ACDC dataset [41] for different dropout ratios. The provided numbers represent the mIoU \(\uparrow\). Best respective results are marked in **bold**. 
\begin{table} \begin{tabular}{l|c|c c c|c c c} & Encoder & \multicolumn{3}{c|}{200 Epochs} & \multicolumn{3}{c}{500 Epochs} \\ & & mIoU \(\uparrow\) & ECE \(\downarrow\) & mUnc & mIoU \(\uparrow\) & ECE \(\downarrow\) & mUnc \\ \hline CE & RN18 & 69.0 & 0.035 & 0.088 & 71.9 & 0.025 & 0.088 \\ U-CE\({}_{\alpha=1}\) & RN18 & 69.5 & 0.036 & 0.089 & 72.6 & 0.027 & 0.088 \\ U-CE\({}_{\alpha=10}\) & RN18 & 71.8 & 0.029 & 0.085 & 73.5 & 0.018 & 0.084 \\ \hline CE & RN101 & 74.6 & 0.026 & 0.080 & 76.3 & 0.041 & 0.076 \\ U-CE\({}_{\alpha=1}\) & RN101 & 74.8 & 0.024 & 0.079 & 77.0 & 0.041 & 0.076 \\ U-CE\({}_{\alpha=10}\) & RN101 & 76.6 & 0.022 & 0.073 & 77.7 & 0.040 & 0.073 \\ \end{tabular} \end{table} Table 3: A more detailed quantitative comparison between regular CE and U-CE on the Cityscapes dataset [8] using a dropout ratio of 20%. The provided numbers represent the mIoU \(\uparrow\), ECE \(\downarrow\), and mUnc. ### Qualitative Evaluation In addition to the quantitative evaluation, we also provide qualitative examples in Figure 3 showing the original input image, the corresponding ground truth label, the model’s segmentation prediction, a binary accuracy map, and the model’s predictive uncertainty. The first three rows depict results from models with a ResNet-18 backbone and a dropout ratio of \(20\%\), trained for 200 epochs with CE, U-CE\({}_{\alpha=1}\), U-CE\({}_{\alpha=10}\) on Cityscapes [8]. The last three rows show examples from models using a ResNet-101 backbone and a dropout ratio of \(20\%\), trained for 500 epochs on the ACDC dataset [41]. The binary accuracy map visualizes incorrectly predicted pixels and void classes in white, and correctly predicted pixels in black. Generally, for large areas and well-represented classes like road, building, sky, and car, all models perform exceptionally well with minimal errors. Furthermore, there is a strong correlation between the binary accuracy map and the predictive uncertainty, indicating that all models provide meaningful uncertainties. Nonetheless, there are nuanced differences between the models. For example, in the first two rows of Figure 3, which represent models trained with CE and U-CE\({}_{\alpha=1}\), there are noticeable misclassifications on top of the human standing in front of the truck. Naturally, this area is also accompanied by high uncertainties. In contrast, the model trained with U-CE\({}_{\alpha=10}\) exhibits significantly fewer difficulties, resulting in a better segmentation prediction and lower uncertainties. A similar situation is observable in the last three rows, showing examples from the more challenging ACDC dataset [41]. Here, the model trained with regular CE struggles to correctly segment the truck on the left as well as differentiate between the sidewalk and the terrain on the right side of the image. The model trained with U-CE\({}_{\alpha=1}\) does slightly better in these areas, but is equally uncertain. Only the model trained with U-CE\({}_{\alpha=10}\) successfully classifies the truck and differentiates between the sidewalk and the terrain Figure 3: Example images from the Cityscapes and ACDC validation set (a), corresponding ground truth labels (b), the model’s segmentation predictions (c), a binary accuracy map (d), and the predictive uncertainty (e). White pixels in the binary accuracy map are either incorrect predictions or void classes, which appear black in the ground truth label. For the uncertainty prediction, brighter pixels represent higher predictive uncertainties.
The first three rows depict results from models with a ResNet-18 backbone and dropout ratio of \(20\%\), trained for 200 epochs on Cityscapes [8]. The last three rows show examples from models using a ResNet-101 backbone and a dropout ratio of \(20\%\), trained for 500 epochs on the ACDC dataset [41]. decently. Consequently, the predictive uncertainty is also lower in these areas. In summary, the qualitative findings presented in Figure 3 concur with our quantitative evaluation, manifesting the efficacy of U-CE across different datasets and architectures. ### Ablation Studies In addition to the quantitative and qualitative evaluation, we also present multiple ablation studies. Unless otherwise noted, we confined all of the ablation studies to models that use a ResNet-18 as the backbone, have a dropout ratio of 20%, and were trained for 200 epochs. **Impact of \(\alpha\).** The most influential hyperparameter of U-CE is \(\alpha\) as it exponentially controls the weighting of the CE loss. Table 4 demonstrates the impact of different \(\alpha\) values on the mIoU for both backbones, ResNet-18 (RN18) and ResNet-101 (RN101), on both Cityscapes and ACDC. Evidently, the segmentation performance consistently improves as \(\alpha\) increases until it reaches ten, which stands as the best value in three out of four cases across the two datasets and architectures. Thus, using ten as the default value for \(\alpha\) seems to be a fair estimation to achieve the best results, not only for the mentioned cases but potentially for other applications as well. Further increasing \(\alpha\) leads to a degradation in mIoU. Additionally, training becomes more unstable as models overly focus on uncertain pixels, resulting in some models failing to converge properly. Nonetheless, U-CE exhibits robustness against changes in \(\alpha\), offering a wide range of valid hyperparameters that lead to improved segmentation results compared to regular CE training. **Impact of \(\beta\).** Table 5 exhibits another ablation study on the number of segmentation samples \(\beta\). Interestingly, there is no clear benefit of sampling more often than six times, especially with regard to the training time. As indicated by the training times, U-CE\({}_{\beta=6}\) increases the necessary training time by approximately \(10\%\), whereas U-CE\({}_{\beta=10}\) extends it by roughly \(35\%\). For comparison, Gal and Ghahramani [10] recommend sampling ten times to get a reasonable estimation of the predictive mean and uncertainty. **Impact of Data Augmentations.** The impact of various data augmentation strategies on CE and U-CE is demonstrated in Table 6. The results show that incorporating additional data augmentations on top of the baseline strategy of random cropping with a crop size of \(768\times 768\) pixels improves the mIoU across the board. More importantly, this ablation study confirms that U-CE consistently outperforms CE across different data augmentation strategies, indicating its effectiveness in improving segmentation performance. **Impact of \(lr_{base}\).** Table 7 shows the ablation study on the base learning rate \(lr_{base}\). The most notable comparison is between regular CE and U-CE\({}_{\alpha=1}\), which demonstrates that U-CE is not limited to specific learning rates. U-CE\({}_{\alpha=1}\) consistently outperforms regular CE for all examined base learning rates, despite increasing the training loss by approximately \(9\%\) as indicated by the mUnc in Table 3. 
Moreover, U-CE\({}_{\alpha=10}\) exceeds the results of CE and U-CE\({}_{\alpha=1}\) for all base learning rates except \(10^{-1}\), which caused divergence. Overall, this ablation study confirms the value of leveraging predictive uncertainties during training, irrespective of the learning rate, which is arguably the single most important hyperparameter in deep learning [1]. ## 5 Discussion In contrast to previous approaches, U-CE fully leverages predictive uncertainties obtained by Monte Carlo Dropout during training. As a result, we manage to train models that not only improve their segmentation performance but are also naturally capable of predicting meaningful uncertainties after training as well. \begin{table} \begin{tabular}{l|c c c} & Random Flipping & Random Scaling & mIoU \(\uparrow\) \\ \hline CE & \(\times\) & \(\times\) & 66.1 \\ & ✓ & \(\times\) & 67.0 \\ & \(\times\) & ✓ & 68.6 \\ & ✓ & ✓ & 69.0 \\ \hline U-CE\({}_{\alpha=1}\) & \(\times\) & \(\times\) & 65.8 \\ & ✓ & \(\times\) & 67.8 \\ & \(\times\) & ✓ & 69.1 \\ & ✓ & ✓ & 69.5 \\ \hline U-CE\({}_{\alpha=10}\) & \(\times\) & \(\times\) & 69.6 \\ & ✓ & \(\times\) & 70.1 \\ & ✓ & ✓ & 71.8 \\ \end{tabular} \end{table} Table 6: Ablation study on the impact of various data augmentation strategies. We use random cropping with a crop size of \(768\times 768\) pixels as a baseline for all strategies. \begin{table} \begin{tabular}{l|c c c c c c c c c} \(\alpha\) & 1 & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 \\ \hline RN18 (Cityscapes) & 69.5 & 70.0 & 70.7 & 71.2 & 71.5 & **71.8** & 71.0 & 47.0 & 70.9 \\ RN101 (Cityscapes) & 74.8 & 75.2 & 75.6 & 76.1 & 76.4 & **76.6** & 76.3 & 75.8 & 72.6 \\ RN18 (ACDC) & 56.1 & 56.9 & 57.6 & 58.8 & 58.8 & **60.5** & 60.3 & 60.1 & 37.5 \\ RN101 (ACDC) & 65.0 & 65.0 & 65.7 & 65.5 & 66.0 & 65.8 & **66.7** & 64.5 & 19.9 \\ \end{tabular} \end{table} Table 4: Ablation study on the impact of \(\alpha\). The provided numbers represent the mIoU \(\uparrow\). Best respective results are marked in **bold**. \begin{table} \begin{tabular}{l|c c c c c c} \(\beta\) & 0 & 2 & 6 & 10 & 14 & 18 \\ \hline CE & 69.0 (1:49) & - & - & - & - & - \\ U-CE\({}_{\alpha=10}\) & - & 71.1 (1:52) & 71.6 (2:01) & 71.6 (2:27) & 71.6 (2:53) & 71.7 (3:17) \\ \end{tabular} \end{table} Table 5: Ablation study on the number of segmentation samples \(\beta\). In addition to the mIoU \(\uparrow\), we provide the training time in hours:minutes \(\downarrow\) in parentheses. While U-CE appears to have no apparent shortcomings, except for a minor increase in training time, we acknowledge the need for a transparent discussion about its potential limitations. Our aim is to effectively guide future work in pushing the boundaries of state-of-the-art techniques, especially in safety-critical applications like autonomous driving. **Limitations.** One limitation of U-CE arises in the absence of densely annotated ground truth labels. If most pixels are either labeled as background or designated to be ignored while training, U-CE will likely offer next to no benefit, except for a higher loss around object boundaries. Additionally, U-CE may not contribute to improved segmentation performance if the network is already overfitting the training data. Having said that, the impact of U-CE on generalization needs further examination. **Future Work.** With regard to future work, we have multiple suggestions that might be worth investigating.
Potentially, the results of U-CE could be further improved if the quality of the uncertainty estimates were better. Therefore, it would be interesting to integrate Deep Ensembles [23], the state-of-the-art uncertainty quantification method [14, 39, 48], with U-CE, which we could not realize because of computational constraints. On a similar note, it could be worth employing warmup epochs, which we omitted to refrain from introducing another hyperparameter. Additionally, we would like to see \(\alpha\) removed from U-CE by incorporating statistical hypothesis testing. This would be beneficial in two ways: Firstly, it would remove the most influential hyperparameter of U-CE. Secondly, and maybe more importantly, it would leverage all of the available uncertainties and not just the predictive uncertainty. Finally, we encourage other researchers to incorporate U-CE into state-of-the-art semantic segmentation approaches and to explore its usefulness in other computer vision tasks that rely on pixel-wise predictions, such as depth estimation. Overall, we believe that U-CE presents a promising paradigm in semantic segmentation by dynamically leveraging uncertainties to create more robust and reliable models. Despite a minor increase in training time and room for further improvement, we see no reason not to employ U-CE in comparison to regular CE. ## 6 Conclusion In this paper, we introduced U-CE, a novel uncertainty-aware cross-entropy loss for semantic segmentation. U-CE incorporates predictive uncertainties, based on Monte Carlo Dropout, into the training process through pixel-wise weighting of the regular cross-entropy loss. As a result, we manage to train models that are naturally capable of predicting meaningful uncertainties after training while simultaneously improving their segmentation performance. Through extensive experimentation on the Cityscapes and ACDC datasets using ResNet-18 and ResNet-101 architectures, we demonstrated the superiority of U-CE over regular cross-entropy training. We hope that U-CE and our thorough discussion of potential limitations and future work contribute to the development of more robust and trustworthy segmentation models, ultimately advancing the state-of-the-art in safety-critical applications and beyond. ## Acknowledgment The authors acknowledge support by the state of Baden-Württemberg through bwHPC. This work is supported by the Helmholtz Association Initiative and Networking Fund on the HAICORE@KIT partition.
2309.00571
Correction to: Criteria for Strong and Weak Random Attractors
In the article 'Criteria for Strong and Weak Random Attractors' necessary and sufficient conditions for strong attractors and weak attractors are studied. In this note we correct two of its theorems on strong attractors.
Hans Crauel, Sarah Geiss, Michael Scheutzow
2023-09-01T16:32:51Z
http://arxiv.org/abs/2309.00571v1
# Correction to: Criteria for Strong and Weak Random Attractors ###### Abstract In the article 'Criteria for Strong and Weak Random Attractors' necessary and sufficient conditions for strong attractors and weak attractors are studied. In this note we correct two of its theorems on strong attractors. **Keywords:** Random attractor, pullback attractor, weak attractor, Omega limit set, compact random set **MSC2020 subject classifications:** 37B25, 37C70, 37G35, 37H99, 37L55, 60D05, 60H10, 60H15, 60H25 We correct two theorems which provide criteria for strong attractors given in [4]. We use the same assumptions and notation as in [4], i.e. let \(\varphi\) be a continuous random dynamical system on a Polish space \((E,d)\) over a metric dynamical system \((\Omega,\mathscr{F},(\vartheta_{t})_{t\in\mathbb{R}},P)\). We use the same letter \(d\) for the complete metric on \(E\) and the Hausdorff semi-distance on subsets of \(E\). For a subset \(A\) of \(E\) we denote the closed \(\delta\)-neighborhood of \(A\) by \(A^{\delta}\). In the article the following two types of strong attractors are studied: * \(B\)-attractors, i.e. attractors that attract all bounded subsets of \(E\), * \(C\)-attractors, i.e. attractors that attract all compact subsets of \(E\). In [4, Theorem 3.1, Theorem 3.2] the following two theorems have been stated: **Theorem 1** (Original erroneous formulation).: _The following are equivalent:_ * \(\varphi\) _has a strong_ \(B\)_-attractor._ * _For every_ \(\varepsilon>0\) _there exists a compact subset_ \(C_{\varepsilon}\) _such that for each_ \(\delta>0\) _and each bounded and closed subset_ \(B\) _of_ \(E\) _it holds that_ \[P\left\{\bigcup_{s\geq 0}\bigcap_{t\geq s}\varphi(t,\vartheta_{-t}\omega)B \subseteq C_{\varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] * _There exists a compact strongly_ \(B\)_-attracting set_ \(\omega\mapsto K(\omega)\)_._ **Theorem 2** (Original erroneous formulation).: _The following are equivalent:_ * \(\varphi\) _has a strong_ \(C\)_-attractor._ * _For every_ \(\varepsilon>0\) _there exists a compact subset_ \(C_{\varepsilon}\) _such that for each_ \(\delta>0\) _and each compact subset_ \(B\) _of_ \(E\) _it holds that_ \[P\left\{\bigcup_{s\geq 0}\bigcap_{t\geq s}\varphi(t,\vartheta_{-t}\omega)B \subseteq C_{\varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] * _There exists a compact strongly_ \(C\)_-attracting set_ \(\omega\mapsto K(\omega)\)_._ The following example shows that the original formulations of Theorem 1 and Theorem 2 are incorrect. **Example 1**.: _Choose \(E=\mathbb{R}\), \(\Omega=\{0\}\) and consider \(\varphi(t,\omega)x:=x+t\) for all \(t\geq 0,x\in E\), \(\omega\in\Omega\). This continuous RDS satisfies for all bounded subsets \(B\subset\mathbb{R}\)_ \[\bigcup_{T\geq 0}\bigcap_{t\geq T}\varphi(t,\vartheta_{-t}\omega)B \subseteq\bigcap_{T\geq 0}\bigcup_{t\geq T}\varphi(t,\vartheta_{-t} \omega)B\] \[\subseteq\Omega_{B}(\omega):=\bigcap_{T\geq 0}\overline{\bigcup_{t \geq T}\varphi(t,\vartheta_{-t}\omega)B}=\emptyset.\] _This RDS has no C-attractor and hence also no B-attractor._ _In particular, (i) and (ii) of Theorem 1 and Theorem 2 of the original formulation are not equivalent. This example shows in particular that also the following stronger property is not sufficient to ensure strong \(B\)-attractors:_ 1. 
_For every_ \(\varepsilon>0\) _there exists a compact subset_ \(C_{\varepsilon}\) _such that for each_ \(\delta>0\) _and each bounded and closed subset_ \(B\) _of_ \(E\) _it holds that_ \[P\left\{\Omega_{B}(\omega)\subseteq C_{\varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] The following is a corrected version of Theorem 1: The condition (ii) is modified. In addition, condition (iii) is formulated more precisely than in the original formulation. **Theorem 1** (Corrected formulation).: _The following are equivalent:_ 1. \(\varphi\) _has a strong_ \(B\)_-attractor._ 2. _For every_ \(\varepsilon>0\) _there exists a compact subset_ \(C_{\varepsilon}\) _such that for each_ \(\delta>0\) _and each bounded and closed subset_ \(B\) _of_ \(E\) _there exists a_ \(T>0\) _such that_ \[P\left\{\bigcup_{t\geq T}\varphi(t,\vartheta_{-t}\omega)B\subseteq C_{ \varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] 3. _There exists a random set_ \(K\subseteq E\times\Omega\) _such that_ \(K(\omega)\) _is_ \(P\)_-a.s. compact and_ \(K\) _attracts all bounded subsets, i.e._ \[\lim_{t\to\infty}d(\varphi(t,\vartheta_{-t}\omega)B,K(\omega))=0\quad P\text{-a.s.}\] _for every bounded subset_ \(B\)_._ **Remark 1**.: _By [1, Lemma 3.5] and its proof we have that_ \[\bigcup_{t\geq T}\varphi(t,\vartheta_{-t}\omega)B\in\mathcal{B}\otimes \mathscr{F}\quad\text{and}\quad\Omega_{B}(\omega)\in\mathcal{B}\otimes \mathscr{F}\] _for all bounded closed subsets \(B\) of \(E\). Here \(\mathcal{B}\) denotes the Borel \(\sigma\)-algebra of \(E\) and \(\bar{\mathscr{F}}\) the \(P\)-completion of \(\mathscr{F}\). Therefore, we have by the measurable projection theorem that_ \[\Omega\backslash\left\{\omega\in\Omega\ \middle|\ \bigcup_{t\geq T} \varphi(t,\vartheta_{-t}\omega)B\subseteq C_{\varepsilon}^{\delta}\right\}\] \[=p_{\Omega}\left(\left\{\bigcup_{t\geq T}\varphi(t,\vartheta_{-t} \omega)B\right\}\cap\left\{(E\backslash C_{\varepsilon}^{\delta})\times\Omega \right\}\right)\in\mathscr{F}\] _where \(p_{\Omega}:E\times\Omega\to\Omega\) denotes the projection onto \(\Omega\). Hence, the expression in (ii) of Theorem 1 is well-defined._ Proof.: The proof is similar to the proof presented in [4]. Equivalence of (i) and (iii) is proven in [5, Theorem 13], see also [2, Theorem 3.4, Remark 3.5]. We first show (i) \(\implies\) (ii): Let \(\varepsilon>0\) be arbitrary. Since \(E\) is a Polish space and the attractor \(A\) is a random variable taking values in the compact sets, there exists a compact subset \(C_{\varepsilon}\subseteq E\) such that \[P\{A(\omega)\subseteq C_{\varepsilon}\}\geq 1-\varepsilon/2 \tag{1}\] (see Crauel [3, Proposition 2.15]). Let \(B\subseteq E\) be a bounded and closed set. Then we have by (i) \[\lim_{t\to\infty}d(\varphi(t,\vartheta_{-t}\omega)B,A(\omega))=0\quad\text{$P $-a.s.},\] i.e. for every \(\delta>0\) there exists a \(T(\omega)>0\) such that for all \(t\geq T(\omega)\) we have \(d(\varphi(t,\vartheta_{-t}\omega)B,A(\omega))\leq\delta\)\(P\)-almost surely. Hence, there exists some deterministic \(T>0\) such that \[P\left\{\bigcup_{t\geq T}\varphi(t,\vartheta_{-t}\omega)B\subseteq A(\omega) ^{\delta}\right\}\geq 1-\varepsilon/2. \tag{2}\] Combining (1) and (2) implies (ii). Now we show (ii) \(\implies\) (iii): Let \((B_{k})_{k\in\mathbb{N}}\) be a sequence of bounded closed subsets of \(E\) such that \(B_{0}\subseteq B_{1}\subseteq B_{2}\dots\) and such that for any bounded subset \(B\subseteq E\) there exists some \(k\in\mathbb{N}\) such that \(B\subseteq B_{k}\). 
We modify the random attractor constructed in the proof given in [4] to ensure that it is indeed a random set: We define \(A(\omega)\) to be the (unique) smallest closed random set that contains \(\bigcup_{k\in\mathbb{N}}\Omega_{B_{k}}(\omega)\), see [5, Proposition 17]. By (ii) for all \(\varepsilon>0\) there exists a compact set \(C_{\varepsilon}\subseteq E\) such that for every \(\delta>0\) and for every \(k\in\mathbb{N}\) there exist \(T(k)>0\) such that \[P\left\{\bigcup_{t\geq T(k)}\varphi(t,\vartheta_{-t}\omega)B_{k}\subseteq C _{\varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] Using that \(C_{\varepsilon}\) is closed, this implies that \(P\{\Omega_{B_{k}}(\omega)\subseteq C_{\varepsilon}\}\geq 1-\varepsilon\). As \(\Omega_{B_{k}}(\omega)\subseteq\Omega_{B_{k+1}}(\omega)\) this implies \(P\{\bigcup_{k\in\mathbb{N}}\Omega_{B_{k}}(\omega)\subseteq C_{\varepsilon}\} \geq 1-\varepsilon\). This implies by the properties of \(A(\omega)\) given by [5, Proposition 17] that \(A(\omega)\) is a compact random set. It remains to prove that \(A(\omega)\) attracts all bounded sets. To this end consider an arbitrary bounded subset \(B\) of \(E\) and let \(k\in\mathbb{N}\) be such that \(B\subseteq B_{k}\). Let \(\varepsilon>0\) be arbitrary. By (ii) there exists for every \(m\in\mathbb{N}\) some \(T_{m}>0\) such that \[P\left\{d\left(\bigcup_{t\geq T_{m}}\varphi(t,\vartheta_{-t}\omega)B_{k},C_{ \varepsilon}\right)\leq 1/m\right\}\geq 1-\varepsilon.\] which implies \[P\left\{\sup_{t\geq T_{m}}d(\varphi(t,\vartheta_{-t}\omega)B_{k},C_{ \varepsilon})\leq 1/m\text{ for infinitely many }\mathrm{m}\right\}\geq 1-\varepsilon.\] To obtain the previous inequality we used for \(M_{m}:=\{\sup_{t\geq T_{m}}d(\varphi(t,\vartheta_{-t}\omega)B_{k},C_{ \varepsilon})\leq 1/m\}\) that \[P\left[\bigcap_{n\in\mathbb{N}}\bigcup_{m=n}^{\infty}M_{m}\right]=\lim_{n\to \infty}P\left[\bigcup_{m=n}^{\infty}M_{m}\right]\geq\limsup_{m\to\infty}P[M_{m }]\geq 1-\varepsilon.\] Hence, we have \[P\left\{\lim_{t\to\infty}d(\varphi(t,\vartheta_{-t}\omega)B_{k},C_{ \varepsilon})=0\right\}\geq 1-\varepsilon.\] Due to \(\Omega_{B_{k}}\subseteq A\) and compactness of \(C_{\varepsilon}\) this implies \[P[\lim_{t\to\infty}d(\varphi(t,\vartheta_{-t}\omega)B_{k},A(\omega))\neq 0]<\varepsilon.\] The assertion follows as the previous inequality holds for arbitrary \(\varepsilon>0\) The following is a corrected version of Theorem 2. It follows from the proof given in [4] and the corrected proof of Theorem 1. (One can use e.g. [5, Lemma 8] to verify that \(\Omega_{B}\) is invariant.) **Theorem 2** (Corrected formulation).: _The following are equivalent:_ 1. \(\varphi\) _has a strong_ \(C\)_-attractor._ 2. _For every_ \(\varepsilon>0\) _there exists a compact subset_ \(C_{\varepsilon}\) _such that for each_ \(\delta>0\) _and each compact subset_ \(B\) _of_ \(E\) _there exists a_ \(T>0\) _such that_ \[P\left\{\bigcup_{t\geq T}\varphi(t,\vartheta_{-t}\omega)B\subset C_{ \varepsilon}^{\delta}\right\}\geq 1-\varepsilon.\] 3. _There exists a random set_ \(K\subseteq E\times\Omega\) _such that_ \(K(\omega)\) _is_ \(P\)_-a.s. compact and_ \(K\) _attracts all compact subsets._
2303.04994
Distributional Vector Autoregression: Eliciting Macro and Financial Dependence
Vector autoregression is an essential tool in empirical macroeconomics and finance for understanding the dynamic interdependencies among multivariate time series. In this study, we expand the scope of vector autoregression by incorporating a multivariate distributional regression framework and introducing a distributional impulse response function, providing a comprehensive view of dynamic heterogeneity. We propose a straightforward yet flexible estimation method and establish its asymptotic properties under weak dependence assumptions. Our empirical analysis examines the conditional joint distribution of GDP growth and financial conditions in the United States, with a focus on the global financial crisis. Our results show that tight financial conditions lead to a multimodal conditional joint distribution of GDP growth and financial conditions, and easing financial conditions significantly impacts long-term GDP growth, while improving the GDP growth during the global financial crisis has limited effects on financial conditions.
Yunyun Wang, Tatsushi Oka, Dan Zhu
2023-03-09T02:38:15Z
http://arxiv.org/abs/2303.04994v1
# Distributional Vector Autoregression: ###### Abstract Vector autoregression is an essential tool in empirical macroeconomics and finance for understanding the dynamic interdependencies among multivariate time series. In this study, we expand the scope of vector autoregression by incorporating a multivariate distributional regression framework and introducing a distributional impulse response function, providing a comprehensive view of dynamic heterogeneity. We propose a straightforward yet flexible estimation method and establish its asymptotic properties under weak dependence assumptions. Our empirical analysis examines the conditional joint distribution of GDP growth and financial conditions in the United States, with a focus on the global financial crisis. Our results show that tight financial conditions lead to a multimodal conditional joint distribution of GDP growth and financial conditions, and easing financial conditions significantly impacts long-term GDP growth, while improving the GDP growth during the global financial crisis has limited effect on financial conditions. _Keywords:_ Vector Autoregression, Impulse Response Function, Multivariate Time Series, Distributional Regression _JEL Codes:_ C14, C32, C53, E17, E44 Introduction Since the seminal work of Sims (1980), vector autoregression (VAR) has emerged as an essential tool in empirical macroeconomics and finance to facilitate the basic quantitative description, forecasting, and structural analysis of multivariate time series (see Litterman, 1986; Stock and Watson, 2001). The standard VAR is built upon the mean regression for multivariate systems, often with multivariate Gaussian errors. It enables insightful structural analysis, most notably through impulse response functions (IRFs). However, sharp macroeconomic downturns triggered by the recent financial crisis and the pandemic have led to increasing interest in exploring the distributional features of multivariate time series. This line of research moves beyond the traditional focus on mean estimation and studies the the distributional effect of a shock, such as the growth-at-risk of economic activity. This study proposes a semiparametric distributional VAR model that serves as a flexible alternative for analyzing the distributional properties of multivariate time series. We introduce an estimation method that combines the distribution factorization and the distributional regression (DR) approach. Unlike traditional parametric models, our approach does not impose a global parametric assumption on either the marginal or joint distributions of the variables, conditional on their past values. The framework employs regression models that can incorporate a moderately large number of conditional variables, capturing the influence of past events on the entire response distribution. Additionally, we introduce a distributional counterpart to the commonly used IRFs to examine how the conditional distributions evolve after a perturbation in the distribution of a variable in the system at a specific point in time. This enables us to identify the heterogeneity and nonlinearity in the response dynamics. Our work builds on and contributes to several strands of literature. Firstly, the fundamental basis of our estimation method is DR, which is a semiparametric method for marginal conditional distributions. Williams and Grizzle (1972) first introduced DR to analyze ordered categorical outcomes using multiple binary regressions. 
It was later extended by Foresi and Peracchi (1995) to characterize any conditional distribution, and a local version was proposed by Hall et al. (1999). Chernozhukov et al. (2013) established the uniform validity of the inference for the entire conditional distribution. The DR approach has also been explored by Rothe and Wied (2013) and Chernozhukov et al. (2020), among others. More recently, the DR method has been extended to the conditional multivariate distributions of independent cross-sectional data. Meier (2020) considered the direct application of the DR method to estimate the joint conditional distribution, while Wang et al. (2022) proposed a method based on the DR and distribution factorization, which addressed a possible computational issue arising from the direct DR application. Our study further extends the scope of the DR approach to analyze the multivariate time-series data and their dynamic interdependencies. We also provide asymptotic properties of the proposed estimator under the \(\beta\)-mixing condition. Our paper also contributes to the literature on the structural vector autoregression (SVAR) model. Structural interpretations of VAR models require additional identifying assumptions based on institutional knowledge, economic theory, or other external constraints on the model responses (Blanchard and Quah, 1989; Rubio-Ramirez et al., 2010; Kilian and Lutkepohl, 2017). A commonly used identification scheme in the VAR literature is to assume a lower triangular form for the contemporaneous variance matrix of the endogenous variables (Sims, 1980; Primiceri, 2005). Our study emlopyes a similar structural identification scheme based on a triangular assumption. When estimating the multiperiod conditional forecasting distributions for impulse responses analysis, we adopt the concepts of local projection (Jorda, 2005) and direct multistep forecasting (McCracken and McGillicuddy, 2019). This approach allows us to estimate a distinct multivariate distribution at each forecast horizon, rather than using an iterative forecast. Recent research by Plagborg-Moller and Wolf (2021) has shown that the local projection and VAR models are conceptually equivalent in estimating the impulse response functions. This paper also contributes to the growing literature on extending the quantile regression framework to VAR models, building on previous works by Koenker and Bassett (1978) and Koenker and Xiao (2004, 2006). More specifically, White et al. (2015) developed a multivariate autoregression model of the quantiles to directly study the degree of tail interdependence among multivariate time series. Montes-Rojas (2019) suggested a reduced form quantile VAR model based on the directional quantiles framework and estimated a quantile impulse response function (QIRF) to explore the dynamic effects for a fixed collection of quantile indices. Chavleishvili and Manganelli (2019) proposed a quantile SVAR model and analyzed QIRFs based on fixed sample paths. Our proposed approach differs from existing Quantile VAR models in that we target the joint conditional distribution, while Quantile VAR is modeled by a system of quantile-regression equations, similar to mean VAR models. In some applications, our approach provides a straightforward way to analyze joint distributional features and dynamic propagation of shocks on the entire joint distribution conditional on past events. 
However, it should be noted that the conditional quantile and distribution of a continuous random variable are equivalent up to an inverse transformation. Therefore, both quantile and distributional VAR can extract similar information from multivariate time series and can be used depending on the research goal. This paper applies the proposed framework to analyze the joint distribution of the real GDP growth rate and the national financial condition index (NFCI) of the United States (U.S.), conditional upon past lagged variables. Insightful work by Adrian et al. (2021) previously studied the same dataset, using a nonparametric kernel framework to estimate the joint conditional distribution and a density IRF. The key findings from this work suggest that the joint distribution is unimodal during normal times but exhibits clear multimodality during the Great Recession. Adrian et al. (2019) studied the distribution of GDP growth given the past financial and economic conditions using quantile regressions, suggesting that the distribution exhibits much more variation over time in the median and lower tail compared to the upper tail. Our approach complements theirs by allowing for more lagged variables as conditional regressors, which is not possible in nonparametric approaches due to the curse of dimensionality. Our empirical results confirm their key findings even after conditioning more lagged variables. We also find multimodality only appears in short-term forecasts and resolves within a few quarters with relatively mild tightness. We further investigate the DIRFs for the possible policy effect on the moments, quantiles, and entire distributions of all variables during the Great Recession. Our findings indicate that if the policies implemented in 2008:Q3 had been successful in preventing financial tightening in 2008:Q4, the likelihood of adverse real GDP growth and tight NFCI would have been improved in 2009:Q1-Q2 and reduced in 2009:Q3-Q4, which is consistent with the findings of Adrian et al. (2021). Additional evidence from the mixed-frequency model suggests that an impulse on the NFCI has a long-term effect on the NFCI and real GDP growth. However, our analysis of a distributional impulse on the real GDP growth suggests a different result that limiting the likelihood of negative real GDP growth in 2008:Q4 only increases the likelihood of positive economic activity in the short run but has almost no effect on the NFCI even in the subsequent quarter. The remainder of this paper is organized as follows. Section 2 introduces the proposed multivariate model and DIRF. Section 3 explains the estimation procedures and presents the estimators' asymptotic results. In Section 4, we apply our approach to study the U.S. time series data on macroeconomic and financial conditions. Section 5 concludes the paper. The proofs of the theoritical results and additional empirical analysis are provided in Appendix. ## 2 Multivariate Distributional Regression We introduce a semiparametric regression approach for conditional multivariate distributions and explain the DIRF, which describes the dynamic effect of a shock on the entire distribution of multivariate time series. In what follows, \(\mathbb{R}\) denotes the the set of real numbers, and \(1\!\!1\{\cdot\}\) denotes the indicator function taking the value \(1\) if the condition inside \(\{\cdot\}\) is satisfied and \(0\) otherwise. We use \(\ell^{\infty}(D)\) to refer to the collection of all real-valued bounded functions defined on an arbitrary set \(D\). 
We denote by \(\|\cdot\|\) the Euclidean norm for vectors and use \(a^{\top}\) to represent the transpose of a vector \(a\). ### Model Suppose that we observe a stationary time series \(\{(Y_{t},Z_{t})\}_{t=1}^{T}\) with a sample size of \(T\), where \(Y_{t}=(Y_{1t},\ldots,Y_{Jt})^{\top}\) is a \(J\)-dimensional outcome variables and \(Z_{t}\) is a \(k\times 1\) vector of conditioning variables. We denote supports \(\mathcal{Y}:=\times_{j=1}^{J}\mathcal{Y}_{j}\subset\mathbb{R}^{J}\) and \(\mathcal{Z}\subset\mathbb{R}^{k}\) for \(Y_{t}\) and \(Z_{t}\), respectively, where \(\mathcal{Y}_{j}\) denotes the support of \(Y_{jt}\). Given the multivariate time series, the objective is to estimate the conditional joint distribution \(F_{Y_{t}|Z_{t}}(y|z)\) of \(Y_{t}\) given \(Z_{t}=z\), for \((y,z)\in\mathcal{Y}{\times}\mathcal{Z}\). This study mainly focuses on the VAR setup wherein \(Z_{t}\) comprises only lagged dependent variables, while our framework can also be applied to other cases, such as VAR models with proxy or instrumental variables (Bloom, 2009; Jurado et al., 2015). We propose a semiparametric estimation method for joint conditional distributions. We first apply distribution factorization, which expresses the joint conditional distribution using a collection of marginal conditional distributions through a hierarchical structure, and then estimate conditional marginal distributions. Specifically, we define \[X_{1t}:=(1,Z_{t}^{\top})^{\top}\ \ \text{and}\ \ X_{jt}:=(1,Z_{t}^{\top},Y_{1t}, \ldots,Y_{j-1,t})^{\top}\ \ \text{for}\ j=2,\ldots,J,\] with supports \(\mathcal{X}_{j}\subset\mathbb{R}^{k+j}\) for \(j=1,\ldots,J\). Let \(F_{Y_{jt}|X_{jt}}\) be the marginal conditional distribution of \(Y_{jt}\) given \(X_{jt}\). Subsequently, the distribution factorization yields the existence of a transformation \(\rho:\times_{j=1}^{J}\ell^{\infty}(\mathcal{Y}_{j}{\times}\mathcal{X}_{j}) \rightarrow\ell^{\infty}(\mathcal{Y}{\times}\mathcal{Z})\), given by \[F_{Y_{t}|Z_{t}}=\rho(F_{Y_{1t}|X_{1t}},\ldots,F_{Y_{Jt}|X_{Jt}}). \tag{1}\] This expression is useful because a multivariate joint distribution can be obtained by separately modeling these \(J\) marginal conditional distributions. There are many different transformations for the distribution factorization, each of which is mathematically valid and relevant depending on its empirical applications. This is because the transformation \(\rho(\cdot)\) in (1) depends on the ordering of \(Y_{t}\) coordinates. Selecting a different permutation-based ordering yields an alternative transformation. We shall discuss this ordering issue later in Section 2.2 for the purpose of structural analysis. In the following example, we illustrate the transformation in the bivariate outcome case. Example 1.Let \(Y_{t}=(Y_{1t},Y_{2t})^{\top}\) and the predetermined variables be up to two lags or \(Z_{t}=(Y_{t-1}^{\top},Y_{t-2}^{\top})^{\top}\). The conditional joint distribution of \(Y_{t}\) given \(Z_{t}\) is then characterized via the following distribution factorization: \[F_{Y_{t}|Z_{t}}=\int F_{Y_{2t}|X_{2t}}dF_{Y_{1t}|X_{1t}},\] where \(F_{Y_{1t}|X_{1t}}\) and \(F_{Y_{2t}|X_{2t}}\) are conditional marginal distributions with \(X_{1t}=(1,Z_{t}^{\top})^{\top}\) and \(X_{2t}=(1,Z_{t}^{\top},Y_{1t})^{\top}\). When studying an univariate conditional distribution, a common method is to assume an appropriate parametric distribution based on sample information, often with the conditioning variables affecting only its location or scale parameters. 
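As an illustration of how the factorization in (1) and Example 1 can be evaluated numerically once the marginal conditional distributions are available, the following sketch computes the bivariate joint conditional CDF on a grid by integrating the second conditional marginal against the increments of the first. This is a minimal sketch and not part of the original paper: the callables `F1` and `F2` stand in for already-estimated conditional CDFs, and all names are hypothetical. The estimation of each marginal is taken up next.

```python
import numpy as np

def joint_cdf_bivariate(F1, F2, y1_grid, y1, y2, z):
    """Approximate F_{Y|Z}(y1, y2 | z) = int_{u <= y1} F_{Y2|X2}(y2 | z, u) dF_{Y1|X1}(u | z)
    on a discrete grid, following the factorization in Example 1.

    F1(u, z)     : conditional CDF of Y1 at u given Z = z   (hypothetical callable)
    F2(y2, z, u) : conditional CDF of Y2 at y2 given Z = z and Y1 = u
    y1_grid      : increasing grid covering the support of Y1
    """
    cdf1 = np.array([F1(u, z) for u in y1_grid])      # F_{Y1|X1} on the grid
    mass = np.diff(np.concatenate(([0.0], cdf1)))      # probability mass per grid cell
    mask = y1_grid <= y1                                # integrate only up to y1
    f2 = np.array([F2(y2, z, u) for u in y1_grid])     # F_{Y2|X2}(y2 | z, u)
    return float(np.sum(f2[mask] * mass[mask]))
```

For a sufficiently fine `y1_grid`, this Riemann-Stieltjes sum approximates the integral in Example 1 arbitrarily well.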
However, when observations exhibit complex statistical features, such as long-tails and extreme skewness, selecting an appropriate parametric model becomes challenging. In this study, we apply the DR method, which does not impose restrictive global parametric assumptions. The DR approach characterizes the entire conditional distribution of an outcome variable, conditional upon a vector of covariates, by fitting a collection of parametric linear-index models over the outcome locations. Specifically, for the estimation of the \(j\)-th conditional distribution \(F_{Y_{jt}|X_{jt}}\), we consider, for any \((y_{j},x_{j})\in\mathcal{Y}_{j}\times\mathcal{X}_{j}\), \[F_{Y_{jt}|X_{jt}}(y_{j}|x_{j})=\Lambda\big{(}\phi_{j}(x_{j})^{\top}\theta_{j}( y_{j})\big{)}, \tag{2}\] where \(\Lambda:\mathbb{R}\rightarrow[0,1]\) is a known link function such as a logistic or probit function1, \(\phi_{j}:\mathcal{X}_{j}\mapsto\mathbb{R}^{d_{j}}\) is a transformation and \(\theta_{j}(y_{j})\) is a \(d_{j}\times 1\) vector of unknown parameters specific to the location \(y_{j}\). The entire conditional distribution of \(Y_{jt}\) is characterised by considering different locations over the support \(\mathcal{Y}_{j}\), and the set of marginal conditional distributions results in a joint conditional distribution by the transformation in (1). The proposed model is sufficiently flexible in its ability to incorporate covariates and set regression coefficients for each outcome location. Footnote 1: In practice, for each \(Y_{jt}\), one can choose different link functions, while we use the same notation for simplification. For sufficiently rich transformation of the covariates, one can approximate the conditional distribution function arbitrarily well without extra concern about the choice of the link function. ### Distributional Impulse Response Function IRFs are standard structural analysis tools that characterize the dynamic propagation of contemporaneous shocks on multivariate time series in empirical macroeconomics and finance. Sims (1980) originally proposed IRFs using the moving average representation of VAR, whereas Jorda (2005) introduced the local projection approach, which evaluates the dynamic effects of shocks under the multistep ahead forecast framework. Plagborg-Moller and Wolf (2021) recently proved that the local projections and VARs estimate the same impulse responses in population. Unlike the VAR literature, which has traditionally considered mean IFRs, the recent literature explores the dynamic effect of a shock on the entire distribution using QIRFs (Montes-Rojas, 2019; Chavleishvili and Manganelli, 2019) and density IRF (Adrian et al., 2021). We consider a local projection approach by integrating the conditional distribution of observable variables with respect to a counterfactual distribution to develop the DIRFs. The proposed approach can be viewed as a dynamic extension of Chernozhukov et al. (2013), who considered the counterfactual unconditional distributions for program evaluation with cross-sectional observations. Given a non-negative integer \(h\), the baseline joint distribution \(F_{Y_{t+h}|Z_{t}}\) of \(h\)-ahead outcomes \(Y_{t+h}\) conditional on \(Z_{t}\) is written as \[F_{Y_{t+h}|Z_{t}}=\int F_{Y_{t+h}|Y_{t},Z_{t}}dF_{Y_{t}|Z_{t}},\] where \(F_{Y_{t}|Z_{t}}\) and \(F_{Y_{t+h}|Y_{t},Z_{t}}\) are two different conditional distributions of the observed variables that are identified from the data and characterized in a manner similar to the proposed semiparametric approach. 
When estimating \(F_{Y_{t+h}|Y_{t},Z_{t}}\), the concept of local projection is adopted, that is, we estimate different models for different horizons \(h\) by regressing \(Y_{t+h}\) on \((Y_{t},Z_{t})\) with the DR approach. We consider a scenario where an alternative conditional distribution, \(G_{Y_{t}|Z_{t}}\), is used instead of the actual distribution \(F_{Y_{t}|Z_{t}}\). Throughout the paper, we assume that the counterfactual distribution \(G_{Y_{t}|Z_{t}}\) is supported by a subset of \(\mathcal{Y}_{t}\) for identification purposes. Under the scenario with the distribution \(G_{Y_{t}|Z_{t}}\), the counterfactual conditional joint distribution is defined as \[F^{*}_{Y_{t+h}|Z_{t}}:=\int F_{Y_{t+h}|Y_{t},Z_{t}}dG_{Y_{t}|Z_{t}}.\] In addition, the baseline and counterfactual marginal distributions of the \(j\)-th variable \(Y_{j,t+h}\) can be defined in the similar way with \[F_{Y_{j,t+h}|Z_{t}}=\int F_{Y_{j,t+h}|Y_{t},Z_{t}}dF_{Y_{t}|Z_{t}}\quad\text{ and }\quad F^{*}_{Y_{j,t+h}|Z_{t}}:=\int F_{Y_{j,t+h}|Y_{t},Z_{t}}dG_{Y_{t}|Z_{t}}, \tag{3}\] where the conditional distribution \(F_{Y_{j,t+h}|Y_{t},Z_{t}}\) can be modeled using the univariate DR approach by regressing \(Y_{j,t+h}\) on \((Y_{t},Z_{t})\). We consider a distributional change of only one element of \(Y_{t}\), which benefits the analysis and interpretation in empirical applications, to set up the counterfactual joint distribution \(G_{Y_{t}|Z_{t}}\). For instance, we replace the actual \(i\)-th marginal distribution \(F_{Y_{it}|X_{it}}\) with a counterfactual marginal distribution \(G_{Y_{it}|X_{it}}\); thus, the counterfactual joint distribution is given by \(G_{Y_{t}|Z_{t}}=\rho(F_{Y_{1t}|X_{1t}},\ldots,G_{Y_{it}|X_{it}},\ldots,F_{Y_{ lt}|X_{Jt}})\) under (1). When applying the proposed semi-parametric approach to characterize \(G_{Y_{t}|Z_{t}}\), the ordering of the observables is required for distribution factorization to identify the shock. Under the SVAR setting, one standard identification scheme is the recursive short-run restriction, which assumes a lower triangular form for the contemporaneous covariance matrix of the endogenous variables (Sims, 1980). Our triangular assumption is similar to that of the structural identification scheme. Other identification strategies in the SVAR literature include the long-run restriction (Blanchard and Quah, 1989), sign restriction (Antolin-Diaz and Rubio-Ramirez, 2018), and identification using instrumental variables (Jurado et al., 2015). We formally define the DIRF as follows. **Definition 1**.: _The distributional impulse response function that describes the effect of the shock on the joint distribution of \(Y_{t}\) after \(h\) periods is defined as,_ \[DIR_{h}:=F^{*}_{Y_{t+h}|Z_{t}}-F_{Y_{t+h}|Z_{t}}. \tag{4}\] _The distribution impulse response function of the \(j\)-th variable after \(h\) periods is defined as_ \[DIR_{j,h}:=F^{*}_{Y_{j,t+h}|Z_{t}}-F_{Y_{j,t+h}|Z_{t}}. \tag{5}\] The proposed framework is general in several ways. First, standard impulse response analysis is often conducted by evaluating the effects of a one-unit change on \(h\)-ahead outcomes. This is a special case of the counterfactual scenarios in which the counterfactual joint distribution \(G_{Y_{t}|Z_{t}}\) can take a degenerate distribution or a point mass at one value for a variable of interest. 
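Before discussing identification further, it may help to see how the baseline and counterfactual distributions entering (4) can be evaluated once the conditional CDFs have been estimated. The sketch below is illustrative only and is not the authors' code: it approximates the integrals over \(Y_{t}\) by probability masses on a grid, where the masses come either from the actual \(F_{Y_{t}|Z_{t}}\) (baseline) or from the counterfactual \(G_{Y_{t}|Z_{t}}\); all names are hypothetical.

```python
import numpy as np

def h_step_cdf(F_ahead, grid_points, weights, y_eval, z):
    """Approximate  int F_{Y_{t+h}|Y_t,Z_t}(y_eval | y_t, z) dQ(y_t | z)  on a grid.

    F_ahead(y_eval, y_t, z) : estimated conditional CDF of Y_{t+h} given (Y_t, Z_t)
    grid_points             : grid values for Y_t (tuples in the multivariate case)
    weights                 : probability mass Q assigns to each grid point; Q is either
                              the actual F_{Y_t|Z_t} (baseline) or the counterfactual G_{Y_t|Z_t}
    """
    vals = np.array([F_ahead(y_eval, y_t, z) for y_t in grid_points])
    return float(np.sum(vals * np.asarray(weights)))

def dir_h(F_ahead, grid_points, base_w, cf_w, y_eval, z):
    """Distributional impulse response (4): counterfactual minus baseline CDF at y_eval."""
    return (h_step_cdf(F_ahead, grid_points, cf_w, y_eval, z)
            - h_step_cdf(F_ahead, grid_points, base_w, y_eval, z))
```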
Next, when estimating the DIRF, the identification restriction is only required for constructing the counterfactual distribution \(G_{Y_{t}|Z_{t}}\) in order to identify the shock, but it is unnecessary for \(F_{Y_{t+h}|Y_{t},Z_{t}}\). Finally, given distributional information, other statistics of interest, such as the mean, standard deviation, quantiles, can be easily obtained. Therefore, the proposed DIRF is sufficiently flexible for researchers to investigate other impulse response functions generally considered in the literature. Specifically, the mean IRF for the \(j\)-th variable is given by \[MIRF_{j,h}:=\int y_{j,t+h}dF^{*}_{Y_{j,t+h}|Z_{t}}(y_{j,t+h})-\int y_{j,t+h}dF _{Y_{j,t+h}|Z_{t}}(y_{j,t+h}),\] and the \(\tau\)-th quantile IRF of the \(j\)-th element for \(\tau\in(0,1)\) is given by \[QIRF_{j,h}(\tau):=F^{*-1}_{Y_{j,t+h}|Z_{t}}(\tau)-F^{-1}_{Y_{j,t+h}|Z_{t}}( \tau),\] where \(F^{*-1}_{Y_{j,t+h}|Z_{t}}(\cdot)\) and \(F^{-1}_{Y_{j,t+h}|Z_{t}}(\cdot)\) are the quantile functions as inverse of the \(j\)-th variable's distribution functions \(F^{*}_{Y_{j,t+h}|Z_{t}}(\cdot)\) and \(F_{Y_{j,t+h}|Z_{t}}(\cdot)\), respectively. Example 1 (continued).We illustrate how the proposed framework works in the bivariate case. First, the joint baseline distribution \(F_{Y_{t+h}|Z_{t}}\) is written as \[F_{Y_{t+h}|Z_{t}}(y_{t+h}|z_{t})=\int F_{Y_{t+h}|Y_{t},Z_{t}}(y_{t+h}|y_{t},z_ {t})dF_{Y_{t}|Z_{t}}(y_{t}|z_{t}),\] Letting \(G_{Y_{2t}|X_{2t}}\) be a counterfactual marginal distribution for \(Y_{2t}\) given \(X_{2t}\), we obtain the counterfactual joint distribution at time \(t\): \[G_{Y_{t}|Z_{t}}=\int F_{Y_{1t}|X_{1t}}dG_{Y_{2t}|X_{2t}}.\] Then, the joint counterfactual distribution after \(h\) periods is given by \[F^{*}_{Y_{t+h}|Z_{t}}(y_{t+h}|z_{t})=\int F_{Y_{t+h}|Y_{t},Z_{t}}(y_{t+h}|y_{t},z_{t})dG_{Y_{t}|Z_{t}}(y_{t}|z_{t}).\] The difference between \(F_{Y_{t+h}|Z_{t}^{*}}\) and \(F_{Y_{t+h}|Z_{t}}\) leads to \(DIR_{h}\) in (4). As a special case, we can set the counterfactual marginal distribution to be a degenerate distribution with \(\Pr(Y_{2t}=y_{2t}^{*}|X_{2t})=1\). In this case, the counterfactual distribution can be reduced to \(\int F_{Y_{t+h}|Y_{1t},Y_{2t},Z_{t}}(y_{t+h}|y_{1t},y_{2t}^{*},z_{t})dF_{Y_{1 t}|X_{1t}}(y_{1t}|x_{1t})\). ## 3 Estimation and Asymptotic Properties We introduce the estimation procedures of the conditional distribution and the DIRF and then provide the asymptotic properties for the estimators of conditional distribution functions and their transformations. ### Estimation For estimating the multivariate joint distributions, the primary step is to estimate a collection of univariate conditional distributions using the DR approach. In this study, we estimate Model (2) using a binary choice model for the binary outcome \(\mbox{1l}\{Y_{jt}\leq y_{j}\}\) with \(y_{j}\in\mathcal{Y}_{j}\) under the maximum likelihood framework for each \(j\in\{1,\ldots,J\}\). 
The estimators of the unknown parameters are defined as the maximizer of a log-likelihood function as follows: \[\widehat{\theta}_{j}(y_{j})=\arg\max_{\theta_{j}\in\Theta_{j}}\widehat{\ell}_{ y,j}(\theta_{j}), \tag{6}\] where \(\Theta_{j}\subset\mathbb{R}^{d_{j}}\) is the parameter space and the log likelihood is given by \[\widehat{\ell}_{y,j}(\theta_{j}):=\frac{1}{T}\sum_{t=1}^{T}\big{[} \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}\{Y_{jt}\leq y_{j}\}\ln\Lambda \big{(}\phi_{j}(X_{jt})^{\top}\theta_{j}\big{)}+\leavevmode\hbox{\small 1 \kern-3.8pt\normalsize 1}\{Y_{jt}>y_{j}\}\ln\big{(}1-\Lambda\big{(}\phi_{j}(X_{jt})^{ \top}\theta_{j}\big{)}\big{)}\big{]}.\] The conditional distribution estimator of \(Y_{jt}\) given \(X_{jt}=x_{j}\) is given by \[\widehat{F}_{Y_{jt}|X_{jt}}(y_{j}|x_{j}):=\Lambda\big{(}\phi_{j} (x_{j})^{\top}\widehat{\theta}_{j}(y_{j})\big{)}. \tag{7}\] In practice, a sufficient number of discrete points of the support \(\mathcal{Y}_{j}\) can be selected to obtain the estimator \(\widehat{F}_{Y_{jt}|X_{jt}}(y_{j}|x_{j})\). One important property is that the map \(y_{j}\mapsto F_{Y_{jt}|X_{jt}}(y_{j}|x_{j})\) is non-decreasing by definition. However, the estimated distribution function \(\widehat{F}_{Y_{jt}|X_{jt}}(\cdot|x_{j})\) does not necessarily satisfy monotonicity in finite samples. We monotonize the conditional distribution estimators at different locations using the rearrangement method proposed by Chernozhukov et al. (2009). This procedure can yield finite-sample improvement (for instance, see Chetverikov et al., 2018) and permit a straightforward application of the functional delta method when transforming the estimated distributions using Hadamard differentiable maps. Given the estimators of marginal conditional distributions and the transformation in (1), the conditional joint distribution can then be estimated as \[\widehat{F}_{Y_{t}|Z_{t}}=\rho(\widehat{F}_{Y_{1t}|X_{1t}},\widehat{F}_{Y_{2t} |X_{2t}},\ldots\widehat{F}_{Y_{Jt}|X_{Jt}}). \tag{8}\] For different horizons \(h\), we estimate the conditional joint distribution \(F_{Y_{t+h}|Y_{t},Z_{t}}\) in the similar way and denote the estimator by \(\widehat{F}_{Y_{t+h}|Y_{t},Z_{t}}\). If we consider a marginal counterfactual distribution \(G_{Y_{it}|X_{it}}\) for the \(i\)-th element \(Y_{it}\), the joint counterfactual distribution \(G_{Y_{t}|Z_{t}}\) can be estimated with \(\widehat{G}_{Y_{t}|Z_{t}}=\rho(\widehat{F}_{Y_{1t}|X_{1t}},\ldots,G_{Y_{it}|X _{it}},\ldots,\widehat{F}_{Y_{Jt}|X_{Jt}}).\) These estimated distributions enable us to obtain the estimator of the actual and conterfactual joint distributions:2 Footnote 2: Alternanvily, we can estimate the actual distribution \(\widehat{F}_{Y_{t+h}|Z_{t}}\) directly using the transformation in equation (8). To maintain comparability between the estimators of the actual and counterfactual distributions, we use the same estimators for both distributions for our empirical application, as described in the main text. \[\widehat{F}_{Y_{t+h}|Z_{t}}=\int\widehat{F}_{Y_{t+h}|Y_{t},Z_{t}}d\widehat{F}_ {Y_{t}|Z_{t}},\ \ \text{and}\ \ \widehat{F}_{Y_{t+h}|Z_{t}}^{*}=\int\widehat{F}_{Y_{t+h}|Y_{t},Z_{t}}d\widehat{G }_{Y_{t}|Z_{t}}.\] We apply the univariate DR approach to estimate the conditional distribution \(F_{Y_{j,t+h}|Y_{t},Z_{t}}\), with the estimator denoted as \(\widehat{F}_{Y_{j,t+h}|Y_{t},Z_{t}}\). 
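To fix ideas, the estimation in (6)-(7) amounts to fitting one binary regression per threshold and then monotonizing across thresholds. The sketch below is a simplified illustration rather than the authors' implementation: it uses scikit-learn's (essentially unpenalized) logistic regression as a stand-in for the maximum-likelihood step with a logistic link, and sorts the fitted probabilities to perform the rearrangement of Chernozhukov et al. (2009); all function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_dr(y, X, thresholds, C=1e6):
    """For each threshold c, fit P(Y <= c | X) = Lambda(phi(X)' theta(c)) by a
    logistic regression on the indicator 1{Y <= c}; compare (2) and (6)."""
    models = {}
    for c in thresholds:
        d = (y <= c).astype(int)
        if d.min() == d.max():                 # degenerate indicator: CDF is 0 or 1 here
            models[c] = float(d[0])
        else:
            models[c] = LogisticRegression(C=C, max_iter=1000).fit(X, d)
    return models

def dr_cdf(models, thresholds, x):
    """Estimated conditional CDF at covariate value x, monotonized by rearrangement
    (sorting the fitted probabilities across the threshold grid)."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    raw = np.array([m if isinstance(m, float) else m.predict_proba(x)[0, 1]
                    for m in (models[c] for c in thresholds)])
    return np.clip(np.sort(raw), 0.0, 1.0)     # non-decreasing in the threshold
```

The same routine, applied to \(Y_{j,t+h}\) regressed on \((Y_{t},Z_{t})\), gives the horizon-specific conditional distributions used for the impulse-response analysis.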
The actual and conterfactual marginal distributions of \(Y_{j,t+h}\) can be estimated in a similar way by replacing the conditional distributions with the estimated counterparts in (3). Finally, the \(DIR_{h}\) and \(DIR_{j,h}\) can be estimated by the difference between the corresponding baseline and counterfactual distributions. ### Asymptotic Properties In this subsection, we provide the asymptotic properties of the conditional joint distribution estimators and the estimator of DIRFs. The proofs of all theorems in this subsection are provided in Appendix A. Let \(\ell_{y,j}(\cdot)\) be the population log-likelihood; we define the true parameters \(\theta_{j}(y_{j})\) as the solution to the following maximization problem: \[\theta_{j}(y_{j})=\arg\max_{\theta_{j}\in\Theta_{j}}\ell_{y,j}( \theta_{j}). \tag{9}\] A vector of the true parameters related to \(J\) conditional marginal distributions and a vector of the corresponding estimators are respectively given by \[\theta(y):=\big{(}\theta_{1}(y_{1})^{\top},\theta_{2}(y_{2})^{ \top},\ldots,\theta_{J}(y_{J})^{\top}\big{)}^{\top}\ \ \text{and}\ \ \widehat{\theta}(y):=\big{(}\widehat{\theta}_{1}(y_{1})^{\top},\widehat{ \theta}_{2}(y_{2})^{\top},\ldots,\widehat{\theta}_{J}(y_{J})^{\top}\big{)}^{ \top}.\] Also, let \(\Theta:=\times_{j=1}^{J}\Theta_{j}\) be the parameter space for \(\theta(y)\) and \(\widehat{\theta}(y)\). We denote the second derivative of the population log-likelihood at the true parameters by \(H_{j}(y_{j}):=\nabla^{2}\ell_{y,j}\big{(}\theta_{j}(y_{j})\big{)}\). The following assumptions are imposed to obtain the asymptotic results: **Assumptions** 1. The time series \(\{(Y_{t},Z_{t})\}_{t=1}^{T}\) are strictly stationary \(\beta\)-mixing or absolutely regular process, with \(\beta\)-mixing coefficients \(\{\beta_{k}\}\) satisfying the condition that \(\sum_{k>0}\beta_{k}<\infty\). The supports \(\mathcal{Y}\) and \(\mathcal{Z}\) are compact subsets of \(\mathbb{R}^{J}\) and \(\mathbb{R}^{k}\), respectively. 2. The link function \(\Lambda(\cdot)\) is twice continuously differentiable with its first derivative \(\lambda(\cdot)\). The log-likelihood function \(\theta_{j}\mapsto\widehat{\ell}_{y,j}(\theta_{j})\) is uniformly concave for any \(y_{j}\in\mathcal{Y}_{j}\) with \(j=1,\ldots,J\). 3. The true parameters \(\theta_{j}(\cdot)\) are contained in the interior of the compact parameter space \(\Theta_{j}\) for every \(j=1,\ldots,J\). 4. The conditional density function \(f_{Y_{jt}|X_{jt}}(y_{j}|x_{j})\) is uniformly bounded in \(\mathcal{Y}_{j}{\times}\mathcal{X}_{j}\) and continuous in \(\mathcal{Y}_{j}\) for every \(j\in\{1,\ldots,J\}\). Assumption A1 requires that the time series are \(\beta\)-mixing sequences, which allows for heteroscedasticity and serial dependence. In the theorems presented below, we establish the weak convergence of the empirical processes by utilizing the result in Rio (1998). The requirement of this result is that \(\beta\)-mixing sequences satisfy \(\sum_{k>0}\beta_{k}<\infty\), which is a weaker condition than the one considered in Arcones and Yu (1994), where it is required that \(\beta_{k}=O(k^{-c})\) for some \(c>1\). Furthermore, to obtain the limit processes, a compact support is required, which can be satisfied in our empirical application. Assumption A2 ensures that standard optimization procedures based on derivatives can be used to obtain the maximum likelihood estimators, and that both the logit and probit links satisfy this assumption. 
Additionaly, this assumption implies that the maximum eigenvalue of the Hessian matrix \(H_{j}(y_{j})\) is strictly negative uniformly over its support (see Boyd et al., 2004), which with Assumption A3 guarantees that the true parameters exist uniquely. Even when Model (2) is misspecified, under assumptions A2 and A3, the true parameters can be considered as pseudo-parameters satisfying the first-order condition, \(\nabla\ell_{y,j}(\theta_{j}(y_{j}))=0\); thus, the parameter estimators can be interpreted under the quasi-likelihood framework for each \(y_{j}\in\mathcal{Y}_{j}\)(see Huber, 1967; White, 1982). Assumption A4 is necessary to obtain the limit process of the estimators of the joint conditional distribution and the DIRFs over the supports for statistical inference. For the maximum likelihood estimation in (6), we use the first derivative of the objective function \(\nabla\widehat{\ell}_{y,j}(\theta_{j})\) for each \(j=1,\ldots,J\). We define, for \((\theta,y)\in\Theta\times\mathcal{Y}\), \[\widehat{\Psi}_{y}(\theta):=\big{[}\widehat{\Psi}_{y,1}(\theta_{1})^{\top} \ldots,\widehat{\Psi}_{y,J}(\theta_{J})^{\top}\big{]}^{\top},\] where \(\widehat{\Psi}_{y,j}(\theta_{j}):=\sqrt{T}\nabla\widehat{\ell}_{y,j}(\theta_{ j})\) is written as \[\widehat{\Psi}_{y,j}(\theta_{j})=\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\big{[} \Lambda\big{(}\phi_{j}(X_{jt})^{\top}\theta_{j}\big{)}-1\!\!1\{Y_{jt}\leq y_{j} \}\big{]}R\big{(}\phi_{j}(X_{jt})^{\top}\theta_{j}\big{)}\phi_{j}(X_{jt}),\] with \(R(u):=\lambda(u)/\big{\{}\Lambda(u)[1-\Lambda(u)]\big{\}}\). In the below theorem, we obtain the joint limit process of the DR estimators. **Theorem 1**.: _Suppose that Assumptions A1-A3 hold. Then, we have_ \[\sqrt{T}\big{(}\widehat{\theta}(\cdot)-\theta(\cdot)\big{)}\rightsquigarrow\mathbb{ B}(\cdot)\quad\text{in}\quad\times_{j=1}^{J}\ell^{\infty}(\mathcal{Y}_{j})^{d_{j}}\] _where \(\mathbb{B}(\cdot)\) is a \(\sum_{j=1}^{J}d_{j}\)-dimensional tight mean-zero Gaussian process over \(\mathcal{Y}\). For any \(y,y^{\prime}\in\mathcal{Y}\), the covariance kernel of \(\mathbb{B}(\cdot)\) is given by \(H(y)^{-1}\Sigma(y,y^{\prime})H(y^{\prime})^{-1}\), where \(H(y):=\operatorname{diag}\bigl{(}\{H_{j}(y_{j})\}_{j=1}^{J}\bigr{)}\) and \(\Sigma(y,y^{\prime}):=\lim_{T\to\infty}\mathbb{E}[\widehat{\Psi}_{y}\big{(} \theta(y)\big{)}\widehat{\Psi}_{y^{\prime}}\big{(}\theta(y^{\prime})\big{)}^{ \top}]\)._ The result in Theorem 1 shows that the covariance kernel exhibits a sandwich form owing to possible miss-specification under the quasi-likelihood framework. Additionally, the covariance kernel depends on the long-run covariance matrix in the presence of serial dependence. Since the limit process depends on unknown nuisance parameters, a moving block bootstrap (Kunsch, 1989; Liu and Singh, 1992), stationary bootstrap (Politis and Romano, 1994) or sub-sampling (Politis et al., 1997) can be used for practical inference. We introduce a map from the DR parameters to a collection of the marginal conditional distributions. 
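Since the covariance kernel in Theorem 1 involves unknown nuisance parameters and the long-run covariance, the text points to block resampling for practical inference. Before turning to the map used in the functional delta method, here is a minimal sketch of moving-block resampling (Kunsch, 1989); it is an illustration under the stated stationarity assumptions rather than a prescribed procedure, and the block length is a user choice.

```python
import numpy as np

def moving_block_resample(data, block_length, rng=None):
    """One moving-block bootstrap resample of a (T x d) time-series array:
    concatenate randomly chosen overlapping blocks of length `block_length`
    until T observations are collected, then truncate to length T."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    T = data.shape[0]
    n_blocks = int(np.ceil(T / block_length))
    starts = rng.integers(0, T - block_length + 1, size=n_blocks)
    return np.concatenate([data[s:s + block_length] for s in starts], axis=0)[:T]

# usage sketch: re-estimate the DR coefficients (or the DIRFs) on each resample
# to form pointwise or uniform confidence bands.
# resamples = [moving_block_resample(series, block_length=8, rng=k) for k in range(500)]
```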
For each \(j=1,\ldots,J\), we define a map \(\varphi_{j}:\mathbb{D}_{\varphi_{j}}\subset\mathbb{D}_{j}:=\ell^{\infty}(\mathcal{ Y}_{j})^{d_{j}}\mapsto\mathbb{S}_{\varphi_{j}}\subset\ell^{\infty}(\mathcal{X}_{j} \times\mathcal{Y}_{j})\), as \[\varphi_{j}(b_{j})(x_{j},y_{j}):=\Lambda\big{(}\phi_{j}(x_{j})^{\top}b_{j}(y_{j })\big{)}.\] Let \(\varphi(b):=\big{[}\varphi_{1}(b_{1}),\ldots,\varphi_{J}(b_{J})\big{]}^{\top}\) for \(b=(b_{1}^{\top},\ldots,b_{J}^{\top})^{\top}\in\mathbb{D}_{\varphi}\subset \mathbb{D}\), where \(\mathbb{D}_{\varphi}:=\times_{j=1}^{J}\mathbb{D}_{\varphi_{j}}\) and \(\mathbb{D}:=\times_{j=1}^{J}\mathbb{D}_{j}\). Then, using the map \(\varphi:\mathbb{D}_{\varphi}\mapsto\mathbb{S}_{\varphi}:=\times_{j=1}^{J} \mathbb{S}_{\varphi_{j}}\), we can write \[\varphi(\widehat{\theta})=(\widehat{F}_{Y_{1t}|X_{1t}},\widehat{F}_{Y_{2t}|X_ {2t}},\ldots,\widehat{F}_{Y_{Jt}|X_{Jt}})^{\top}\text{ and }\varphi(\theta)=(F_{Y_{1t}|X_{1t}},F_{Y_{2t}|X_{2t}},\ldots,F_{Y_{Jt}|X_{Jt }})^{\top}.\] The map \(\varphi\) is Hadamard differentiable at \(\theta\in\mathbb{D}_{\varphi}\) tangentially to \(\mathbb{D}\) with its Hadamard derivative, given by, \[\varphi^{\prime}_{\theta(\cdot)}(b):=\big{[}\varphi^{\prime}_{1,\theta_{1}( \cdot)}(b_{1}),\ldots,\varphi^{\prime}_{J,\theta_{J}(\cdot)}(b_{J})\big{]}^{ \top},\] where \(\varphi^{\prime}_{j,\theta_{j}(\cdot)}(b_{j})(x_{j},y_{j}):=\lambda\big{(} \phi_{j}(x_{j})^{\top}\theta_{j}(y_{j})\big{)}\phi_{j}(x_{j})^{\top}b_{j}(y_{j })\) for \(j=1,\ldots,J\). The theorem below provides the joint asymptotic distribution of the \(J\) univariate distribution function estimators, applying the functional delta method with the Hadamard derivative in the above display. Furthermore, we can derive the asymptotic distribution of the estimator of any distributional characteristic that can be obtained through Hadamard differentiable maps. **Theorem 2**.: _Suppose that Assumptions A1-A3 hold. Then,_ 1. _we have_ \[\sqrt{T}\left(\begin{array}{c}\widehat{F}_{Y_{1t}|X_{1t}}-F_{Y_{1t}|X_{1t}} \\ \vdots\\ \widehat{F}_{Y_{Jt}|X_{Jt}}-F_{Y_{Jt}|X_{Jt}}\end{array}\right)\rightsquigarrow \varphi^{\prime}_{\theta(\cdot)}(\mathbb{B})\quad\text{in}\quad\times_{j=1}^ {J}\ell^{\infty}(\mathcal{Y}_{j}\times\mathcal{X}_{j}),\] _where_ \(\mathbb{B}\) _is the tight mean-zero Gaussian process defined in Theorem_ 1_._ 2. _additionally, if a map_ \(\nu:\mathbb{S}_{\varphi}\mapsto\ell^{\infty}(\mathcal{Z}\times\mathcal{Y})\) _is Hadamard differentiable at_ \((F_{Y_{1t}|X_{1t}},\ldots,F_{Y_{Jt}|X_{Jt}})\) tangentially to \(\varphi^{\prime}_{\theta(\cdot)}(\mathbb{D})\) with the Hadamard derivative \(\nu^{\prime}_{F_{Y_{1t}|X_{1t}},\ldots,F_{Y_{Jt}|X_{Jt}}}\), then_ \[\sqrt{T}\big{\{}\nu(\widehat{F}_{Y_{1t}|X_{1t}},\ldots,\widehat{F}_{Y_{Jt}|X_{Jt }})-\nu(F_{Y_{1t}|X_{1t}},\ldots,F_{Y_{Jt}|X_{Jt}})\big{\}}\rightsquigarrow\nu^{ \prime}_{F_{Y_{1t}|X_{1t}},\ldots,F_{Y_{Jt}|X_{Jt}}}\circ\varphi^{\prime}_{\theta (\cdot)}(\mathbb{B}),\] _in \(\ell^{\infty}(\mathcal{Z}\times\mathcal{Y})\)._ As the composition of Hadamard differentiable transformations remains Hadamard differentiable (Lemma 3.9.3, van der Vaart and Wellner, 1996), Theorem 2 can be applied to all these distributional characteristics of interest. As explained in Section 3.1, the conditional distributions \(F_{Y_{t}|Z_{t}}\), \(F_{Y_{t+h}|Y_{t},Z_{t}}\) and \(G_{Y_{t}|Z_{t}}\) can be obtained via Hadamard differentiable transformations of several univariate conditional distributions under Assumption A4. 
Furthermore, the DIRFs, \(DIR_{h}\) and \(DIR_{j,h}\), are Hadamard differentiable transformations of these conditional distributions. Theorem 2(b) can be applied to perform statistical inference on the estimators of the conditional joint distribution and the DIRFs. ## 4 Macroeconomic and Financial Dependence We apply the proposed approach to examine the time series data of macroeconomic and financial conditions in the U.S. The real GDP growth and the NFCI are used as indicators to measure the state of the economy and financial sector. A growing body of research has attempted to investigate the macro-financial interactions during recessions via VAR models of the GDP growth and NFCI (Carriero et al., 2020; Clark et al., 2021; Gertler and Gilchrist, 2018). The present empirical study extends this direction to conduct a more comprehensive distributional analysis. We address two main questions in this section. First, does the joint distribution of the GDP growth and NFCI conditional on past lagged information change during financial stress? Secondly, how does their joint distribution respond to a distributional shock in macro and financial conditions? ### Data and Modeling Specification The real GDP growth is computed using quarterly real GDP data from the Bureau of Economic Analysis3. The NFCI is a weighted average of 105 measures of national financial activity, each expressed relative to their sample averages and scaled by their sample standard deviations, which are released weekly by the Federal Reserve Bank of Chicago4. A positive NFCI suggests that the financial conditions are tighter than average. We use data from these two indices for the period 1973:Q1 to 2019:Q1, for analysis. These time series are released at different frequencies. Following Adrian et al. (2021), we convert the NFCI data into quarterly observations by averaging each quarter's weekly observations. We consider a bivariate model for the outcome \(Y_{t}=(Y_{1t},Y_{2t})^{\top}\), with \(Y_{1t}\) representing the quarterly NFCI and \(Y_{2t}\) representing the real GDP growth. Footnote 3: The data is downloaded from FRED [https://fred.stlouisfed.org/series/A191RL1Q225SBEA](https://fred.stlouisfed.org/series/A191RL1Q225SBEA) Footnote 4: More details about NFCI are available at [https://www.chicagofed.org/publications/nfci/index](https://www.chicagofed.org/publications/nfci/index) We apply the proposed multivariate DR approach to estimate horizon-specific multiperiod forecasting distributions for each variable. Specifically, we consider two-lag information \(Z_{t}=(Y_{t-1}^{\top},Y_{t-2}^{\top})^{\top}\) to estimate the joint conditional distribution \(F_{Y_{t}|Z_{t}}\) and three-lag information to develop the \(h\)-ahead forecasting distribution \(F_{Y_{t+h}|Y_{t},Z_{t}}\) for different \(h\). DIRFs can then be estimated based on these conditional distributions, which enables the design of different counterfactual scenarios to investigate the possible policy effect on the entire distributions of both the NFCI and real GDP growth over time. ### Multiperiod Ahead Conditional Distribution Forecast Focusing on the one-quarter and one-year ahead horizons, we present the out-of-sample performance of the multivariate DR approach in estimating the multiperiod ahead forecasting distributions of the NFCI and real GDP growth. 
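As a practical note on the data construction just described, the weekly NFCI can be averaged within quarters and aligned with quarterly GDP growth, and the two-lag conditioning set can be formed by shifting. The snippet below is only a hedged sketch: the file names are placeholders (the underlying series come from FRED and the Chicago Fed, as noted in the footnotes above), and pandas is assumed.

```python
import pandas as pd

# Placeholder file names for the FRED real GDP growth and Chicago Fed weekly NFCI.
gdp = pd.read_csv("gdp_growth_quarterly.csv", index_col=0, parse_dates=True)["growth"]
nfci_weekly = pd.read_csv("nfci_weekly.csv", index_col=0, parse_dates=True)["nfci"]

# Convert the weekly NFCI to quarterly observations by averaging within each
# quarter, as in the paper (following Adrian et al., 2021).
nfci = nfci_weekly.resample("QS").mean()

df = pd.concat({"nfci": nfci, "gdp": gdp}, axis=1).dropna()

# Two-lag conditioning set Z_t = (Y_{t-1}, Y_{t-2}) for the joint model of Y_t.
lags = {f"{c}_lag{k}": df[c].shift(k) for c in df.columns for k in (1, 2)}
data = pd.concat([df, pd.DataFrame(lags)], axis=1).dropna()
```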
First, using the expanding window beginning with the estimation of the sample ranging from 1973:Q1 to 1982:Q3, we evaluate the out-of-sample performance of the distribution forecasts by analyzing the probability integral transform (PIT), which reflects the percentage of observations below any given quantile. In a perfectly calibrated model, the fraction of realizations below any given quantile of the predictive distribution exactly equals the quantile probability, thus the cumulative distribution of the PITs is a 45-degree line. The closer the empirical cumulative distribution of the PITs is to the 45-degree line, the better the model is calibrated. For different forecast horizons, the empirical distribution of PITs together with 95% confidence bands of the Rossi and Sekhposyan (2019) PITs test5 for the predicted marginals of the real GDP growth and NFCI are shown in Figure 1. This illustrates that the empirical distributions of the PITs by the proposed approach are all well within the confidence intervals.

Figure 1: Empirical CDF of the Out-of-sample PITs

Additionally, we examine the estimated one-quarter and one-year ahead marginal distributions of the real GDP growth and NFCI using the expanding window (out-of-sample). Based on the predicted distributions, the 5th to 95th and 25th to 75th percentile intervals, the median, along with the data realizations are plotted in Figure 2. The distribution evolution of the real GDP growth shows that the median and lower tail (downside risk) exhibit significant time-series variation compared to the upper tail (upside risk). Comparing the predicted quantiles to the realizations reveals that the possibilities of adverse GDP growth and tight financial conditions can be detected by the predicted distributions in real time. Based on the realizations of the real GDP growth and the NFCI, we find that when the NFCI is relatively loose, the economy evolves as usual. Simultaneously, extreme tightening of the NFCI coincides with extremely adverse GDP growth.

Figure 2: Out-of-sample Predicted Distributions

In Figure 3, we plot the in-sample conditional correlation coefficients between the real GDP growth and NFCI. The correlation coefficient fluctuates around 0 during normal times but becomes significantly negative during recession periods. This suggests a nonlinear relationship between financial conditions and real activity, with their conditional joint distribution behaving very differently during normal times and recessions.

Figure 3: In-sample Conditional Correlation between NFCI and real GDP Growth

### Multimodality in Macro-Financial Dynamics During the Great Recession

We further study how the joint distribution of the real GDP growth and NFCI evolved during the Great Recession. The joint distribution dynamics of the out-of-sample forecasting distributions, with different columns corresponding to forecast horizons from one (leftmost column, \(h=1\)) to four (rightmost column, \(h=4\)) quarters and rows corresponding to different conditioning information from 2008:Q1-Q3 (top row) to 2009:Q1-Q3 (bottom row), are illustrated in Figure 4. The joint distributions of the real GDP growth and NFCI predicted using the data up to 2008:Q1-Q3, displayed in the top row of Figure 4, are characterized by a single mode for all forecasting horizons. As the forecasting horizon increases, there is an increased likelihood of higher growth and looser financial conditions.
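A note on reproducing contour plots like those in Figure 4: once the joint conditional CDF has been estimated on a rectangular grid, cell probabilities can be recovered by second-order differencing and passed to a contour routine. This is a minimal illustration rather than the authors' plotting code; the names are hypothetical and matplotlib is assumed for the final step.

```python
import numpy as np

def cell_probabilities(joint_cdf_grid):
    """Given F(y1_i, y2_j) on a rectangular grid (2-D array, both axes increasing),
    recover the probability of each grid cell by inclusion-exclusion:
    p_ij = F(i,j) - F(i-1,j) - F(i,j-1) + F(i-1,j-1)."""
    F = np.asarray(joint_cdf_grid, dtype=float)
    Fpad = np.pad(F, ((1, 0), (1, 0)), mode="constant")   # F = 0 below the grid
    p = Fpad[1:, 1:] - Fpad[:-1, 1:] - Fpad[1:, :-1] + Fpad[:-1, :-1]
    return np.clip(p, 0.0, None)                           # guard against tiny negatives

# contour plot sketch:
# import matplotlib.pyplot as plt
# plt.contour(nfci_grid, gdp_grid, cell_probabilities(F_grid).T)
```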
However, with the inclusion of information from 2008:Q4, the predicted distributions exhibit multimodality. As depicted in the second row of Figure 4, the one-quarter-ahead predicted distribution displays two distinct modes, both centered around low GDP growth of approximately \(-2\) and tight financial conditions (one around 1 and the other around 2). As the forecasting horizon extends, the predicted distribution gradually shifts its weight towards higher GDP growth and looser financial conditions. Finally, the one-year-ahead distribution resolves into a unimodal distribution centered near 2 for real GDP growth and average financial conditions (the NFCI is approximately 0). As additional information from 2009:Q1 and 2009:Q2 becomes available, the predicted distributions in the third and fourth rows evolve similarly to those shown in the second row. However, given the more recent information, the multimodality resolves more quickly. Notably, the predicted distributions conditional on information as of 2009:Q1-Q3 are approximately unimodal for all horizons, as shown in the last row of Figure 4. It is noteworthy that the multimodality in the distributions primarily stems from the shift in the distribution of the NFCI.

Therefore, the distributional behavior depicted in Figure 4 implies that during normal periods, the joint distribution of the real GDP growth and NFCI is characterized by a single mode. During periods of tight financial conditions, however, there is a marked change in the distribution's shape, with the emergence of multiple modes. When financial conditions are less severe, the multimodality is observed only in short-term forecasts and is usually resolved within a couple of quarters. Additionally, the plots indicate that as the forecasting horizon extends, both variables become increasingly volatile, with the real GDP growth exhibiting a greater degree of uncertainty.

Figure 4: Contour Plots of the Joint Distribution during the Great Recession

### Counterfactual Analysis During the Great Recession

Assuming that the information available about the economic and financial conditions is up to 2008:Q3, based on the out-of-sample forecasting distributions, we use the DIRF to explore the potential policy effects during the Great Recession. Specifically, we investigate the impact of the policy intervention aimed at limiting the possibility of tightening financial conditions or worsening GDP growth during 2008:Q4 on the predicted distributions in the following quarters.

#### 4.4.1 Counterfactual Analysis of Distributional Impulse on the NFCI

We first explore the effect of the policy intervention in 2008:Q3, which could limit the possibility of tightening financial conditions during 2008:Q4. A counterfactual distribution, truncated normal distribution with mean of 0 and standard deviation of 0.2 on \((-1.5,2)\), is considered for the NFCI in 2008:Q4. In Figure 5, we provide the initial one-step-ahead (baseline) joint and marginal distributions in 2008:Q4, together with their counterfactual counterparts under the distributional impulse on \(Y_{1t}\). Additionally, their differences in different quantiles over the distribution and moments, including the mean, standard deviation, skewness and kurtosis, are presented in Table 1, with \(h=0\).

Figure 5: Distributional Impulse on NFCI in 2008:Q4

The distributional impulse significantly reduces the 95% quantiles, skewness, and kurtosis of the NFCI, thereby also lowering other quantiles and moments.
Simultaneously, it increases the 95% quantiles of GDP by approximately 1 and kurtosis by approximately 0.65; however, it has a minor influence on other quantiles and moments. The results suggest a minor contemporaneous effect of the NFCI shock on the GDP. With the distributional impulse on the NFCI in 2008:Q4, we study the DIRFs for the following year, which ranges from 2009:Q1 to 2009:Q4 corresponding to the horizons \(h=1,2,3\) and 4 quarters, respectively. Figure 6 presents a complete picture of how the entire joint and marginal distributions of the real GDP growth and NFCI respond to the impulse on the NFCI in the following year. Table 1 presents the results for the quantile and moment IRFs. From Figure 6, in the following two quarters, the right tail of the NFCI is greatly reduced, the left tail of the GDP becomes much thinner, and the left tail of the NFCI and right tail of the GDP become slightly fatter. Such an effect continues but decays at more distant horizons. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Variables} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{NFCI} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{GDP} & \\ \cline{3-14} \multicolumn{2}{c}{} & \(h\) & 0 & 1 & 2 & 3 & 4 & 0 & 1 & 2 & 3 & 4 \\ \hline \multirow{9}{*}{Quantiles} & 0.05 & Base & 0.01 & -0.36 & -0.36 & -0.41 & -0.49 & -1.77 & -3.03 & -3.03 & -2.10 & -3.03 \\ & & Diff & -0.34 & -0.09 & -0.09 & -0.08 & -0.16 & 0.00 & 0.93 & 1.26 & 0.80 & 0.93 \\ & & Base & 0.15 & -0.09 & -0.11 & -0.23 & -0.37 & 0.68 & 0.40 & 0.34 & 1.20 & 1.36 \\ & & Diff & -0.28 & -0.29 & -0.25 & -0.16 & -0.05 & 0.02 & 0.96 & 1.11 & 0.80 & 0.09 \\ & & Base & 0.55 & 0.18 & 0.17 & -0.09 & -0.14 & 2.03 & 1.96 & 1.96 & 2.90 & 3.00 \\ & & Diff & -0.55 & -0.45 & -0.48 & -0.23 & -0.22 & 0.00 & 0.94 & 0.94 & 0.20 & 0.10 \\ & & Base & 1.00 & 0.68 & 0.62 & 0.56 & 0.40 & 3.20 & 3.60 & 3.60 & 4.67 & 4.94 \\ & & Diff & -0.87 & -0.57 & -0.51 & -0.69 & -0.42 & 0.00 & 0.60 & 0.60 & 0.17 & 0.17 \\ & & Base & 2.40 & 2.72 & 2.55 & 2.47 & 2.72 & 6.26 & 7.06 & 7.44 & 8.10 & 8.10 \\ & & Diff & -2.08 & -1.63 & -1.68 & -0.06 & -0.32 & -0.88 & 0.38 & 0.00 & 0.00 & 0.00 \\ \hline \multirow{9}{*}{Moments} & \multirow{3}{*}{Mean} & Base & 0.71 & 0.52 & 0.46 & 0.35 & 0.27 & 1.86 & 1.85 & 1.84 & 2.81 & 2.91 \\ & & Diff & -0.71 & -0.53 & -0.50 & -0.34 & -0.28 & 0.05 & 0.88 & 0.92 & 0.54 & 0.17 \\ \cline{1-1} & & Base & 0.77 & 0.89 & 0.88 & 0.94 & 1.00 & 2.50 & 2.90 & 3.00 & 3.12 & 3.25 \\ \cline{1-1} & & Diff & -0.56 & -0.22 & -0.23 & -0.10 & -0.14 & -0.23 & -0.16 & -0.30 & -0.43 & -0.18 \\ \cline{1-1} & & Base & 1.53 & 1.55 & 1.70 & 1.66 & 1.70 & -0.20 & -0.06 & -0.03 & -0.41 & -0.59 \\ \cline{1-1} & & Diff & -1.52 & 1.54 & 1.00 & 0.71 & -0.02 & -0.22 & -0.22 & 0.16 & 0.10 \\ \cline{1-1} & & Base & 4.83 & 4.70 & 5.14 & 4.66 & 4.72 & 4.90 & 3.07 & 3.16 & 3.90 & 3.92 \\ \cline{1-1} & & Diff & -1.79 & 8.49 & 9.43 & 4.40 & 3.44 & 0.65 & 0.53 & 0.72 & 0.59 & 0.03 \\ \hline \end{tabular} _Notes_: This table presents different quantiles and moments for the baseline distributions \(F_{Y_{j,t+k}|Z_{t}}\) (Base) and the quantile and moments differences from the counterfactual distributions \(F_{Y_{j,t+k}|Z_{t}}^{*}\) to the baseline distributions (Diff) of NFCI and real GDP growth from 2008:Q4 (\(h=0\)) to 2009:Q4 (\(h=4\)). 
\end{table}

Table 1: QIR and MIR to NFCI Impulse

Figure 6: Distributional Response to the NFCI Impulse. Distributions of \(Y_{t+h}\) given \(Z_{t}\) for \(t\)=2008:Q4, \(h=1\)

This observation is further confirmed by investigating quantile IRFs. All quantiles of the NFCI decrease; however, the differences lessen over time. For the GDP, the impulse effect on the 95% quantile is negligible from 2009:Q2. All other quantiles, specifically the 5%, 25% and 75% quantiles, increase significantly in the following two quarters. The difference becomes smaller at further horizons but remains significant. Moreover, the exploration of the moment IRFs reveals that the mean of the NFCI at all horizons decreases to approximately 0, driven by the significant impulse effect on the upper tail. A more substantial impact on the lower tail for GDP increases its mean to 2\(\sim\)3 for all horizons. The impulse significantly changes the skewness and kurtosis of the NFCI, and the results for the other moments also demonstrate significant long-run effects for both variables. Finally, we conclude that if the policies in 2008:Q3 had been able to limit the possibility of financial tightening in 2008:Q4, the likelihood of adverse GDP growth (left tail) and tight financial conditions (right tail) would have been largely eliminated in 2009:Q1-Q2 and reduced in 2009:Q3-Q4.

#### 4.4.2 Counterfactual Analysis of Distributional Impulse on GDP

We explore the effect of the policy in 2008:Q3 that could have limited the possibility of low economic activity during 2008:Q4. Specifically, as shown in Figure 7, we maintain the conditional distribution of the NFCI in 2008:Q4 as it is, and consider a counterfactual distribution for the GDP: a gamma distribution with scale parameter 6 and shape parameter 0.6, truncated to \((0,11)\).

Figure 7: Distributional Impulse to real GDP growth in 2008:Q4

Figure 8 shows the joint and marginal distributions given \(Z_{t}\) and their counterfactual counterparts; their differences in quantiles and moments are presented in Table 2, with \(h=0\). The distributional impulse on GDP increases the 5% quantile significantly from -2.1 to 1.5, and the other quantiles increase by approximately 1\(\sim\)2. This impulse also significantly changes different moments of the GDP. Using the distributional impulse on real GDP growth in 2008:Q4, we study the DIRFs for the following year from 2009:Q1 to 2009:Q4, which corresponds to the horizons \(h=1,2,3\), and 4 quarters, respectively. The dynamic effect of this impulse on the entire distributions of the NFCI and real GDP growth is illustrated in Figure 8, where the baseline distributions and the counterfactual distributions are plotted in blue and red, respectively. A slight difference between the baseline and counterfactual distributions for the NFCI and real GDP growth is observed for all horizons.

Figure 8: Distributional Response to the GDP Impulse. Distributions of \(Y_{t+h}\) given \(Z_{t}\) for \(t\)=2008:Q4, \(h=1\)

We further explore the quantile and moment IRFs presented in Table 2 to draw a more concrete comparison. First, the quantile and moment IRFs of the NFCI at one- to four-quarters ahead are all close to 0. Regarding real GDP growth, the one- and two-quarters-ahead counterfactual distributions have a slightly fatter right tail and a larger mean than the baseline distributions. However, the counterfactual and baseline distributions are almost identical for more distant horizons.
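The two counterfactual marginals used above can be constructed directly, and the quantile and moment comparisons reported in Tables 1 and 2 follow from the baseline and counterfactual CDFs. The sketch below assumes scipy and numpy and uses the parameter values quoted in the text (a normal with mean 0 and standard deviation 0.2 truncated to (-1.5, 2) for the NFCI; a gamma with shape 0.6 and scale 6 truncated to (0, 11) for GDP); it is illustrative only and not the authors' code.

```python
import numpy as np
from scipy import stats

# Counterfactual NFCI marginal: N(0, 0.2^2) truncated to (-1.5, 2).
lo, hi, mu, sd = -1.5, 2.0, 0.0, 0.2
G_nfci = stats.truncnorm((lo - mu) / sd, (hi - mu) / sd, loc=mu, scale=sd)

# Counterfactual GDP marginal: gamma(shape=0.6, scale=6) truncated to (0, 11),
# obtained by renormalizing the gamma CDF on the truncation interval.
gam = stats.gamma(a=0.6, scale=6.0)
c0, c1 = gam.cdf(0.0), gam.cdf(11.0)

def G_gdp_cdf(y):
    return (gam.cdf(np.clip(y, 0.0, 11.0)) - c0) / (c1 - c0)

def quantile_from_cdf(y_grid, cdf_vals, tau):
    """Generalized inverse on a grid: smallest y with F(y) >= tau."""
    idx = np.searchsorted(cdf_vals, tau, side="left")
    return float(y_grid[min(idx, len(y_grid) - 1)])

def mean_from_cdf(y_grid, cdf_vals):
    """Mean implied by a CDF on a grid, via the cell probability masses."""
    pmf = np.diff(np.concatenate(([0.0], cdf_vals)))
    return float(np.sum(y_grid * pmf))

# Quantile and mean impulse responses at horizon h are then differences such as
# quantile_from_cdf(grid, cf_cdf, 0.05) - quantile_from_cdf(grid, base_cdf, 0.05).
```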
Thus, in the absence of a corresponding improvement in the distribution of NFCI, limiting the likelihood of negative real GDP growth in 2008:Q4 only increases the likelihood of positive economic activity in the short run and does not significantly affect the NFCI, even in the subsequent quarter.

We provide more exploration of this empirical application in Appendix B. Section B.1 introduces the simulation of samples of the outcome variables from the multiperiod forecasting distributions. The same model specification but with an alternative ordering is considered in Section B.2, where \(Y_{1t}\) represents the real GDP growth and \(Y_{2t}\) represents the quarterly NFCI. The results suggest that the ordering does not significantly affect the estimation of the multiperiod forecasting distributions. We compare the proposed approach with the kernel regression method introduced by Adrian et al. (2021) in Section B.3. The results indicate that the performance of the kernel regression approach is sensitive to bandwidths, whereas the proposed DR approach demonstrates superior performance in eliciting this conditional distribution, specifically for the NFCI distribution. In Section B.4, we explore the use of the monthly NFCI in the system and a four-dimensional mixed-frequency model conditional on two lags by treating the three monthly NFCI series as separate observations within a quarter. Based on this model, we revisit the distribution forecasting and counterfactual analysis. For each scenario, 95% confidence bands for the entire distribution as well as density impulse responses are estimated using the moving block bootstrap approach, the results of which are presented in Section B.5.

## 5 Conclusion

This study develops a flexible semiparametric approach for characterizing the conditional joint distribution of multivariate time series. The resulting DIRF provides a more comprehensive picture of the dynamic heterogeneity. The asymptotic properties of the conditional distribution estimators and their transformations are also derived. Based on an analysis of the real GDP growth and NFCI in the U.S., the empirical results confirm some existing findings in the literature: First, tight financial conditions create multimodality in the conditional joint distribution. Second, restricting the upper tail of financial conditions has a noticeable impact on long-term GDP growth. However, with the inclusion of additional lag information, the results of the proposed model on the effect of restricting the lower tail of the GDP during the global financial crisis suggest a negligible impact on the financial conditions.
2302.02014
Multi-Task Learning for Screen Content Image Coding
With the rise of remote work and collaboration, compression of screen content images (SCI) is becoming increasingly important. While there are efficient codecs for natural images, as well as codecs for purely-synthetic images, those SCIs that contain both synthetic and natural content pose a particular challenge. In this paper, we propose a learning-based image coding model developed for such SCIs. By training an encoder to provide a latent representation suitable for two tasks -- input reconstruction and synthetic/natural region segmentation -- we create an effective SCI image codec whose strong performance is verified through experiments. Once trained, the second task (segmentation) need not be used; the codec still benefits from the segmentation-friendly latent representation.
Rashid Zamanshoar Heris, Ivan V. Bajić
2023-02-03T21:43:08Z
http://arxiv.org/abs/2302.02014v1
# Multi-Task Learning for Screen Content Image Coding ###### Abstract With the rise of remote work and collaboration, compression of screen content images (SCI) is becoming increasingly important. While there are efficient codecs for natural images, as well as codecs for purely-synthetic images, those SCIs that contain both synthetic and natural content pose a particular challenge. In this paper, we propose a learning-based image coding model developed for such SCIs. By training an encoder to provide a latent representation suitable for two tasks - input reconstruction and synthetic/natural region segmentation - we create an effective SCI image codec whose strong performance is verified through experiments. Once trained, the second task (segmentation) need not be used; the codec still benefits from the segmentation-friendly latent representation. Image compression, screen content image, learning-based compression, image segmentation ## I Introduction Traditional image compression standards such as JPEG [1] and JPEG2000 [2] were developed mostly with natural images in mind. However, with the rise of remote work and collaboration, transmission of screen content images (SCI) has become important. Recognizing this, a screen content coding extension [3] was developed for High Efficiency Video Coding (HEVC). More recently, low-level coding techniques tailored to screen content were introduced into the Versatile Video Coding (VVC) standard [4, 5, 6]. Due to the unique signal characteristics of SCIs, such as sharp edges, repetitive patterns, and the absence of noise, many encoding techniques that are effective for natural images are either ineffective or less efficient for SCIs [7]. As an example, Fig. 1 shows one natural and one screen content image along with their luminance histograms. It is easy to see that the histograms are very different. Also the effect of coding artifacts in the two types of images can be different [8]. For instance, quality loss near the edges is a common side effect of compressing natural images; yet, if done right, it may be imperceptible. However, in synthetic images, blurring of sharp edges (e.g., in SCIs showing text) can be quite noticeable and annoying, and may affect the ability of the viewer to understand the text. In the meantime, steady progress has been made in learning-based image compression [11, 12, 13, 14, 15, 16, 17], however, the focus has been mostly on natural images. When used on SCIs, coding models trained on natural images tend to be less effective, as will be seen in our results. Therefore, in this paper, we develop a learning-based coding model targeted at SCI. Note that SCIs can contain both synthetic and natural content. In fact, one of the key challenges in SCI compression is to adapt coding to the nature of the content (synthetic vs. natural), because these have different statistics and therefore may require different coding tools. We leverage the fact that a learning-based model can be trained to distinguish natural from synthetic content, as well as perform compression, in order to boost the coding performance on SCI. The paper is structured as follows. In Section II, we provide a brief overview of SCI compression and learning-based image compression, and outline our contribution. The proposed SCI coding model is presented in Section III. Experiments are described in Section IV followed by conclusions in Section V. 
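As a rough illustration of the two-task idea outlined above (a shared encoder whose latent representation serves both input reconstruction and synthetic/natural region segmentation), the following PyTorch-style sketch shows one way such a model could be wired. It is a minimal stand-in, not the architecture, layer configuration, or loss weighting used in this paper, and the entropy/rate model needed for actual compression is omitted.

```python
# Minimal multi-task sketch (illustrative only, not the paper's architecture):
# a shared convolutional encoder feeds both an image-reconstruction decoder and
# a synthetic/natural segmentation head, so the latent is shaped by both tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSCICodec(nn.Module):
    def __init__(self, latent_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 5, stride=2, padding=2),
        )
        self.recon_decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1),
        )
        # 2-class map (synthetic vs. natural regions), here at latent resolution.
        self.seg_head = nn.Conv2d(latent_channels, 2, 3, padding=1)

    def forward(self, x):
        y = self.encoder(x)            # latent representation to be compressed
        x_hat = self.recon_decoder(y)  # task 1: reconstruction
        seg = self.seg_head(y)         # task 2: segmentation (training only)
        return x_hat, seg

model = MultiTaskSCICodec()
x = torch.rand(1, 3, 256, 256)
x_hat, seg = model(x)
recon_loss = F.mse_loss(x_hat, x)
seg_loss = F.cross_entropy(seg, torch.zeros(1, 64, 64, dtype=torch.long))  # dummy labels
loss = recon_loss + 0.1 * seg_loss     # weighted multi-task objective (weight is arbitrary here)
```

Once trained, only the encoder, the (omitted) entropy model, and the reconstruction decoder would be needed at deployment, which matches the observation that the segmentation task can be dropped after training while the codec still benefits from the segmentation-friendly latent representation.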
## II Related work and our contribution ### _Traditional screen content image compression_ The High Efficiency Video Coding (HEVC) standard contains specialized tools for screen content compression, which have been wrapped into its screen content coding extension (HEVC-SCC) [3, 4]. These include palette mode [18], sample-based prediction [19], and DPCM-based edge prediction [20]. Line-by-line template matching, which is more flexible than HEVC's native intra-block-copy mode, has been proposed in [21]. In [22], a layer-based method is presented for extracting the synthetic data from the screen content image. While the remaining natural video is compressed using HEVC-SCC, detected text and graphical structures are encoded using a customized compression strategy. In [23], a pre-processing step is suggested to alter parts of the original image data based on the utility of the related image region in order to decrease

Fig. 1: Top: a natural image from the CLIC dataset [9] and its luma histogram. Bottom: a SCI from the SCID dataset [10] and its luma histogram.
2304.11480
Initial state radiation in $e^+e^-$ annihilation
In two earlier papers it was demonstrated that Lorentzian and Galilean symmetries could both be useful in the analysis of the annihilation reaction $e^+ e^- \to \gamma \Lambda(\rightarrow p\pi^-) \bar{\Lambda}(\rightarrow \bar{p}\pi^+)$. It was also demonstrated that any pair of hyperon form factors would be acceptable, but that the $\{G_E, G_M\}$ pair would probably be the simplest one to handle.
Göran Fäldt
2023-04-22T20:52:55Z
http://arxiv.org/abs/2304.11480v1
# Initial state radiation in \(e^{+}e^{-}\) annihilation ###### Abstract In two earlier papers it was demonstrated that Lorentzian and Galilean symmetries could both be useful in the analysis of the annihilation reaction \(e^{+}e^{-}\to\gamma\Lambda(\to p\pi^{-})\bar{\Lambda}(\to\bar{p}\pi^{+})\). It was also demonstrated that any pair of hyperon form factors would be acceptable, but that the \(\{G_{E},G_{M}\}\) pair would probably be the simplest one to handle. ## I Introduction The BABAR Collaboration [1] has measured initial-state radiation in the annihilation channel, \(e^{+}e^{-}\to\gamma\Lambda(\to p\pi^{-})\bar{\Lambda}(\to\bar{p}\pi^{+})\). Theoretical analyses of the same reaction are presented in ref.[2], for the \(\Lambda\bar{\Lambda}\gamma\) final state with single hyperon polarization, and in ref.[3], for the \(\Lambda\bar{\Lambda}\gamma\) final state with double hyperon polarizations. A Lorentz-covariant description of the cross-section-distribution functions, including those representing the hyperon decays, but with sums over polarizations, is presented in ref.[4]. In the present investigation we work in the covariant Galilean formalism [5], and choose as form factors the \(\{G_{E},G_{M}\}\) pair. Our aim is to demonstrate how, in this formalism, the phase-space density can be treated. ## II Cross Section The cross-section distribution for the initial-state-radiation reaction under consideration is factorized as in refs.[4; 5], \[\mathrm{d}\sigma = \frac{1}{2\sqrt{\lambda(s,m_{e}^{2},m_{e}^{2})}}\,\mathcal{K}\,\overline{|\mathcal{M}_{red}|^{2}} \tag{1}\] \[\times \mathrm{d}\mathrm{Lips}(k_{1}+k_{2};q,l_{1},l_{2},q_{1},q_{2}),\] where the average over the squared matrix element indicates summation over final-state nucleon spins and average over initial-state lepton spins, and with the \(\mathcal{K}\) factor \[\mathcal{K}=\frac{(4\pi\alpha)^{3}}{(P^{2})^{2}} \cdot \frac{1}{(s_{1}-M_{\Lambda}^{2})^{2}+M_{\Lambda}^{2}\Gamma_{\Lambda}^{2}} \tag{2}\] \[\cdot \frac{1}{(s_{2}-M_{\Lambda}^{2})^{2}+M_{\Lambda}^{2}\Gamma_{\Lambda}^{2}}.\] Notations for variables and parameters are all explained in FIG.1 of refs.[4; 5]. In particular, \(s_{1}=(l_{1}+q_{1})^{2}\) and \(s_{2}=(l_{2}+q_{2})^{2}.\) A consequence of the factorization of the cross section is the factorization of the squared matrix element, \[\overline{|\mathcal{M}|^{2}}=\mathcal{K}\overline{|\mathcal{M}_{red}|^{2}}, \tag{3}\] with \(\mathcal{K}\) of eq.(2). ## III Reduced matrix element The cross-section distribution, or rather the covariant square of the annihilation matrix element \(\overline{|\mathcal{M}_{red}|^{2}},\) is obtained by contracting hadronic \(H_{\mu\nu}\) and leptonic \(L^{\mu\nu}\) tensors, so that \[\overline{|\mathcal{M}_{red}|^{2}}=L^{\mu\nu}H_{\mu\nu}. \tag{4}\] The right-hand-side of this equation can be rewritten as a sum of four terms, \[\overline{|\mathcal{M}_{red}|^{2}} = \bar{R}_{\Lambda}R_{\Lambda}M^{RR}+\bar{R}_{\Lambda}S_{\Lambda}M^{RS} \tag{5}\] \[+ \bar{S}_{\Lambda}R_{\Lambda}M^{SR}+\bar{S}_{\Lambda}S_{\Lambda}M^{SS},\] with coefficients \(R_{\Lambda},S_{\Lambda}\) and \(R_{\bar{\Lambda}},S_{\bar{\Lambda}}\) that refer to the \(\Lambda\) and \(\bar{\Lambda}\) decay constants of ref.[4], and with \(R\) the spin-independent and \(S\) the spin-dependent ones.
From the structure of the lepton tensor, eq.(24) of ref.[4], it follows that each of the \(M^{XY}\) functions of eq.(5) has two parts, one \(A\)-part and one \(B\)- part, \[M^{XY}=-a_{y}A^{XY}(G_{M},G_{E})-b_{y}B^{XY}(G_{M},G_{E}), \tag{6}\] where the \(A^{XY}\) factor is obtained by contracting the hadron tensor with the symmetric tensor \(k_{1\mu}k_{1\nu}+k_{2\mu}k_{2\nu},\) and the \(B^{XY}\) factor by contraction with the tensor \(g_{\mu\nu}.\) For details see ref.[4]. The weight factors \(a_{y}\) and \(b_{y}\) are defined in the appendix. The arguments of the form factors can be chosen equal to \(P^{2}\). In particular, when \(P^{2}=4M^{2}\) then \(G_{M}=G_{E}\). The functions \(A^{XY}\) and \(B^{XY}\) are bilinear forms of \(G_{M}\) and \(G_{E}\), and expanded accordingly, in which case \[A^{XY}(G_{M},G_{E}) = |G_{M}|^{2}{\cal L}_{1}^{AXY}+|G_{E}|^{2}{\cal L}_{2}^{AXY} \tag{7}\] \[+ 2\Re(G_{M}G_{E}^{*}){\cal L}_{3}^{AXY}\] \[+ 2\Im(G_{M}G_{E}^{*}){\cal L}_{4}^{AXY},\] and similarly for \(B^{XY}\). We refer to the set of functions \(\{{\cal L}\}\) as co-factors. ## IV The co-factors The most remarkable fact about the new set is that several co-factors vanish; \[{\cal L}_{3}^{ARR}={\cal L}_{3}^{BRR}=0. \tag{8}\] Also, as we shall see, \({\cal L}_{3}^{BSS}=0\) but \({\cal L}_{3}^{ASS}\neq 0\). Our results are the following. Co-factors suffixed \(ARR\); \[{\cal L}_{1}^{ARR} = (2\epsilon\omega)^{2}\,\biggl{[}\Phi_{\perp}\biggr{]}, \tag{9}\] \[{\cal L}_{2}^{ARR} = (2\epsilon\omega)^{2}\,\biggl{[}\Phi_{f}\,\frac{1}{\gamma_{\Lambda }^{2}}\biggr{]},\] (10) \[{\cal L}_{3}^{ARR} = 0. \tag{11}\] Co-factors suffixed \(BRR\); \[{\cal L}_{1}^{BRR} = 8M_{\Lambda}^{2}\biggl{[}-2\gamma_{\Lambda}^{2}\biggr{]}, \tag{12}\] \[{\cal L}_{2}^{BRR} = 8M_{\Lambda}^{2}\biggl{[}1\biggr{]},\] (13) \[{\cal L}_{3}^{BRR} = 0. \tag{14}\] Co-factors suffixed \(ARS\) and \(ASR\); \[{\cal L}_{4}^{ARS} = 2M_{\Lambda}p_{g}(2\epsilon\omega)^{2}\frac{1}{\gamma_{\Lambda}} \biggl{[}\Phi_{g}\biggr{]}, \tag{15}\] \[{\cal L}_{4}^{ASR} = 2M_{\Lambda}p_{g}(2\epsilon\omega)^{2}\frac{1}{\gamma_{\Lambda}} \biggl{[}\Phi_{h}\biggr{]}. \tag{16}\] Co-factors suffixed \(ASS\) ; \[{\cal L}_{1}^{ASS} = (p_{\Lambda}p_{g}2\epsilon\omega)^{2}Z\biggl{[}\Phi_{\perp}(X_{a }-2X_{b}) \tag{17}\] \[+ 2(L_{0}+ZL_{M}/\gamma_{\Lambda})\biggr{]},\] \[{\cal L}_{2}^{ASS} = (p_{\Lambda}p_{g}2\epsilon\omega)^{2}Z\biggl{[}\Phi_{f}X_{a}\frac {1}{\gamma_{\Lambda}^{2}}\biggr{]},\] (18) \[{\cal L}_{3}^{ASS} = (p_{\Lambda}p_{g}2\epsilon\omega)^{2}Z\biggl{[}-Z\gamma_{\Lambda }L_{M}\biggr{]}. \tag{19}\] Co-factors suffixed \(BSS\); \[{\cal L}_{1}^{BSS} = 2(2p_{\Lambda}^{2}p_{g})^{2}Z^{2}\bigg{[}-2X_{b}\gamma_{\Lambda}^{2 }\bigg{]}, \tag{20}\] \[{\cal L}_{2}^{BSS} = 2(2p_{\Lambda}^{2}p_{g})^{2}Z^{2}\bigg{[}X_{a}\bigg{]},\] (21) \[{\cal L}_{3}^{BSS} = 0. \tag{22}\] The \(X\), \(L\), and, \(\Phi\) functions are essential in building the co-factors. They are defined in sect. VI. Important parameters are; the common lepton energies \(\epsilon\) in the overall c.m.; the common hyperon energies \(E_{\Lambda}\) in the hyperon c.m.; and \(\omega\) the energy of the ISR photon. Three important relations between parameters are, \[\epsilon\omega = \epsilon^{2}-E_{\Lambda}^{2}, \tag{23}\] \[Z = \frac{4M_{\Lambda}^{2}}{Q^{2}}=1-\frac{1}{v_{\Lambda}^{2}},\] (24) \[\gamma_{\Lambda} = E_{\Lambda}/M_{\Lambda}, \tag{25}\] with \(Q^{2}=(p_{1}-p_{2})^{2}\), and \(v_{\Lambda}=p_{\Lambda}/E_{\Lambda}\) the hyperon velocity in the \(\Lambda\bar{\Lambda}\) rest frame. 
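For orientation, relation (23) follows directly from energy conservation: with \(\sqrt{s}=2\epsilon\) for the initial lepton pair and \(\sqrt{s_{c}}=2E_{\Lambda}\) for the hyperon pair recoiling against a photon of energy \(\omega\) (cf. eq.(44) below), one has \[s_{c}=s-2\omega\sqrt{s}\;\Longrightarrow\;4E_{\Lambda}^{2}=4\epsilon^{2}-4\epsilon\omega\;\Longrightarrow\;\epsilon\omega=\epsilon^{2}-E_{\Lambda}^{2}.\] Similarly, relation (24) can be checked by evaluating \(Q^{2}=(p_{1}-p_{2})^{2}=-4p_{\Lambda}^{2}\) in the \(\Lambda\bar{\Lambda}\) rest frame, so that \(4M_{\Lambda}^{2}/Q^{2}=-M_{\Lambda}^{2}/p_{\Lambda}^{2}=1-1/v_{\Lambda}^{2}\).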
## V Angular variables The co-factors of sect. 4 are expressed in terms of functions \(X\), \(L\), and \(\Phi\), which themselves are scalar functions of a set of unit vectors that we now proceed to define. \(\hat{\bf k}\) is a unit vector in the direction of motion of the initial-state lepton in the overall cms; \({\bf n}=\hat{\bf q}\) is a unit vector in the direction of motion of the photon in the overall cms; \({\bf g}({\bf h})\) is a unit vector in the direction of motion of the proton (antiproton) in the rest system of the \(\Lambda(\bar{\Lambda})\) hyperon; \({\bf f}(-{\bf f})\) is a unit vector in the direction of motion of the \(\Lambda(\bar{\Lambda})\) in the rest system of the hyperon pair; and finally \({\bf N}\) is a vector defined by, \[{\bf N}=\frac{1}{v\gamma}\bigg{[}\hat{\bf k}+(\gamma-1)({\bf n}\cdot\hat{\bf k }){\bf n}\bigg{]}, \tag{26}\] given the velocity \(v\) and the function \(\gamma(v)\) \[v = \frac{\omega}{\sqrt{\omega^{2}+W^{2}}}, \tag{27}\] \[\gamma(v) = \frac{\sqrt{\omega^{2}+W^{2}}}{W}. \tag{28}\] Angular functions The \(\Phi\) functions are by definition functions of \({\bf n}\) and \({\bf N}\); \[\Phi_{f} = ({\bf n}\cdot{\bf f})^{2}+({\bf N}\cdot{\bf f})^{2}, \tag{29}\] \[\Phi_{\perp} = ({\bf n}\times{\bf f})^{2}+({\bf N}\times{\bf f})^{2},\] (30) \[\Phi_{g} = {\bf f}\cdot{\bf nf}\cdot({\bf g}\times{\bf n})+{\bf f}\cdot{\bf N }{\bf f}\cdot({\bf g}\times{\bf N}),\] (31) \[\Phi_{h} = {\bf f}\cdot{\bf nf}\cdot({\bf h}\times{\bf n})+{\bf f}\cdot{\bf N }{\bf f}\cdot({\bf h}\times{\bf N}),\] (32) \[\Phi = {\bf n}^{2}+{\bf N}^{2}=\frac{1}{v^{2}}+({\bf n}\cdot\hat{\bf k}) ^{2}\] (33) \[= \Phi_{f}+\Phi_{\perp}.\] The \({\bf N}\) vector is defined in eq.(26). Introduced are also functions \(L_{0}\) and \(L_{M}\), \[L_{0} = {\bf n}\cdot{\bf g}_{\perp}\,{\bf n}\cdot{\bf h}_{\perp}+{\bf N} \cdot{\bf g}_{\perp}\,{\bf N}\cdot{\bf h}_{\perp}, \tag{34}\] \[L_{M} = ({\bf f}\cdot{\bf gh}_{\perp}+{\bf f}\cdot{\bf h}{\bf g}_{\perp})\] (35) \[\times({\bf nf}\cdot{\bf n}+{\bf N}{\bf f}\cdot{\bf N}),\] and the bilinear functions \(X_{a}\) and \(X_{b}\) \[X_{a} = 2{\bf g}\cdot{\bf f}{\bf h}\cdot{\bf f}-{\bf g}\cdot{\bf h}, \tag{36}\] \[X_{b} = {\bf g}\cdot{\bf f}{\bf h}\cdot{\bf f}. \tag{37}\] Here, orthogonal means orthogonal with respect to the \({\bf f}\) vector, i.e., \({\bf g}_{\perp}={\bf g}-{\bf f}({\bf f}\cdot{\bf g})\). ## VII Phase space Since the intermediate-state hyperons represent states whose masses may be considered fixed, it is useful to rewrite the phase-space differential making this fact explicit, by using the nesting formula [6] \[{\rm dLips}(k_{1}+k_{2};q,l_{1},l_{2},q_{1},q_{2}) =\] \[\frac{1}{(2\pi)^{2}}{\rm d}s_{g}{\rm d}s_{h}{\rm dLips}(k_{1}+k_ {2};q,p_{1},p_{2})\] \[\times{\rm dLips}(s_{g};l_{1},q_{1})\,{\rm dLips}(s_{h};l_{2},q_ {2}), \tag{38}\] where by definition \[p_{1} = l_{1}+q_{1}, \tag{39}\] \[p_{2} = l_{2}+q_{2}, \tag{40}\] with \(p_{1}^{2}=s_{g}\) and \(p_{2}^{2}=s_{h}\). Now, we know the analytic expressions for the two- and three-body phase-space differentials of eq.(38) as given in ref.[6]. For the two-body differential \[{\rm dLips}(s_{g};l_{1},q_{1})=\frac{p_{g}{\rm d}\Omega_{g}}{16\pi^{2}\sqrt{s _{g}}}, \tag{41}\] where index \(g\) reminds us we are in the cms of the \(N\pi\) pair, where each particle has momentum \[p_{g}=\frac{1}{2\sqrt{s_{g}}}\sqrt{\lambda(s_{g},m_{N}^{2},m_{\pi}^{2})}. \tag{42}\] A corresponding differential, but with index \(h\), refers to the contribution from the \(\bar{N}\bar{\pi}\) pair. 
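For a sense of the magnitudes involved, the following small Python sketch (not part of the original paper) evaluates the triangle function \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2bc-2ca\) and the two-body momentum of eq.(42) at the resonance point \(s_{g}=M_{\Lambda}^{2}\), where it reduces to the quantity \(p_{\lambda}\) used below; approximate PDG mass values are assumed.

```python
# Illustrative numerics (not from the paper): Kallen triangle function and the
# two-body momentum of eq.(42), evaluated at s_g = M_Lambda^2 where it reduces
# to p_lambda. Masses in GeV (approximate PDG values).
import math

def kallen(a, b, c):
    return a**2 + b**2 + c**2 - 2*a*b - 2*b*c - 2*c*a

def two_body_momentum(s, m1, m2):
    """c.m. momentum of a two-body state with invariant mass squared s."""
    return math.sqrt(kallen(s, m1**2, m2**2)) / (2.0 * math.sqrt(s))

M_LAMBDA, M_N, M_PI = 1.115683, 0.938272, 0.139570

p_lambda = two_body_momentum(M_LAMBDA**2, M_N, M_PI)
print(f"p_lambda = {p_lambda:.4f} GeV")  # ~0.101 GeV, the nucleon momentum in the Lambda rest frame
```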
The three-body differential of eq.(38) is calculated after having been reduced to a product of two-body differentials, \[\mathrm{dLips}(s;q,p_{1},p_{2}) =\] \[\frac{1}{2\pi}\mathrm{d}s_{c}\mathrm{dLips}(s;p_{c},q)\mathrm{dLips} (s_{c};p_{1},p_{2}), \tag{43}\] with \(p_{c}=p_{1}+p_{2}\) and \(s_{c}=p_{c}^{2}\). Furthermore, \[s_{c}=s-2\omega\sqrt{s}, \tag{44}\] so that \(\mathrm{d}s_{c}=-2\sqrt{s}\,\mathrm{d}\omega\). A straightforward evaluation of the expressions for the two-particle differentials leads to \[\mathrm{dLips}(s;p_{c},q) = \frac{|\mathbf{q}|\,\mathrm{d}\Omega_{q}}{16\pi^{2}\sqrt{s}}, \tag{45}\] \[\mathrm{dLips}(s_{c};p_{1},p_{2}) = \frac{p_{d}\mathrm{d}\Omega_{d}}{16\pi^{2}\sqrt{s_{c}}}. \tag{46}\] Consequently, the cms momentum in each of these two-body cases equals \[p_{q} = \frac{1}{2\sqrt{s}}\sqrt{\lambda(s,s_{c},0)}=|\mathbf{q}|\,, \tag{47}\] \[p_{d} = \frac{1}{2\sqrt{s_{c}}}\sqrt{\lambda(s_{c},s_{g},s_{h})}. \tag{48}\] There are four two-body states to be reckoned with. They are characterized by the following energies and momenta: for the energies \[\sqrt{s} =2\epsilon \sqrt{s_{c}}=2E_{\Lambda}\] \[\sqrt{s_{g}} =M_{\Lambda} \sqrt{s_{h}}=M_{\Lambda},\] and for the momenta \[p_{g} =p_{\lambda} p_{q} =\|\mathbf{q}\|\] \[p_{h} =p_{\lambda} p_{d} =p_{\Lambda}.\] with \(p_{\lambda}\) defined as the \[p_{\lambda}=\frac{1}{2M_{\Lambda}}\sqrt{\lambda(M_{\Lambda}^{2},m_{N}^{2},m_{ \pi}^{2})}. \tag{49}\] ## VIII Discussion In this paper we describe and analyze the structure of the differential-cross-section distribution in \(e^{+}e^{-}\) annihilation for a \(\Lambda\bar{\Lambda}\) final state, which is accompanied by an initial-state-radiation gamma. To be specific, the final state is fixed by the directional angles in the decays \(\Lambda\to N\pi\) and \(\bar{\Lambda}\rightarrow\bar{N}\pi\), denoted \(\Omega_{g}\) and \(\Omega_{h}\), and the directional angle \(\Omega_{d}\) in the \(\Lambda\bar{\Lambda}\) final state. Finally, there is the gamma three-momentum-directional angle \(\Omega_{\omega}\), and the radiated gamma energy \(\omega\). We start by rewriting the cross-section-distribution function \({\rm d}\sigma\) of eq.(1) as a product of three factors. The first factor is \[{\cal K}=\frac{1}{2s}\frac{(4\pi\alpha)^{3}}{(P^{2})^{2}}\Bigg{[}\frac{\pi}{M_ {\Lambda}\Gamma_{\Lambda}(M_{\Lambda})}\Bigg{]}^{2}. \tag{52}\] This is what remains when the delta functions \(\delta(s_{g}-M_{\Lambda}^{2})\delta(s_{h}-M_{\Lambda}^{2})\) hidden in the \({\cal K}\) factor of eq.(2) are removed. The second factor is the phase-space-density-distribution function. After having absorbed the abovementioned delta functions this factor reads \[{\rm dLips} = \frac{1}{(2\pi)^{3}}~{}4\epsilon{\rm d}\omega\frac{\omega{\rm d} \Omega_{\omega}}{(4\pi)^{2}2\epsilon}\frac{p_{\Lambda}{\rm d}\Omega_{d}}{(4 \pi)^{2}2E_{\Lambda}} \tag{53}\] \[\times \frac{p_{\lambda}{\rm d}\Omega_{g}}{(4\pi)^{2}2M_{\Lambda}}\frac {p_{\lambda}{\rm d}\Omega_{h}}{(4\pi)^{2}2M_{\Lambda}},\] where \(p_{\lambda}=\sqrt{\lambda(M_{\Lambda}^{2},m_{N}^{2},m_{\pi}^{2})/4M_{\Lambda} ^{2}}\). The third factor in the cross-section-distribution function is the squared matrix element \(\overline{|{\cal M}_{red}|^{2}}\) of eq.(5), to which we now turn our attention. 
We expand \(\overline{|{\cal M}_{red}|^{2}}\) on a set of functions, \(A^{XY}\) and \(B^{XY}\), which are bilinear forms in the form factors \(G_{M}\) and \(G_{E}\), \[A^{XY}(G_{M},G_{E}) = |G_{M}|^{2}{\cal L}_{1}^{AXY}+|G_{E}|^{2}{\cal L}_{2}^{AXY} \tag{54}\] \[+ 2\Re(G_{M}G_{E}^{\star}){\cal L}_{3}^{AXY}\] \[+ 2\Im(G_{M}G_{E}^{\star}){\cal L}_{4}^{AXY},\] and similarly for \(B^{XY}\). We refer to the set of functions \(\{{\cal L}\}\) as co-factors. Each of the suffixes \(X\) and \(Y\) stand for \(R\) or \(S\), where suffix \(R\) represents parity-conserving hyperon decay, and \(S\) parity-violating hyperon decay. Several co-factors vanish; \({\cal L}_{3}^{ARR}={\cal L}_{3}^{BRR}={\cal L}_{3}^{BSS}=0\). Co-factors suffixed \(RR\) are independent of the hyperon decay angles \(\Omega_{g}\) and \(\Omega_{h}\); co-factors \(RS\) and \(SR\) are linear in one of the decay angles \(\Omega_{g}\) or \(\Omega_{h}\); co-factors \(SS\) are linear in both \(\Omega_{g}\) and \(\Omega_{h}\). Until now, we have been concerned with the \(\gamma(N\pi)(\bar{N}\bar{\pi})\) final state, but it is also possible to determine the co-factors when one or both hyperons remain intact, i.e., after integration over \({\rm d}\Omega_{g}\) or \({\rm d}\Omega_{h}\) or both. It turns out the co-factors of sect. IV either vanish or remain intact. Thus, \[{\rm Operation:}~{}\int{\rm d}\Omega_{g}\int{\rm d}\Omega_{h}/(16\pi^{2})\] \[{\rm Co-factors;}~{}{\cal L}_{1}^{ARR},{\cal L}_{2}^{ARR},{\cal L }_{1}^{BRR},{\cal L}_{2}^{BRR}.\] \[{\rm Operation:}~{}\int{\rm d}\Omega_{g}/(4\pi)\] \[{\rm Co-factors;}~{}{\cal L}_{4}^{ARS}.\] Operation: \(\int{\rm d}\Omega_{h}/(4\pi)\) Co-factors; \({\cal L}_{4}^{ASR}\). Co-factors not listed vanish. ## Appendix The energy-momentum parameters describing the decay of Lambda into proton and pion are, \[p_{g} = \frac{1}{2M_{\Lambda}}\sqrt{\lambda(M_{\Lambda}^{2},m_{N}^{2},m_{ \pi}^{2})}, \tag{55}\] \[E_{g} = \frac{1}{2M_{\Lambda}}\big{(}M_{\Lambda}^{2}+m_{N}^{2}-m_{\pi}^{2 }\big{)}, \tag{56}\] representing the proton in the Lambda rest system. The weight functions \(a_{y}\) and \(b_{y}\) of eq.(6), can be written as \[a_{y} = \frac{2}{(\epsilon\omega)^{2}\sin^{2}\theta}\Big{[}2E_{\Lambda}^{ 2}\Big{]}, \tag{57}\] \[b_{y} = \frac{2}{(\epsilon\omega)^{2}\sin^{2}\theta}\Big{[}2(\epsilon^{ 4}+E_{\Lambda}^{4})\] (58) \[- (\epsilon\omega)^{2}\sin^{2}\theta\Big{]}.\] We notice that \(a_{y}\) and \(b_{y}\) have different dimensions.
2308.00658
Considerations on the EMF Exposure Relating to the Next Generation Non-Terrestrial Networks
The emerging fifth generation (5G) and the upcoming sixth generation (6G) communication technologies introduce the use of space- and airborne networks in their architectures under the scope of non-terrestrial networks (NTNs). With this integration of satellite and aerial platform networks, better coverage, network flexibility and easier deployment can be achieved. Correspondingly, satellite broadband internet providers have launched an increasing number of small satellites operating in low earth orbit (LEO). These recent developments imply an increased electromagnetic field (EMF) exposure to humans and the environment. In this work, we provide a short overview of the state of consumer-grade satellite networks including broadband satellites and future NTN services. We also consider the regulatory state governing their operation within the context of EMF exposure. Finally, we highlight the aspects that are relevant to the assessment of EMF exposure in relation to NTNs.
Amina Fellan, Ainur Daurembekova, Hans D. Schotten
2023-08-01T16:48:50Z
http://arxiv.org/abs/2308.00658v1
# Considerations on the EMF Exposure Relating to the Next Generation Non-Terrestrial Networks ###### Abstract The emerging fifth generation (5G) and the upcoming sixth generation (6G) communication technologies introduce the use of space- and airborne networks in their architectures under the scope of non-terrestrial networks (NTNs). With this integration of satellite and aerial platform networks, better coverage, network flexibility and easier deployment can be achieved. Correspondingly, satellite broadband internet providers have launched an increasing number of small satellites operating in low earth orbit (LEO). These recent developments imply an increased electromagnetic field (EMF) exposure to humans and the environment. In this work, we provide a short overview of the state of consumer-grade satellite networks including broadband satellites and future NTN services. We also consider the regulatory state governing their operation within the context of EMF exposure. Finally, we highlight the aspects that are relevant to the assessment of EMF exposure in relation to NTNs. non-terrestrial networks, mega-constellations, 6G, human EMF exposure, regulations, satellite networks ## I Introduction Discussions about the next sixth generation (6G) of communications systems are well under way in both the academic and industry communities. Innovative solutions and disruptive technologies are being proposed to account for the shortcomings and challenges that faced previous generations. One of the challenges faced by current terrestrial networks (TNs) is the difficulty, and at times infeasibility, of providing global coverage to users in remote areas. The infrastructure deployment for TNs is often costly, rigid, and requires meticulous planning. Moreover, in the case of natural disasters, TNs are prone to breakdowns, and restoring their operation can be challenging. Non-terrestrial networks (NTNs) are seen as one of the appealing solutions to complement TN services and expand their coverage to remote areas around the globe. Recent advancements in the satellite and space industries lowered launch costs for satellite systems and advanced the markets for consumer-service-based deployments. Nowadays, the Third Generation Partnership Project (3GPP) is laying out the foundations for integrating NTNs in the next generation of communication networks. With 6G, two connectivity scenarios would be possible: direct connectivity between a satellite or an aerial platform and handset devices, as well as indirect connectivity that exploits the satellites' connectivity in the backhaul [1]. On the other hand, over the past decade, several companies announced revolutionary plans to provide high-speed internet access using constellations of a large number of satellites to serve user terminals around the globe. This increasing interest in bringing NTNs closer to end-users raises questions about how such deployments would affect the overall level of electromagnetic field (EMF) exposure experienced by humans and the environment. For TNs, organizations such as the World Health Organization (WHO), International Commission on Non-Ionizing Radiation Protection (ICNIRP), Institute of Electrical and Electronics Engineers (IEEE), and International Telecommunication Union (ITU) are concerned with evaluating research and studies on human EMF exposure [2, 3, 4, 5]. The outcomes of their evaluations are used to derive safe limits and guidelines to restrict the levels of EMF exposure.
In this work, we consider previous efforts done to study EMF exposure related to satellite communications. This work is organized as follows. In Section II, we provide a summary on existing broadband internet satellite networks as well as the next generation NTN networks as proposed by the 3GPP. Section III explores considerations with respect to EMF exposure from satellite and airborne networks and possible challenges related to EMF measurements for such scenarios. The state of regulations concerning EMF exposure within the scope of satellite communications is explored in Section IV. Finally, our conclusions are given in V. ## II Non-Terrestrial Networks Interest in leveraging satellite and airborne networks for civilian applications has been gaining momentum over the recent years. Compared to TNs, satellite networks have the advantage of a larger coverage area which translates to better accessibility to remote and rural areas where the deployment of TNs can be challenging, not feasible, or not possible. NTNs could also ensure a fallback solution in natural disasters situations on the occasion that TNs' services are disrupted. Within the scope of 6G networks, the introduction and integration of NTNs in the overall network architecture play an essential role in providing global network coverage and improving the network's resiliency. This will not be the first attempt to extend the coverage of TNs via satellite connectivity [7], nor to provide mobile satellite services to users around the globe [8]. However, the advent of the new low-cost space- and air-borne vehicles as well as the substantial development of communications technologies and markets makes the reintroduction of NTNs to support TNs quite attractive. In this section we provide an overview of the state of the current and upcoming satellite networks, focusing particularly on those providing direct links of two-way satellite communications to user terminals and handheld devices. We first consider the 3GPP's ongoing work on NTNs, followed by existing examples of mega-constellations of non-geostationary satellite orbit (NGSO) satellite systems. ### _New Radio - Non-Terrestrial Networks_ As per the 3GPP's vision, the scope of NTNs includes spaceborne systems operating in the geostationary earth orbit (GEO), NGSO (including low earth orbit (LEO) and medium earth orbit (MEO)), as well as airborne platforms such as high altitude platform station (HAPS) and unmanned aerial systems (UASs) [10]. The ongoing work on the NTN standards can be broadly split into two classes, namely, new radio (NR)-NTN and Internet of Things (IoT)-NTN. The former addresses enhanced mobile broadband (eMBB) use cases, thus complementing and expanding the capacity of services provided by TNs over a larger coverage area. Whereas the latter is concerned with massive machine type communication (mMTC) applications, providing satellite connectivity to IoT devices. The first studies considering the introduction of NTNs to the fifth generation (5G) NR were initiated in 2017 by the 3GPP and appear in Rel-15 under TR 38.811 [9]. The 3GPP study considered use cases, propagation channel models, network architectures, deployment scenarios, and potential challenges associated with the integration of NTNs with the NR interface. Figure 1 illustrates some of the possible scenarios considered for NTNs based on [9]. 
In Rel-16 under TR 32.321, the 3GPP resumed its investigation relating to the support of NR protocols in NTNs, focusing mainly on satellite nodes but also considering other non-terrestrial platforms such as UASs and HAPSs [11]. The technical report provided a refined proposal for NTN-based next generation radio access network (NG-RAN) architectures, system- and link-level simulations for the physical layer, and considerations of issues relating to delay, Doppler shifts, tracking area and user mobility that are particular to NTNs. Rel-17 builds upon the first studies carried out in [9] and [11] to define an initial set of specifications to support NTNs in NR. Co-existence issues between NTN and TN channels as well as the radio frequency (RF) requirements for satellite access nodes (SANs) and NTN-supporting user equipments (UEs) were the central point that the 3GPP considered in TR 38.863 [6]. Based on the ITU radio regulations [12], the frequency bands n255 and n256 in the L- and the S-bands were designated by the 3GPP for NTN operation in the United States and internationally, respectively. Details of the 3GPP-approved frequency allocations for NTNs are listed in Table I. At present, frequency division duplex (FDD) channels are supported with plans to consider time division duplex (TDD) subsequently. For the operation at frequencies above 10 GHz, the satellite Ka-band is being considered [11]. In the current Rel-18, the 3GPP is discussing in TR 38.882 the regulatory aspects and challenges arising from the verification of UE location information within the coverage area of NTNs [13]. The definition and allocation of frequencies to the uplink and downlink in frequency range 2 (FR2) is also currently under consideration. Despite the fact that the specifications and standardization of NTNs are at the moment still being defined by the 3GPP, the first initiatives to realize the 3GPP's NTN vision of a global hybrid mobile network are currently underway.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline NR & Satellite & \multicolumn{2}{c|}{Frequency} & \multicolumn{1}{c|}{**Duplex**} & \multicolumn{1}{c|}{**Coexisting terrestrial**} \\ \cline{3-6} **band** & **band** & _Uplink_ & _Downlink_ & **mode** & **NR bands** \\ \hline **n255** & L & 1626.5 MHz - 1660.5 MHz & 1525 MHz - 1559 MHz & FDD & n24*, n99* \\ \hline **n256** & S & 1980 MHz – 2010 MHz & 2170 MHz – 2200 MHz & FDD & n2, n25, n65, n66, n70 \\ \hline \multicolumn{6}{l}{*US-specific.} \\ \end{tabular} \end{table} TABLE I: Frequencies allocated to NTNs by the 3GPP [6]

Fig. 1: NTN architecture according to the different scenarios considered by the 3GPP's study in [9]

For instance, the Omnispace Spark program launched its first nanosatellite in 2022 as part of a global NGSO network; it is foreseen to operate in the S-band, supporting the specifications of the 3GPP's n256 band and targeting mainly IoT applications [14]. Another milestone is the first UE chipset supporting NR-NTN. It was announced in February 2023 by MediaTek [15]. The chipset is compliant with the 3GPP's specifications for NR-NTN defined in Rel-17. It is capable of supporting two-way satellite communications and is seen as an enabler for NTN services in future 5G smartphones and satellite-enabled UEs. ### _Satellite Networks_ Over the last decade, we witnessed a surge in the number of satellites being launched into space.
Companies such as SpaceX, OneWeb, Telesat, and Blue Origin are racing to kick off their mega-constellations by sending hundreds of LEO satellites into space [16]. The main goal of LEO mega-constellations is to provide global broadband internet access to end-users over a fixed satellite service (FSS). The architecture of mega-constellation satellite systems consists of three main components: the constellation of satellites in LEO, a network of ground gateway stations, and user terminals in the form of very small aperture terminals (VSATs). We consider here aspects of two of the currently largest constellations, namely, SpaceX and OneWeb. In 2015, SpaceX announced its Starlink program with initial plans to deploy more than 4000 LEO satellites in orbit. Starlink satellites are operated on circular orbits at an altitude of 550 km. As of the time of this writing, more than 3500 of Starlink's mini-satellites are in orbit. The Ku and Ka satellite frequency bands are used for the satellite-to-user and satellite-to-ground links, respectively [17]. The satellite and earth terminals support beamforming transmissions using phased-array antennas. Each satellite could support at least 8 beams. The peak effective isotropic radiated power (EIRP) values for the user's downlink could reach 36.71 dBW [17]. OneWeb's constellation is planned to operate in polar orbit planes at an altitude of 1200 km. A total number of 648 satellites are to be delivered into orbit to establish global coverage of OneWeb's satellite network. Between the satellite and user terminals, the Ku band is used for communications while the Ka band is allocated for the satellite-to-gateway links. OneWeb satellites can support up to 16 beams. The maximum EIRP for the user's downlink is around 34.6 dBW. Table II compares characteristics of the two major NGSO constellations currently in operation. Mega-constellation operators have also been extending their operations to include mobile satellite service (MSS) applications. SpaceX and T-Mobile announced in 2022 their plans to provide "Coverage Above and Beyond" services that support direct satellite connectivity to cellphone users over a long term evolution (LTE) interface [18]. The new service will be hosted on SpaceX's second generation NGSO satellites and will support text messaging at the initial stage, with voice and data services to follow. The satellite-to-cellular links are planned to operate on LTE band 25 frequencies, namely, 1910-1915 MHz and 1990-1995 MHz for the earth-to-space and space-to-earth links, respectively [19]. Mega-constellations are neither the only nor the first providers of MSS. Commercial two-way communication for voice and data services has been offered by companies like Iridium, Inmarsat, Globalstar, and Thuraya since the late 1990s and early 2000s. For instance, the Iridium satellite constellation provides global voice and data services to mobile users via its 66 LEO satellites [8]. Its services are available to its subscribers on the L-band (from 1616-1626.5 MHz). In 2017, Iridium launched the second generation of its satellites, known as Iridium-NEXT [20]. Figure 2 depicts the number of satellite launches of the major mobile satellite communication providers over the last decade with a clear upward trend. In 2019, Iridium-NEXT launched the required number of satellites to complete its planned constellation and provide stable operation. Additional satellites could be launched in the future as spares to maintain the constellation.
OneWeb and Starlink are still in the process of building their constellations. \begin{table} \begin{tabular}{|c|c|c|} \hline **Parameter** & **Starlink** & **OneWeb** \\ \hline **Altitude** & 550 km & 1200 km \\ \hline **Satellite** & 4408 & 648 \\ **constellation** & (initial phase) & \\ \hline **Number of user beams** & \(\geq\) 8 & 16 \\ \hline **User-to-space** & & \\ **Uplink frequency** & 14.0 – 14.5 GHz & 14.0-14.5 GHz \\ **Downlink frequency** & 10.7 – 12.7 GHz & 10.7-12.7 GHz \\ \hline **Gateway-to-space** & & \\ **Uplink frequency** & 27.5 – 29.1 GHz & 27.5-30.0 GHz \\ **Downlink frequency** & 17.8 – 18.6 GHz & 17.8-19.3 GHz \\ \hline \end{tabular} \end{table} TABLE II: Summary of Starlink and OneWeb constellations' operational characteristics

Fig. 2: Number of satellite launches within the period of 2017-2022 for the three major satellite mobile communication providers. Based on data from [21]

## III Measurements considerations and challenges The recent growing interest in ubiquitous satellite communications with direct connectivity to UEs calls for more measurement and simulation studies on the exposure levels to users and the environment. Measurement studies on EMF exposure near satellite earth stations are scarce, and the emerging technologies to be implemented in both TNs and NTNs require more attention to their implications for EMF exposure levels. Table III summarizes some of the aspects that might prove relevant when comparing EMF exposure assessment methods for TNs and NTNs. Hankin [22], supported by the US Environmental Protection Agency, provides one of the earliest, and one of the few, measurement studies assessing the EMF exposure levels from satellite communication systems. He first evaluated the EMF exposure mathematically in terms of power density as a function of distance from the source, then he performed measurements in the vicinity of the sources to determine the actual exposure levels. However, the study considers satellite communication earth terminals transmitting at an EIRP on the order of megawatts, unlike the case for NTNs, which will transmit at considerably lower powers. Moreover, such terminals are normally located away from the general population and within sites with restricted public access. Besides, EMF measurement equipment and satellite communication technologies have evolved significantly since the 1970s, when this study took place. More recent measurement studies were conducted in [23] and [24], where the authors evaluated the EMF exposure using broadband EMF meters in the vicinity of maritime satellite transmitters and satellite earth terminals, respectively. They then compared it to the ICNIRP guideline levels for occupational and general public scenarios. In [25], the authors consider the exposure from large phased-array antennas used to communicate with LEO satellite systems and provide a device-based time-averaging method to estimate the EMF exposure based on the time-averaged power density. In this section we present a few considerations on EMF exposure measurements for NTN communication scenarios. ### _Downlink: space-to-earth_ **Path loss**. By the time the signal arrives at the earth terminal, it would have experienced high losses due to several factors such as free space path loss, atmospheric loss, and ionospheric and tropospheric scintillation losses.
Free space path loss is dependent on the distance between the transmitter and receiver, which in the case of NTNs can range from about 300 km for LEO satellites up to 35,786 km for GEO satellites. The free space path loss \(L_{FS}\) is given by: \[L_{FS}\ (\mathrm{dB})=20\log_{10}\left(\frac{4\pi df}{c}\right) \tag{1}\] where \(d\) is the distance between the transmitter and receiver, \(f\) is the frequency, and \(c\) is the speed of light in vacuum. Atmospheric loss is the gaseous attenuation caused by oxygen and water vapor density [26], whereas scintillation losses are caused by irregularities in the atmosphere [27]. The overall path loss is dependent on the elevation angles of a satellite. Figure 3 depicts the path loss experienced at frequencies where satellite communications are operated, mainly from the L- to the Ka-bands. The highlighted and zoomed-in section corresponds to NR bands n255 and n256 that are planned for NTNs' operation to support mobile satellite services. Five different elevations for NTN space terminals are considered, namely 20 km for HAPS, 300 km for the lower LEO satellite altitude range, 1500 km for the upper LEO satellite altitude range, 20,000 km for an average MEO satellite altitude, and 35,786 km for GEO satellites. ### _Uplink: earth-to-space_ **Path loss**. The uplink undergoes similar free space path loss to that experienced by the downlink. As a result, higher antenna gains are required at the user terminals transmitting from earth to space. **Terminal type**. Depending on the earth terminal type, the maximum transmitted power is defined accordingly. For instance, in [6] the 3GPP has assigned UE power class 3 to UEs for NTN use cases, which includes handheld devices and allows a maximum transmit power of 26 dBm with a power tolerance level of +/- 2 dB for any transmission bandwidth within the channel bandwidth. UE power class 3 is the default power class for TNs as well. Other terminal types such as VSATs and earth stations in motion (ESIMs) are configured to transmit at higher powers. VSATs and ESIMs would provide users indirect access to NTN services. They are planned to be used for FR2 operation in NTN. **Antenna type**. In contrast to UEs, VSATs and ESIMs would use directional antennae such as parabolic antennae, i.e., the resulting EMF exposure is concentrated in their antennae's main transmission lobe. Thus, additional safety measures need to be taken into account when mounting such terminals to ensure the safety of users.

Fig. 3: Path loss due to free space (FS) and atmospheric gas (AG) conditions at a temperature of 20\({}^{\circ}\)C and corresponding to the frequency range of satellite communications

**Service usage profiles**. For handheld UE devices, a major component of the EMF exposure occurs in the antenna's near-field. Adequate specific absorption rate (SAR) assessment methods are thus necessary in this case to determine the level of RF power absorbed by the human body [3]. The usage profile (voice calls, texting, browsing, etc.) influences factors such as the proximity of the device to the body, the duration of usage, and consequently the intensity of EMF exposure [28]. ## IV Regulations and standardization The recent advancements in wireless communications networks and their pervasiveness in our daily lives have raised a few calls from the public concerning the increased levels of EMF exposure in the environment.
Several organizations, such as the WHO, ICNIRP, IEEE, ITU, and the International Electrotechnical Commission (IEC), continuously evaluate research studying the influence of EMF exposure on humans. They use the outcome of their evaluations to define and maintain guidelines and recommendations with the goal of protecting humans and the environment against any possible harmful effects caused by exposure to radio-frequency electromagnetic fields [3, 4, 29, 30]. The EMF exposure limits set by the regulatory organizations target the overall maximum possible radiation at a given frequency and location. This maximum level should take into consideration the EMF exposure due to all different sources and radio access technologies (RATs) surrounding the point of evaluation. Also, it has to take into account characteristics of the transmissions present, for instance, whether they are continuous or pulsed [3]. In practice, such assessments are not necessarily straightforward with regular measurement equipment as they only provide the instantaneous and average values of EMF radiation at a given location. The ICNIRP guidelines are the most widely adopted recommendations internationally. They define the reference levels for limiting the EMF exposure from sources operating at frequencies up to 300 GHz [3]. The reference levels are frequency dependent and can be evaluated in terms of the electric field, magnetic field, or the power flux density. Depending on the type of exposure in question, e.g., due to a handheld device versus due to a base station, they are defined for local and whole-body exposure. Table IV lists the exposure limits as specified by the reference levels defined by the ICNIRP in [3]. For instance, the incident power density caused by a VSAT terminal transmitting in the Ku-band, and taking into consideration any other sources transmitting in the same band, when measured in the far-field of its antenna (i.e., at a distance greater than \(2D^{2}/\lambda\) (m), where \(D\) is the largest dimension of the VSAT antenna and \(\lambda\) is the wavelength) should not exceed the limit of \(10\ W/m^{2}\) when averaged over a duration of 30 minutes and over the whole body. On the other hand, for handheld devices, due to the fact that they are used in close proximity to the body and particularly to certain critical parts such as the head, reference levels concerned with local EMF exposure apply. Handheld devices must undergo rigorous EMF exposure testing to determine the SAR levels. As per the ICNIRP, for the general public, the local SAR limit for the head and trunk lies at \(2\ W/kg\) averaged over \(10\ g\) of cubic mass and for exposure intervals greater than or equal to 6 minutes [3]. The ITU is the foremost authority regulating and organizing international access to the space spectrum [12]. The ITU also provides recommendations on measuring the EMF radiation by broadcast stations and considerations regarding the placement of satellite earth stations in [31]. The choice of the location for such stations needs to be carefully determined such that the near-field and transition zones are far from residential and industrial areas. Since satellite earth stations transmit to satellites at higher powers, it is very likely that the resulting EMF levels exceed the reference levels. Thus, access to such facilities must be restricted to authorized personnel.
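To give a sense of the magnitudes discussed in Sections II-IV, the following Python sketch evaluates the free-space path loss of eq. (1) for the NTN altitudes mentioned in Section III and the corresponding far-field power flux density at the ground for a user-downlink EIRP of 36.71 dBW (the peak Starlink value quoted in Section II). Atmospheric, scintillation, and antenna-pattern effects are neglected, so this is a rough illustrative calculation under those assumptions, not an exposure assessment.

```python
# Rough illustration (not from the paper): free-space path loss per eq.(1) and
# the far-field power flux density on the ground for a satellite user downlink.
# Atmospheric, scintillation, and antenna-pattern effects are ignored.
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space path loss of eq.(1), in dB."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / C)

def flux_density(eirp_dbw: float, d_m: float) -> float:
    """On-axis power flux density S = EIRP / (4*pi*d^2), in W/m^2."""
    return 10.0 ** (eirp_dbw / 10.0) / (4.0 * math.pi * d_m**2)

altitudes_km = {"HAPS": 20, "LEO (low)": 300, "Starlink": 550,
                "LEO (high)": 1500, "MEO": 20_000, "GEO": 35_786}

for name, h in altitudes_km.items():
    d = h * 1e3
    print(f"{name:10s} {h:>6} km  FSPL @ 2 GHz: {fspl_db(d, 2e9):6.1f} dB")

# Peak Starlink user-downlink EIRP of 36.71 dBW seen from 550 km altitude:
s = flux_density(36.71, 550e3)
print(f"flux density ~ {s:.2e} W/m^2 (ICNIRP whole-body general-public level: 10 W/m^2)")
```

For the 550 km case this gives a path loss of roughly 153 dB at 2 GHz and a ground-level flux density on the order of nW/m\({}^{2}\), many orders of magnitude below the whole-body reference level quoted above, which is consistent with the expectation that the downlink contribution from distant space terminals is small compared with nearby uplink transmitters.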
With the introduction of NTNs, the existing guidelines and recommendations need to be re-assessed to take into account the increased number of satellites in space and user terminals on earth, and any resulting consequences for EMF levels. ## V Conclusions In this work, we presented a short overview of the state of next-generation NTNs and their position in future 6G networks. Given this growing interest in NTNs, questions on how to assess their possible contribution to the EMF exposure of humans and the environment need to be taken into account. NTNs will bring direct connectivity of UEs, such as smartphones and tablets, to satellite and airborne communication systems, which raises questions on the levels of EMF exposure users are likely to experience in such scenarios. We provided considerations regarding the implications of the spread of these technologies on EMF exposure levels in the environment from the downlink and uplink perspectives. We observed there is a shortage of EMF assessment studies concerned with NTN scenarios. More \begin{table} \begin{tabular}{|c|c|c|} \hline **Network** & **TN** & **NTN** \\ \hline **Antenna type** & omni-directional & directional \\ \hline **Cell** & stationary & stationary (GEO), \\ **coverage** & & mobile (NGSO) \\ \hline **Coverage** & up to 30 km & up to 1000 km (NGSO), \\ **radius** & & up to 3500 km (GEO) \\ \hline **Frequency** & FR1, & FR1, \\ **range (FR)** & FR2 & FR2* \\ \hline **Operation** & up to 7.125 GHz, & 1-2 GHz \\ **frequencies** & 24.25-71.0 GHz & above 10 GHz* \\ \hline **Duplex mode** & TDD / FDD & FDD* \\ \hline **Terminal type** & UE & UE, \\ & & VSAT, \\ & & ESIM \\ \hline **UE power** & class 2, & class 3 \\ **class** & class 3 & \\ \hline \end{tabular} * NTN specifics are still under development by the 3GPP. \end{table} TABLE III: Comparison of some of the considerations on TNs vs NTNs
2307.08481
Derivation-Graph-Based Characterizations of Decidable Existential Rule Sets
This paper establishes alternative characterizations of very expressive classes of existential rule sets with decidable query entailment. We consider the notable class of greedy bounded-treewidth sets (gbts) and a new, generalized variant, called weakly gbts (wgbts). Revisiting and building on the notion of derivation graphs, we define (weakly) cycle-free derivation graph sets ((w)cdgs) and employ elaborate proof-theoretic arguments to obtain that gbts and cdgs coincide, as do wgbts and wcdgs. These novel characterizations advance our analytic proof-theoretic understanding of existential rules and will likely be instrumental in practice.
Tim S. Lyon, Sebastian Rudolph
2023-07-17T13:39:08Z
http://arxiv.org/abs/2307.08481v2
# Derivation-Graph-Based Characterizations of Decidable Existential Rule Sets+ ###### Abstract This paper establishes alternative characterizations of very expressive classes of existential rule sets with decidable query entailment. We consider the notable class of greedy bounded-treewidth sets (**gbts**) and a new, generalized variant, called weakly gbts (**wgbts**). Revisiting and building on the notion of derivation graphs, we define (weakly) cycle-free derivation graph sets ((**w**)**cdgs**) and employ elaborate proof-theoretic arguments to obtain that **gbts** and **cdgs** coincide, as do **wgbts** and **wcdgs**. These novel characterizations advance our analytic proof-theoretic understanding of existential rules and will likely be instrumental in practice. Keywords:TGDs query entailment bounded treewidth proof-theory ## 1 Introduction The formalism of existential rules has come to prominence as an effective approach for both specifying and querying knowledge. Within this context, a knowledge base takes the form \(\mathcal{K}=(\mathcal{D},\mathcal{R})\), where \(\mathcal{D}\) is a finite collection of atomic facts (called a _database_) and \(\mathcal{R}\) is a finite set of _existential rules_ (called a _rule set_), which are first-order formulae of the form \(\forall\mathbf{xy}(\varphi(\mathbf{x},\mathbf{y})\rightarrow\exists\mathbf{z} \psi(\mathbf{y},\mathbf{z}))\). Although existential rules are written in a relatively simple language, they are expressive enough to generalize many important languages used in knowledge representation, including rule-based formalisms as well as such based on description logics. Moreover, existential rules have meaningful applications within the domain of ontology-based query answering [2], data exchange and integration [9], and have proven beneficial in the study of general decidability criteria [10]. The _Boolean conjunctive query entailment problem_ consists of taking a knowledge base \(\mathcal{K}\), a Boolean conjunctive query (BCQ) \(q\), and determining if \(\mathcal{K}\models q\). As this problem is known to be undecidable for arbitrary rule sets [7], much work has gone into identifying existential rule fragments for which decidability can be reclaimed. Typically, such classes of rule sets are described in one of two ways: either, a rule set's membership in said class can be established through easily verifiable _syntactic properties_ (such classes are called _concrete classes_), or the property is more _abstract_ (which is often defined on the basis of semantic notions) and may be hard or even impossible to algorithmically determine (such classes are called _abstract classes_). Examples of concrete classes include functional/inclusion dependencies [11], datalog, and guarded rules [6]. Examples of abstract classes include finite expansion sets [4], finite unification sets [3], and bounded-treewidth sets (**bts**) [6]. Yet, there is another means of establishing the decidability of query entailment: only limited work has gone into identifying classes of rule sets with decidable query entailment based on their _proof-theoretic characteristics_, in particular, based on specifics of the derivations such rules produce. To the best of our knowledge, only the class of _greedy bounded treewidth sets_ (**gbts**) has been identified in such a manner (see [14]). 
A rule set qualifies as **gbts** when every derivation it produces is _greedy_, in the sense that it is possible to construct a tree decomposition of finite width in a "greedy" fashion alongside the derivation, ensuring the existence of a model with finite treewidth for the knowledge base under consideration, thus warranting the decidability of query entailment [6]. In this paper, we investigate the **gbts** class and three new classes of rule sets where decidability is determined proof-theoretically. First, we define a weakened version of **gbts**, dubbed **wgbts**, where the rule set need only produce _at least one greedy derivation_ relative to any given database. Second, we investigate two new classes of rule sets, dubbed _cycle-free derivation graph sets_ (**cdgs**) and _weakly cycle-free derivation graph sets_ (**wcdgs**), which are defined relative to the notion of a _derivation graph_. Derivation graphs were introduced by Baget et al. [5] and are directed acyclic graphs encoding _how_ certain facts are derived in the course of a derivation. Notably, via the application of _reduction operations_, a derivation graph may be reduced to a tree, which serves as a tree decomposition of a model of the considered knowledge base. Such objects helped establish that (weakly) frontier-guarded rule sets are **bts** [5]. In short, our key contributions are: 1. We investigate how proof-theoretic structures give rise to decidable query entailment and propose three new classes of rule sets. 2. We show that \(\textbf{gbts}=\textbf{cdgs}\) and \(\textbf{wgbts}=\textbf{wcdgs}\), establishing a correspondence between greedy derivations and reducible derivation graphs. 3. We show that **wgbts**_properly subsumes_**gbts** via a novel proof transformation argument. Therefore, by the former point, we also find that **wcdgs** properly subsumes **cdgs**. Figure 1: A graphic depicting the containment relations between the classes of rule sets considered. The solid edges represent strict containment relations. The paper is organized as follows: In Section 2, we define preliminary notions. We study **gbts** and **wgbts** in Section 3, and show that the latter class properly subsumes the former via an intricate proof transformation argument. In Section 4, we define **cdgs** and **wcdgs** as well as show that \(\mathbf{gbts}=\mathbf{cdgs}\) and \(\mathbf{wgbts}=\mathbf{wcdgs}\). Last, in Section 5, we conclude and discuss future work. ## 2 Preliminaries **Syntax and formulae.** We let \(\mathbf{Ter}\) be a set of _terms_, which is the union of three countably infinite, pairwise disjoint sets, namely, the set of _constants_ \(\mathbf{Con}\), the set of _variables_ \(\mathbf{Var}\), and the set of _nulls_ \(\mathbf{Null}\). We use \(a\), \(b\), \(c\), \(\ldots\) (occasionally annotated) to denote constants, and \(x\), \(y\), \(z\), \(\ldots\) (occasionally annotated) to denote both variables and nulls. A _signature_ \(\Sigma\) is a set of _predicates_ \(p\), \(q\), \(r\), \(\ldots\) (which may be annotated) such that for each \(p\in\Sigma\), \(ar(p)\in\mathbb{N}\) is the _arity_ of \(p\). For simplicity, we assume a fixed signature \(\Sigma\) throughout the paper. An _atom_ over \(\Sigma\) is defined to be a formula of the form \(p(t_{1},\ldots,t_{n})\), where \(p\in\Sigma\), \(ar(p)=n\), and \(t_{i}\in\mathbf{Ter}\) for each \(i\in\{1,\ldots,n\}\). A _ground atom_ over \(\Sigma\) is an atom \(p(a_{1},\ldots,a_{n})\) such that \(a_{i}\in\mathbf{Con}\) for each \(i\in\{1,\ldots,n\}\).
We will often use \(\mathbf{t}\) to denote a tuple \((t_{1},\ldots,t_{n})\) of terms and \(p(\mathbf{t})\) to denote a (ground) atom \(p(t_{1},\ldots,t_{n})\). An _instance_ over \(\Sigma\) is defined to be a (potentially infinite) set \(\mathcal{I}\) of atoms over constants and nulls, and a _database_ \(\mathcal{D}\) is a finite set of ground atoms. We let \(\mathcal{X}\), \(\mathcal{Y}\), \(\ldots\) (occasionally annotated) denote (potentially infinite) sets of atoms, with \(\mathbf{Ter}(\mathcal{X})\), \(\mathbf{Con}(\mathcal{X})\), \(\mathbf{Var}(\mathcal{X})\), and \(\mathbf{Null}(\mathcal{X})\) denoting the set of terms, constants, variables, and nulls occurring in the atoms of \(\mathcal{X}\), respectively. **Substitutions and homomorphisms.** A _substitution_ is a partial function over the set of terms \(\mathbf{Ter}\). A _homomorphism_ \(h\) from a set \(\mathcal{X}\) of atoms to a set \(\mathcal{Y}\) of atoms is a substitution \(h:\mathbf{Ter}(\mathcal{X})\rightarrow\mathbf{Ter}(\mathcal{Y})\) such that (i) \(p(h(t_{1}),\ldots,h(t_{n}))\in\mathcal{Y}\) if \(p(t_{1},\ldots,t_{n})\in\mathcal{X}\), and (ii) \(h(a)=a\) for each \(a\in\mathbf{Con}\). If \(h\) is a homomorphism from \(\mathcal{X}\) to \(\mathcal{Y}\), we say that \(h\) _homomorphically maps_ \(\mathcal{X}\) to \(\mathcal{Y}\). Atom sets \(\mathcal{X},\mathcal{Y}\) are _homomorphically equivalent_, written \(\mathcal{X}\equiv\mathcal{Y}\), _iff_ \(\mathcal{X}\) homomorphically maps to \(\mathcal{Y}\), and vice versa. An _isomorphism_ is a bijective homomorphism \(h\) where \(h^{-1}\) is a homomorphism. **Existential rules.** Whereas databases encode assertional knowledge, ontologies in the current setting consist of _existential rules_, which we will frequently refer to simply as _rules_. An existential rule is a first-order sentence of the form: \[\rho=\forall\mathbf{x}\mathbf{y}(\varphi(\mathbf{x},\mathbf{y})\rightarrow \exists\mathbf{z}\psi(\mathbf{y},\mathbf{z}))\] where \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\) are pairwise disjoint collections of variables, \(\varphi(\mathbf{x},\mathbf{y})\) is a conjunction of atoms over constants and the variables \(\mathbf{x},\mathbf{y}\), and \(\psi(\mathbf{y},\mathbf{z})\) is a conjunction of atoms over constants and the variables \(\mathbf{y},\mathbf{z}\). We define \(\mathit{body}(\rho)=\varphi(\mathbf{x},\mathbf{y})\) to be the _body_ of \(\rho\), and \(\mathit{head}(\rho)=\psi(\mathbf{y},\mathbf{z})\) to be the _head_ of \(\rho\). For convenience, we will often interpret a conjunction \(p_{1}(\mathbf{t}_{1})\wedge\cdots\wedge p_{n}(\mathbf{t}_{n})\) of atoms (such as the body or head of a rule) as a set \(\{p_{1}(\mathbf{t}_{1}),\cdots,p_{n}(\mathbf{t}_{n})\}\) of atoms; if \(h\) is a homomorphism, then \(h(p_{1}(\mathbf{t}_{1})\wedge\cdots\wedge p_{n}(\mathbf{t}_{n})):=\{p_{1}(h( \mathbf{t}_{1})),\cdots,p_{n}(h(\mathbf{t}_{n}))\}\) with \(h\) applied componentwise to each tuple \(\mathbf{t}_{i}\) of terms. The _frontier_ of \(\rho\), written \(\mathit{fr}(\rho)\), is the set of variables \(\mathbf{y}\) that the body and head of \(\rho\) have in common, that is, \(\mathit{fr}(\rho)=\mathbf{Var}(\mathit{body}(\rho))\cap\mathbf{Var}(\mathit{ head}(\rho))\). We define a _frontier atom_ in a rule \(\rho\) to be an atom containing at least one frontier variable. We use \(\rho\) and annotated versions thereof to denote rules, as well as \(\mathcal{R}\) and annotated versions thereof to denote finite sets of rules (simply called _rule sets_).
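To make these notions concrete, the following minimal Python sketch encodes atoms, substitutions, and rule frontiers. The encoding (constants as plain strings, variables prefixed with `?`, nulls prefixed with `_`) is our own convention for illustration and is not part of the formal development above.

```python
# Minimal sketch of the syntax above; the term encoding is an assumption of this example.
# Constants: plain strings ("a"); variables: "?x"; nulls: "_n0".
# An atom is a pair (predicate, tuple-of-terms); atom sets are Python sets.

def is_var(t):   return t.startswith("?")
def is_null(t):  return t.startswith("_")
def is_const(t): return not (is_var(t) or is_null(t))

def terms_of(atoms):
    """All terms occurring in a set of atoms."""
    return {t for (_, args) in atoms for t in args}

def apply_subst(h, atoms):
    """Apply a substitution h (a dict over terms) to a set of atoms; unmapped terms stay fixed."""
    return {(p, tuple(h.get(t, t) for t in args)) for (p, args) in atoms}

def frontier(rule):
    """A rule is a pair (body, head) of atom sets; its frontier is the set of shared variables."""
    body, head = rule
    return {t for t in terms_of(body) if is_var(t)} & {t for t in terms_of(head) if is_var(t)}

# Example: rho = p(x, y) -> exists z . q(y, z); its frontier is {"?y"}.
rho = ({("p", ("?x", "?y"))}, {("q", ("?y", "?z"))})
assert frontier(rho) == {"?y"}
```

With a rule represented as a body/head pair, the existential variables are simply the head variables outside the frontier; this representation suffices for the derivation-related sketches below.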
Models.We note that sets of atoms (which include instances and databases) may be seen as first-order interpretations, and so, we may use \(\models\) to represent the satisfaction of formulae on such structures. A set of atoms \(\mathcal{X}\) satisfies a set of atoms \(\mathcal{Y}\) (or, equivalently, \(\mathcal{X}\) is a model of \(\mathcal{Y}\)), written \(\mathcal{X}\models\mathcal{Y}\), _iff_ there exists a homomorphic mapping from \(\mathcal{Y}\) to \(\mathcal{X}\). A set of atoms \(\mathcal{X}\) satisfies a rule \(\rho\) (or, equivalently, \(\mathcal{X}\) is a model of \(\rho\)), written \(\mathcal{X}\models\rho\), _iff_ for any homomorphism \(h\), if \(h\) is a homomorphism from \(\mathit{body}(\rho)\) to \(\mathcal{X}\), then it can be extended to a homomorphism \(\overline{h}\) that also maps \(\mathit{head}(\rho)\) to \(\mathcal{X}\). A set of atoms \(\mathcal{X}\) satisfies a rule set \(\mathcal{R}\) (or, equivalently, \(\mathcal{X}\) is a model of \(\mathcal{R}\)), written \(\mathcal{X}\models\mathcal{R}\), _iff_\(\mathcal{X}\models\rho\) for every rule \(\rho\in\mathcal{R}\). If a model \(\mathcal{X}\) of a set of atoms, a rule, or a rule set homomorphically maps into _every_ model of that very set of atoms, rule, or rule set, then we refer to \(\mathcal{X}\) as a _universal model_ of that set of atoms, rule, or rule set [8]. Knowledge bases and querying.A _knowledge base (KB)_\(\mathcal{K}\) is defined to be a pair \((\mathcal{D},\mathcal{R})\), where \(\mathcal{D}\) is a database and \(\mathcal{R}\) is a rule set. An instance \(\mathcal{I}\) is a _model_ of \(\mathcal{K}=(\mathcal{D},\mathcal{R})\)_iff_\(\mathcal{D}\subseteq\mathcal{I}\) and \(\mathcal{I}\models\mathcal{R}\). We consider querying knowledge bases with _conjunctive queries (CQs)_, that is, with formulae of the form \(q(\mathbf{y})=\exists\mathbf{x}\varphi(\mathbf{x},\mathbf{y})\), where \(\varphi(\mathbf{x},\mathbf{y})\) is a non-empty conjunction of atoms over the variables \(\mathbf{x},\mathbf{y}\) and constants. We refer to the variables \(\mathbf{y}\) in \(q(\mathbf{y})\) as _free_ and define a _Boolean conjunctive query (BCQ)_ to be a CQ without free variables, i.e. a BCQ is a CQ of the form \(q=\exists\mathbf{x}\varphi(\mathbf{x})\). A knowledge base \(\mathcal{K}=(\mathcal{D},\mathcal{R})\)_entails_ a CQ \(q(\mathbf{y})=\exists\mathbf{x}\varphi(\mathbf{x},\mathbf{y})\), written \(\mathcal{K}\models q(\mathbf{y})\), _iff_\(\varphi(\mathbf{x},\mathbf{y})\) homomorphically maps into every model \(\mathcal{I}\) of \(\mathcal{K}\); we note that this is equivalent to \(\varphi(\mathbf{x},\mathbf{y})\) homomorphically mapping into a universal model of \(\mathcal{D}\) and \(\mathcal{R}\). As we are interested in extracting implicit knowledge from the explicit knowledge presented in a knowledge base \(\mathcal{K}=(\mathcal{D},\mathcal{R})\), we are interested in deciding the _BCQ entailment problem_:1 Footnote 1: We recall that entailment of non-Boolean CQs or even query answering can all be reduced to BCQ entailment in logarithmic space. (BCQ Entailment) Given a KB \(\mathcal{K}\) and a BCQ \(q\), is it the case that \(\mathcal{K}\models q\)? While it is well-known that the BCQ entailment problem is undecidable in general [7], restricting oneself to certain classes of rule sets (e.g. datalog or finite unification sets [5]) may recover decidability. We refer to classes of rule sets for which BCQ entailment is decidable as _query-decidable classes_. 
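As a small illustration of the role homomorphisms play in querying, the sketch below searches for a homomorphism from the atoms of a BCQ into a finite atom set; when that set happens to be a finite universal model of \((\mathcal{D},\mathcal{R})\), this check decides \(\mathcal{K}\models q\). The tuple-based encoding (query variables prefixed with `?`) is again our own assumption.

```python
# Sketch of BCQ entailment by homomorphism search into a finite atom set.
# Atoms are (predicate, args) tuples; query variables are prefixed with "?".

def is_var(t): return t.startswith("?")

def homomorphisms(query_atoms, target):
    """Yield every substitution mapping the query atoms into the target atom set."""
    atoms = sorted(query_atoms)
    def extend(i, h):
        if i == len(atoms):
            yield dict(h)
            return
        p, args = atoms[i]
        for (p2, args2) in target:
            if p2 != p or len(args2) != len(args):
                continue
            h2, ok = dict(h), True
            for t, t2 in zip(args, args2):
                if is_var(t):
                    if h2.setdefault(t, t2) != t2:
                        ok = False; break
                elif t != t2:            # constants must be matched exactly
                    ok = False; break
            if ok:
                yield from extend(i + 1, h2)
    yield from extend(0, {})

def entails_bcq(model, query_atoms):
    return next(homomorphisms(query_atoms, model), None) is not None

# Example: {p(a,b), p(b,c)} entails the BCQ  exists x,y,z . p(x,y) /\ p(y,z).
M = {("p", ("a", "b")), ("p", ("b", "c"))}
q = {("p", ("?x", "?y")), ("p", ("?y", "?z"))}
assert entails_bcq(M, q)
```

The same brute-force matching also realizes the trigger test used for derivations below, since a trigger is nothing more than a homomorphism from a rule body into an instance.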
Derivations.One means by which we can extract implicit knowledge from a given KB is through the use of _derivations_, that is, sequences of instances obtained by sequentially applying rules to given data. We say that a rule \(\rho=\forall\mathbf{xy}(\varphi(\mathbf{x},\mathbf{y})\to\exists\mathbf{z}\psi( \mathbf{y},\mathbf{z}))\) is _triggered_ in an instance \(\mathcal{I}\) via a homomorphism \(h\), written succinctly as \(\tau(\rho,\mathcal{I},h)\), _iff_\(h\) homomorphically maps \(\varphi(\mathbf{x},\mathbf{y})\) to \(\mathcal{I}\). In this case, we define \[\mathbf{Ch}(\mathcal{I},\rho,h)=\mathcal{I}\cup\overline{h}(\psi(\mathbf{y}, \mathbf{z}))\] , where \(\overline{h}\) is an extension of \(h\) mapping every variable \(z\) in \(\mathbf{z}\) to a fresh null. Consequently, we define an \(\mathcal{R}\)_-derivation_ to be a sequence \(\mathcal{I}_{0},(\rho_{1},h_{1},\mathcal{I}_{1}),\ldots,(\rho_{n},h_{n}, \mathcal{I}_{n})\) such that (i) \(\rho_{i}\in\mathcal{R}\) for each \(i\in\{1,\ldots,n\}\), (ii) \(\tau(\rho_{i},\mathcal{I}_{i-1},h_{i})\) holds for \(i\in\{1,\ldots,n\}\), and (iii) \(\mathcal{I}_{i}=\mathbf{Ch}(\mathcal{I}_{i-1},\rho,h_{i})\) for \(i\in\{1,\ldots,n\}\). We will use \(\delta\) and annotations thereof to denote \(\mathcal{R}\)-derivations, and we define the length of an \(\mathcal{R}\)-derivation \(\delta=\mathcal{I}_{0},(\rho_{1},h_{1},\mathcal{I}_{1}),\ldots,(\rho_{n},h_{ n},\mathcal{I}_{n})\), denoted \(|\delta|\), to be \(n\). Furthermore, for instances \(\mathcal{I}\) and \(\mathcal{I}^{\prime}\), we write \(\mathcal{I}\!\stackrel{{\delta}}{{\longrightarrow}}\!\!\mathcal{R }\,\mathcal{I}^{\prime}\) to mean that there exists an \(\mathcal{R}\)-derivation \(\delta\) of \(\mathcal{I}^{\prime}\) from \(\mathcal{I}\). Also, if \(\mathcal{I}^{\prime\prime}\) can be derived from \(\mathcal{I}^{\prime}\) by means of a rule \(\rho\in\mathcal{R}\) and homomorphism \(h\), we abuse notation and write \(\mathcal{I}\!\stackrel{{\delta}}{{\longrightarrow}}\!\!\mathcal{R }\,\mathcal{I}^{\prime},(\rho,h,\mathcal{I}^{\prime\prime})\) to indicate that \(\mathcal{I}\!\stackrel{{\delta}}{{\longrightarrow}}\!\!\mathcal{R }\,\mathcal{I}^{\prime}\) and \(\mathcal{I}^{\prime}\!\stackrel{{\delta^{\prime}}}{{ \longrightarrow}}\!\!\mathcal{R}\,\mathcal{I}^{\prime\prime}\) with \(\delta^{\prime}=\mathcal{I}^{\prime},(\rho,h,\mathcal{I}^{\prime\prime})\). Derivations play a fundamental role in this paper as we aim to identify (and analyze the relationships between) query-decidable classes of rule sets based on _how_ such rule sets derive information, i.e. we are interested in classes of rule sets that may be _proof-theoretically characterized_. **Chase.** A tool that will prove useful in the current work is the _chase_, which in our setting is a procedure that (in essence) simultaneously constructs all \(\mathcal{K}\)-derivations in a breadth-first manner. Although many variants of the chase exist [5; 9; 12], we utilize the chase procedure (also called the _k-Saturation_) from Baget et al. [5]. We use the chase in the current work as a purely technical tool for obtaining universal models of knowledge bases, proving useful in separating certain query-decidable classes of rule sets. 
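The single-step operation \(\mathbf{Ch}(\mathcal{I},\rho,h)\) can be sketched as follows; the encoding of atoms and the null-naming scheme are assumptions of this example rather than notation from the paper.

```python
from itertools import count

# Sketch of a single rule application Ch(I, rho, h): extend the trigger h to hbar by
# sending each existential head variable to a fresh null, then add the instantiated head.
_fresh = count()

def chase_step(instance, rule, h):
    body, head = rule
    hbar = dict(h)
    for (_, args) in head:
        for t in args:
            if t.startswith("?") and t not in hbar:   # existential variable of the head
                hbar[t] = f"_z{next(_fresh)}"
    return instance | {(p, tuple(hbar.get(t, t) for t in args)) for (p, args) in head}

# rho = p(x, y) -> exists z . q(y, z), triggered in {p(a, b)} via h = {x -> a, y -> b}:
rho = ({("p", ("?x", "?y"))}, {("q", ("?y", "?z"))})
I1 = chase_step({("p", ("a", "b"))}, rho, {"?x": "a", "?y": "b"})
print(I1)   # {('p', ('a', 'b')), ('q', ('b', '_z0'))}
```

Recording the triple \((\rho,h,\mathbf{Ch}(\mathcal{I},\rho,h))\) at every step yields an \(\mathcal{R}\)-derivation, and applying all triggered rules of \(\mathcal{R}\) in parallel, level by level, gives the breadth-first chase \(\mathbf{Ch}_{\infty}\) defined next.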
We define the _one-step application_ of all triggered rules from some \(\mathcal{R}\) in \(\mathcal{I}\) by \[\mathbf{Ch}_{1}(\mathcal{I},\mathcal{R})=\bigcup\nolimits_{\rho\in\mathcal{R},\tau(\rho,\mathcal{I},h)}\mathbf{Ch}(\mathcal{I},\rho,h),\] assuming all nulls introduced in the "parallel" applications of \(\mathbf{Ch}\) to \(\mathcal{I}\) are distinct. We let \(\mathbf{Ch}_{0}(\mathcal{I},\mathcal{R})=\mathcal{I}\), as well as let \(\mathbf{Ch}_{i+1}(\mathcal{I},\mathcal{R})=\mathbf{Ch}_{1}(\mathbf{Ch}_{i}( \mathcal{I},\mathcal{R}),\mathcal{R})\), and define the _chase_ to be \[\mathbf{Ch}_{\infty}(\mathcal{I},\mathcal{R})=\bigcup\nolimits_{i\in\mathbb{N} }\mathbf{Ch}_{i}(\mathcal{I},\mathcal{R}).\] For any KB \(\mathcal{K}=(\mathcal{D},\mathcal{R})\), the chase \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) is a universal model of \(\mathcal{K}\), that is, \(\mathcal{D}\subseteq\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\), \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\models\mathcal{R}\), and \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) homomorphically maps into every model of \(\mathcal{D}\) and \(\mathcal{R}\). **Rule dependence.** Let \(\rho\) and \(\rho^{\prime}\) be rules. We say that \(\rho^{\prime}\)_depends_ on \(\rho\)_iff_ there exists an instance \(\mathcal{I}\) such that (i) \(\rho^{\prime}\) is not triggered in \(\mathcal{I}\) via any homomorphism, (ii) \(\rho\) is triggered in \(\mathcal{I}\) via a homomorphism \(h\), and (iii) \(\rho^{\prime}\) is triggered in \(\mathbf{Ch}(\mathcal{I},\rho,h)\) via a homomorphism \(h^{\prime}\). We define the _graph of rule dependencies_[1] of a set \(\mathcal{R}\) of rules to be \(G(\mathcal{R})=(V,E)\) such that (i) \(V=\mathcal{R}\) and (ii) \((\rho,\rho^{\prime})\in E\)_iff_\(\rho^{\prime}\) depends on \(\rho\). **Treewidth.** A _tree decomposition_ of an instance \(\mathcal{I}\) is defined to be a tree \(T=(V,E)\) such that \(V\subseteq 2^{\mathbf{Ter}(\mathcal{I})}\) (where each element of \(V\) is called a _bag_) and \(E\subseteq V\times V\), satisfying the following three conditions: (i) \(\bigcup_{X\in V}X=\mathbf{Ter}(\mathcal{I})\), (ii) for each \(p(t_{1},\ldots,t_{n})\in\mathcal{I}\), there is an \(X\in V\) such that \(\{t_{1},\ldots,t_{n}\}\subseteq X\), and (iii) for each \(t\in\mathbf{Ter}(\mathcal{I})\), the subgraph of \(T\) induced by the bags \(X\in V\) with \(t\in X\) is connected (this condition is referred to as the _connectedness condition_). We define the _width_ of a tree decomposition \(T=(V,E)\) of an instance \(\mathcal{I}\) as follows: \[w(T):=\max\{|X|:X\in V\}-1\] i.e. the width is equal to the cardinality of the largest node in \(T\) minus \(1\). We let \(w(T)=\infty\)_iff_ for all \(n\in\mathbb{N}\), \(n\leq\max\{|X|:X\in V\}\). We define the _treewidth_ of an instance \(\mathcal{I}\), written \(tw(\mathcal{I})\), as follows: \[tw(\mathcal{I}):=\min\{w(T):\ T\ \text{is a tree decomposition of }\mathcal{I}\}\] i.e. the treewidth of an instance equals the minimal width among all its tree decompositions. If no tree decomposition of \(\mathcal{I}\) has finite width, we set \(tw(\mathcal{I})=\infty\). ## 3 Greediness We now discuss a property of derivations referred to as _greediness_. In essence, a derivation is greedy when the image of the frontier of any applied rule consists solely of constants from a given KB and/or nulls introduced by a _single_ previous rule application. Such derivations were defined by Thomazo et al. 
[14] and were used to identify the (query-decidable) class of _greedy bounded-treewidth sets_ (**gbts**), that is, the class of rule sets that produce only _greedy derivations_ (defined below) when applied to a database. In this section, we also identify a new query-decidable class of rule sets, referred to as _weakly greedy bounded-treewidth sets_ (**wgbts**). The **wgbts** class serves as a more liberal version of **gbts**, and contains rule sets that admit at least one greedy derivation of any derivable instance. It is straightforward to confirm that **wgbts** generalizes **gbts** since if a rule set is **gbts**, then every derivation of a derivable instance is greedy, implying that every derivable instance has _some_ greedy derivation. Yet, what is non-trivial to show is that **wgbts** _properly subsumes_ **gbts**. We are going to prove this fact by means of a proof-theoretic argument and counter-example along the following lines: first, we show under what conditions we can permute rule applications in a given derivation (see Lemma 1 below), and second, we provide a rule set which exhibits non-greedy derivations (witnessing that the rule set is not **gbts**), but for which every derivation can be transformed into a greedy derivation by means of rule permutations and replacements (witnessing **wgbts** membership). Let us now formally define greedy derivations and provide examples to demonstrate the concept of (non-)greediness. Based on this, we then proceed to define the **gbts** and **wgbts** classes. Definition 1 (Greedy Derivation [14]): We define an \(\mathcal{R}\)-derivation \[\delta=\mathcal{I}_{0},(\rho_{1},h_{1},\mathcal{I}_{1}),\ldots,(\rho_{n},h_{n},\mathcal{I}_{n})\] to be _greedy_ iff for each \(i\) such that \(0<i\leq n\), there exists a \(j<i\) such that \(h_{i}(\textit{fr}(\rho_{i}))\subseteq\mathbf{Null}(\overline{h}_{j}(\textit{head}(\rho_{j})))\cup\mathbf{Con}(\mathcal{I}_{0},\mathcal{R})\cup\mathbf{Null}(\mathcal{I}_{0})\), that is, the frontier of each applied rule is mapped to constants, terms already present in \(\mathcal{I}_{0}\), or nulls introduced by a _single_ previous rule application. Definition 2 ((Weakly) Greedy Bounded-Treewidth Set): A rule set \(\mathcal{R}\) is a _greedy bounded-treewidth set_ (**gbts**) iff every \(\mathcal{R}\)-derivation starting from a database is greedy. \(\mathcal{R}\) is a _weakly greedy bounded-treewidth set_ (**wgbts**) iff for every database \(\mathcal{D}\) and every instance \(\mathcal{I}\) derivable from \(\mathcal{D}\), there exists a greedy \(\mathcal{R}\)-derivation of \(\mathcal{I}\) from \(\mathcal{D}\). Remark 1: Observe that **gbts** and **wgbts** are characterized on the basis of derivations starting from given _databases_ only, that is, derivations of the form \(\mathcal{I}_{0},(\rho_{1},h_{1},\mathcal{I}_{1}),\ldots,(\rho_{n},h_{n},\mathcal{I}_{n})\) where \(\mathcal{I}_{0}=\mathcal{D}\) is a database. In such a case, a derivation of the above form is greedy _iff_ for each \(i\) with \(0<i\leq n\), there exists a \(j<i\) such that \(h_{i}(\textit{fr}(\rho_{i}))\subseteq\mathbf{Null}(\overline{h}_{j}(\textit{head}(\rho_{j})))\cup\mathbf{Con}(\mathcal{D},\mathcal{R})\) as databases only contain constants (and not nulls) by definition. As noted above, it is straightforward to show that **wgbts** subsumes **gbts**.
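Under the tuple-based encoding of the earlier sketches (again, our own convention), the greediness condition of Definition 1 can be checked mechanically; the step records below store the atoms added at each step, whose nulls are exactly \(\mathbf{Null}(\overline{h}_{j}(\mathit{head}(\rho_{j})))\).

```python
# Sketch of the greediness test of Definition 1 (tuple-based encoding as before).
# A derivation is given as an initial instance I0 plus steps (rule_i, h_i, new_atoms_i),
# where new_atoms_i = I_i \ I_{i-1} are the instantiated head atoms added at step i.

def is_var(t):  return t.startswith("?")
def is_null(t): return t.startswith("_")

def frontier(rule):
    body, head = rule
    bv = {t for (_, a) in body for t in a if is_var(t)}
    hv = {t for (_, a) in head for t in a if is_var(t)}
    return bv & hv

def is_greedy(I0, steps, kb_constants):
    base = {t for (_, a) in I0 for t in a} | set(kb_constants)
    for i, (rule_i, h_i, _) in enumerate(steps, start=1):
        image = {h_i[x] for x in frontier(rule_i)}
        if image <= base:
            continue   # covered by I_0 and the constants alone
        if not any(
            image <= base | {t for (_, a) in steps[j][2] for t in a if is_null(t)}
            for j in range(i - 1)
        ):
            return False
    return True
```

A rule set is then **gbts** if every derivation from every database passes this test, and **wgbts** if, for every derivable instance, at least one derivation does; neither quantification is effective as stated, which is precisely why the characterizations developed below are of interest.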
Still, establishing that **wgbts** strictly subsumes **gbts**, i.e. there are rule sets within **wgbts** that are outside **gbts**, requires more effort. As it so happens, the rule set \(\mathcal{R}_{2}\) (defined above) serves as such a rule set, admitting non-greedy \(\mathcal{R}_{2}\)-derivations, but where it can be shown that every instance derivable using the rule set admits a greedy \(\mathcal{R}_{2}\)-derivation. As a case in point, observe that the \(\mathcal{R}_{2}\)-derivations \(\delta_{1}\) and \(\delta_{2}\) both derive the same instance \(\mathcal{I}_{4}=\mathcal{I}_{3}^{\prime}\), however, \(\delta_{1}\) is a non-greedy \(\mathcal{R}_{2}\)-derivation of the instance and \(\delta_{2}\) is a greedy \(\mathcal{R}_{2}\)-derivation of the instance. Clearly, the existence of the non-greedy \(\mathcal{R}_{2}\)-derivation \(\delta_{1}\) witnesses that \(\mathcal{R}_{2}\) is not **gbts**. To establish that \(\mathcal{R}_{2}\) still falls within the **wgbts** class, we show that every non-greedy \(\mathcal{R}_{2}\)-derivation can be transformed into a greedy \(\mathcal{R}_{2}\)-derivation using two operations: (i) rule permutations and (ii) rule replacements. Regarding rule permutations, we consider under what conditions we may swap consecutive applications of rules in a derivation to yield a new derivation of the same instance. For example, in the \(\mathcal{R}_{2}\)-derivation \(\delta_{1}\) above, we may swap the consecutive applications of \(\rho_{1}\) and \(\rho_{2}\) to obtain the following derivation: \[\delta_{1}^{\prime}=\mathcal{D}_{\dagger},(\rho_{1},h_{1},\mathcal{I}_{1}),( \rho_{2},h_{3},\mathcal{I}_{1}\cup(\mathcal{I}_{3}\setminus\mathcal{I}_{2})),(\rho_{1},h_{2},\mathcal{I}_{3}),(\rho_{4},h_{4},\mathcal{I}_{4}).\] \(\mathcal{I}_{1}\cup(\mathcal{I}_{3}\setminus\mathcal{I}_{2})=\{p(a),r(b),q(a, y_{0},z_{0}),s(b,y_{2},z_{2})\}\) is derived by applying \(\rho_{2}\) and the subsequent application of \(\rho_{1}\) reclaims the instance \(\mathcal{I}_{3}\). Therefore, the same instance \(\mathcal{I}_{4}\) remains the conclusion. Although one can confirm that \(\delta_{1}^{\prime}\) is indeed an \(\mathcal{R}_{2}\)-derivation, thus serving as a successful example of a rule permutation (meaning, the rule permutation yields another \(\mathcal{R}_{2}\)-derivation), the following question still remains: for a rule set \(\mathcal{R}\), under what conditions will permuting rules within a given \(\mathcal{R}\)-derivation always yield another \(\mathcal{R}\)-derivation? We pose an answer to this question, formulated as the _permutation lemma_ below, which states that an application of a rule \(\rho\) may be permuted before an application of a rule \(\rho^{\prime}\) so long as the former rule does not depend on the latter (in the sense formally defined in Section 2 based on the work of Baget [1]). Furthermore, it should be noted that such rule permutations preserve the greediness of derivations. In the context of the above example, \(\rho_{2}\) may be permuted before \(\rho_{1}\) in \(\delta_{1}\) because the former does not depend on the latter. Lemma 1 (Permutation Lemma): _Let \(\mathcal{R}\) be a rule set with \(\mathcal{I}_{0}\) an instance. 
Suppose we have a (greedy) \(\mathcal{R}\)-derivation of the following form:_ \[\mathcal{I}_{0},\ldots,(\rho_{i},h_{i},\mathcal{I}_{i}),(\rho_{i+1},h_{i+1}, \mathcal{I}_{i+1}),\ldots,(\rho_{n},h_{n},\mathcal{I}_{n})\] _If \(\rho_{i+1}\) does not depend on \(\rho_{i}\), then the following is a (greedy) \(\mathcal{R}\)-derivation too:_ \[\mathcal{I}_{0},\ldots,(\rho_{i+1},h_{i+1},\mathcal{I}_{i-1}\cup(\mathcal{I}_ {i+1}\setminus\mathcal{I}_{i})),(\rho_{i},h_{i},\mathcal{I}_{i+1}),\ldots,( \rho_{n},h_{n},\mathcal{I}_{n}).\] As a consequence of the above lemma, rules may always be permuted in a given \(\mathcal{R}\)-derivation so that its structure mirrors the graph of rule dependencies \(G(\mathcal{R})\) (defined in Section 2). That is, given a rule set \(\mathcal{R}\) and an \(\mathcal{R}\)-derivation \(\delta\), we may permute all applications of rules serving as sources in \(G(\mathcal{R})\) (which do not depend on any rules in \(\mathcal{R}\)) to the beginning of \(\delta\), followed by all rule applications that depend only on sources, and so forth, with any applications of rules serving as sinks in \(G(\mathcal{R})\) concluding the derivation. For example, in the graph of rule dependencies of \(\mathcal{R}_{2}\), the rules \(\rho_{1}\), \(\rho_{2}\), and \(\rho_{3}\) serve as source nodes (they do not depend on any rules in \(\mathcal{R}_{2}\)) and the rule \(\rho_{4}\) is a sink node depending on each of the aforementioned three rules, i.e. \(G(\mathcal{R}_{2})=(V,E)\) with \(V=\{\rho_{1},\rho_{2},\rho_{3},\rho_{4}\}\) and \(E=\{(\rho_{i},\rho_{4})\ |\ 1\leq i\leq 3\}\). Hence, in any given \(\mathcal{R}_{2}\)-derivation \(\delta\), any application of \(\rho_{1}\), \(\rho_{2}\), or \(\rho_{3}\) can be permuted backward (toward the beginning of \(\delta\)) and any application of \(\rho_{4}\) can be permuted forward (toward the end of \(\delta\)). Beyond the use of rule permutations, we also transform \(\mathcal{R}_{2}\)-derivations by making use of rule replacements. In particular, observe that \(\mathit{head}(\rho_{3})\) and \(\mathit{body}(\rho_{3})\) correspond to conjunctions of \(\mathit{head}(\rho_{1})\) and \(\mathit{head}(\rho_{2})\), and \(\mathit{body}(\rho_{1})\) and \(\mathit{body}(\rho_{2})\), respectively. Thus, we can replace the first application of \(\rho_{1}\) and the succeeding application of \(\rho_{2}\) in \(\delta^{\prime}_{1}\) above by a single application of \(\rho_{3}\), thus yielding the \(\mathcal{R}_{2}\)-derivation \(\delta^{\prime\prime}_{1}=\mathcal{D}_{\dagger},(\rho_{3},h,\mathcal{I}_{1} \cup(\mathcal{I}_{3}\setminus\mathcal{I}_{2})),(\rho_{1},h_{2},\mathcal{I}_{3} ),(\rho_{4},h_{4},\mathcal{I}_{4})\), where \(h(x)=a\) and \(h(y)=b\). Interestingly, inspecting the above \(\mathcal{R}_{2}\)-derivation, one will find that it is identical to the greedy \(\mathcal{R}_{2}\)-derivation \(\delta_{2}\) defined earlier in the section, and so, we have shown how to take a non-greedy \(\mathcal{R}_{2}\)-derivation (viz. \(\delta_{1}\)) and transform it into a greedy \(\mathcal{R}_{2}\)-derivation (viz. \(\delta_{2}\)) by means of rule permutations and replacements. In the same way, one can prove in general that any non-greedy \(\mathcal{R}_{2}\)-derivation can be transformed into a greedy \(\mathcal{R}_{2}\)-derivation, thus giving rise to the following theorem, and demonstrating that \(\mathcal{R}_{2}\) is indeed **wgbts**. Theorem 3.1: \(\mathcal{R}_{2}\) _is_ **wgbts**_, but not_ **gbts**_. 
Thus,_ **wgbts** _properly subsumes_ **gbts**_._ ## 4 Derivation Graphs We now discuss _derivation graphs_ - a concept introduced by Baget et al. [5] and used to establish that certain classes of rule sets (e.g. weakly frontier guarded rule sets [6]) exhibit universal models of bounded treewidth. A derivation graph has the structure of a directed acyclic graph and encodes _how_ atoms are derived throughout the course of an \(\mathcal{R}\)-derivation. By applying so-called _reduction operations_, a derivation graph may (under certain conditions) be transformed into a treelike graph that serves as a tree decomposition of an \(\mathcal{R}\)-derivable instance. Below, we define derivation graphs and discuss how such graphs are transformed into tree decompositions by means of reduction operations. To increase comprehensibility, we provide an example of a derivation graph (shown in Figure 2) and give an example of applying each reduction operation (shown in Figure 3). After, we identify two (query-decidable) classes of rule sets on the basis of derivation graphs, namely, _cycle-free derivation graph sets_ (**cdgs**) and _weakly cycle-free derivation graph sets_ (**wcdgs**). Despite their prima facie distinctness, the **cdgs** and **wcdgs** classes coincide with **gbts** and **wgbts** classes, respectively, thus showing how the latter classes can be characterized in terms of derivation graphs. Let us now formally define derivation graphs, and after, we will demonstrate the concept by means of an example. Definition 3 (Derivation Graph): Let \(\mathcal{D}\) be a database, \(\mathcal{R}\) be a rule set, \(C=\mathbf{Con}(\mathcal{D},\mathcal{R})\), and \(\delta\) be some \(\mathcal{R}\)-derivation \(\mathcal{D},(\rho_{1},h_{1},\mathcal{I}_{1}),\ldots,(\rho_{n},h_{n},\mathcal{I }_{n})\). The derivation graph of \(\delta\) is the tuple \(G_{\delta}:=(\mathrm{V},\mathrm{E},\mathrm{At},\mathrm{L})\), where \(\mathrm{V}:=\{X_{0},\ldots,X_{n}\}\) is a finite set of _nodes_, \(\mathrm{E}\subseteq\mathrm{V}\times\mathrm{V}\) is a set of _arcs_, and the functions \(\mathrm{At}:\mathrm{V}\to 2^{\mathcal{I}_{n}}\) and \(\mathrm{L}:\mathrm{E}\to 2^{\mathbf{Ter}(\mathcal{I}_{n})}\) decorate nodes and arcs, respectively, such that: 1. \(\mathrm{At}(X_{0}):=\mathcal{D}\) and \(\mathrm{At}(X_{i})=\mathcal{I}_{i}\setminus\mathcal{I}_{i-1}\); 2. \((X_{i},X_{j})\in\mathrm{E}\) iff there is a \(p(\mathbf{t})\in\mathrm{At}(X_{i})\) and a frontier atom \(p(\mathbf{t}^{\prime})\) in \(\rho_{j}\) such that \(h_{j}(p(\mathbf{t}^{\prime}))=p(\mathbf{t})\). We then set \(\mathrm{L}(X_{i},X_{j})=\left(h_{j}\big{(}\mathbf{Var}(p(\mathbf{t}^{\prime})) \cap\text{fr}(\rho_{j})\big{)}\right)\setminus C\). We refer to \(X_{0}\) as the _initial node_ and define the set of _non-constant terms_ associated with a node to be \(\overline{C}(X)=\mathbf{Ter}(X)\setminus C\) where \(\mathbf{Ter}(X_{i}):=\mathbf{Ter}(\mathrm{At}(X_{i}))\cup C\). 
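A direct, naive reading of Definition 3 can be sketched as follows (same tuple-based encoding as in the earlier sketches; where the definition leaves the label of an arc induced by several frontier atoms implicit, we take the union).

```python
# Sketch of derivation-graph construction (Definition 3).  A derivation is a database D
# plus steps (rule_j, h_j, new_atoms_j); node X_0 holds D and node X_j the atoms of step j.

def is_var(t): return t.startswith("?")

def derivation_graph(D, steps, kb_constants):
    At = {0: set(D)}
    for j, (_, _, new_atoms) in enumerate(steps, start=1):
        At[j] = set(new_atoms)
    C = set(kb_constants)
    E, L = set(), {}
    for j, (rule_j, h_j, _) in enumerate(steps, start=1):
        body, head = rule_j
        fr = ({t for (_, a) in body for t in a if is_var(t)}
              & {t for (_, a) in head for t in a if is_var(t)})
        for (p, args) in body:
            if not any(t in fr for t in args):
                continue                           # not a frontier atom of rule_j
            image = (p, tuple(h_j.get(t, t) for t in args))
            for i in range(j):                     # h_j maps the body into I_{j-1}
                if image in At[i]:
                    E.add((i, j))
                    L[(i, j)] = L.get((i, j), set()) | ({h_j[t] for t in args if t in fr} - C)
    return At, E, L
```

Running this sketch on the derivation of the worked example in the next paragraph should reproduce the graph of Figure 2, with, e.g., the arc \((X_{3},X_{4})\) labelled \(\{z_{1}\}\).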
Toward an example, assume \(\mathcal{D}_{\ddagger}=\{p(a,b)\}\) and \(\mathcal{R}_{3}=\{\rho_{1},\rho_{2},\rho_{3},\rho_{4}\}\) where \[\begin{array}{rl}\rho_{1}=&p(x,y)\to\exists z.q(y,z)\\ \rho_{2}=&q(x,y)\to\exists z.(r(x,y)\wedge r(y,z))\end{array}\qquad\begin{array}{rl}\rho_{3}=&r(x,y)\wedge q(z,x)\to s(x,y)\\ \rho_{4}=&r(x,y)\wedge s(z,w)\to t(y,w)\end{array}\] Let us consider the following derivation: \[\delta=\mathcal{D}_{\ddagger},(\rho_{1},h_{1},\mathcal{I}_{1}),(\rho_{2},h_{2 },\mathcal{I}_{2}),(\rho_{3},h_{3},\mathcal{I}_{3}),(\rho_{4},h_{4},\mathcal{I }_{4})\ \ \text{with}\] \[\mathcal{I}_{4}=\{\underbrace{p(a,b)}_{\mathcal{D}_{\ddagger}},\underbrace{q (b,z_{0})}_{\mathcal{I}_{1}\setminus\mathcal{D}_{\ddagger}},\underbrace{r(b, z_{0}),r(z_{0},z_{1})}_{\mathcal{I}_{2}\setminus\mathcal{I}_{1}},\underbrace{s(z_{0},z_{1})}_{ \mathcal{I}_{3}\setminus\mathcal{I}_{2}},\underbrace{t(z_{0},z_{1})}_{ \mathcal{I}_{4}\setminus\mathcal{I}_{3}}\}\quad\text{and}\] \(h_{1}=\{x{\mapsto}a,y{\mapsto}b\}\), \(h_{2}=\{x{\mapsto}b,y{\mapsto}z_{0}\}\), \(h_{3}=\{x{\mapsto}z_{0},y{\mapsto}z_{1},z{\mapsto}b\}\), as well as \(h_{4}=\{x{\mapsto}b,y{\mapsto}z_{0},z{\mapsto}z_{0},w{\mapsto}z_{1}\}\). The derivation graph \(G_{\delta}=(\mathrm{V},\mathrm{E},\mathrm{At},\mathrm{L})\) corresponding to \(\delta\) is shown in Figure 2 and has five nodes, \(\mathrm{V}=\{X_{0},X_{1},X_{2},X_{3},X_{4}\}\). Each node \(X_{i}\in\mathrm{V}\) is associated with a set \(\mathrm{At}(X_{i})\) of atoms depicted in the associated circle (e.g. \(\mathrm{At}(X_{2})=\{r(b,z_{0}),r(z_{0},z_{1})\}\)), and each arc \((X_{i},X_{j})\in\mathrm{E}\) is represented as a directed arrow with \(\mathrm{L}(X_{i},X_{j})\) shown as the associated set of terms (e.g. \(\mathrm{L}(X_{3},X_{4})=\{z_{1}\}\)). For each node \(X_{i}\in\mathrm{V}\), the set \(\mathbf{Ter}(X_{i})\) of terms associated with the node is equal to \(\mathbf{Ter}(\mathrm{At}(X_{i}))\cup\{a,b\}\) (e.g. \(\mathbf{Ter}(X_{3})=\{z_{0},z_{1},a,b\}\)) since \(C=\mathbf{Con}(\mathcal{D}_{\ddagger},\mathcal{R}_{3})=\{a,b\}\). Figure 2: The derivation graph \(G_{\delta}\). As can be witnessed via the above example, derivation graphs satisfy a set of properties akin to those characterizing tree decompositions [5, Proposition 12]. Lemma 2 (Decomposition Properties): _Let \(\mathcal{D}\) be a database, \(\mathcal{R}\) be a rule set, and \(C=\mathbf{Con}(\mathcal{D},\mathcal{R})\). If \(\mathcal{D}\xrightarrow{\mathcal{R}}_{\delta}\mathcal{I}\), then \(G_{\delta}\) satisfies the following properties:_ 1. \(\bigcup_{X_{n}\in\mathrm{V}}\mathbf{Ter}(X_{n})=\mathbf{Ter}(\mathcal{I})\)_;_ 2. _For each_ \(p(\mathbf{t})\in\mathcal{I}\)_, there is an_ \(X_{n}\in\mathrm{V}\) _such that_ \(p(\mathbf{t})\in\mathrm{At}(X_{n})\)_;_ 3. _For each term_ \(x\in\overline{C}(\mathcal{I})\)_, the subgraph of_ \(G_{\delta}\) _induced by the nodes_ \(X_{n}\) _such that_ \(x\in\overline{C}(X_{n})\) _is connected;_ 4. _For each_ \(X_{n}\in\mathrm{V}\)_, the size of_ \(\mathbf{Ter}(X_{n})\) _is bounded by an integer that only depends on the size of_ \((\mathcal{D},\mathcal{R})\)_, viz._ \(\max\{|\mathbf{Ter}(\mathcal{D})|,|\mathbf{Ter}(\mathit{head}(\rho_{i}))|_{ \rho_{i}\in\mathcal{R}}\}+|C|\)_._ Let us now introduce our set of _reduction operations_. As remarked above, in certain circumstances such operations can be used to transform derivation graphs into tree decompositions of an instance.
We make use of three reduction operations, namely, (i) _arc removal_, denoted \((\mathsf{ar})^{[i,j]}\), (ii) _term removal_, denoted \((\mathsf{tr})^{[i,j,k,t]}\), and (iii) _cycle removal_, denoted \((\mathsf{cr})^{[i,j,k,\ell]}\). The first two reduction operations were already proposed by Baget et al. [5] (they presented \((\mathsf{tr})\) and \((\mathsf{ar})\) as a single operation called _redundant arc removal_), whereas cycle removal is introduced by us as a new operation that will assist us in characterizing **gbts** and **wgbts** in terms of derivation graphs.2 Footnote 2: Beyond \((\mathsf{tr})\) and \((\mathsf{ar})\), we note that Baget et al. [5] introduced an additional reduction operation, referred to as _arc contraction_. We do not consider this rule here however as it is unnecessary to characterize **gbts** and **wgbts** in terms of derivation graphs and prima facie obstructs the proof of Theorem 2. Definition 4 (Reduction Operations): _Let \(\mathcal{D}\) be a database, \(\mathcal{R}\) be a rule set, \(\mathcal{D}\mathop{\longrightarrow}\limits_{\mathcal{R}}\mathcal{I}_{n}\), and \(G_{\delta}\) be the derivation graph of \(\delta\). We define the set \(\mathsf{RO}\) of reduction operations as \(\{(\mathsf{ar})^{[i,j]},(\mathsf{tr})^{[i,j,k,t]},(\mathsf{cr})^{[i,j,k,\ell]} \mid i,j,k,\ell\leq n,\,t\in\mathbf{Ter}(\mathcal{I}_{n})\},\) whose effect is further specified below. We let \((\mathsf{r})\mathsf{\Sigma}(G_{\delta})\) denote the output of applying the operation \((\mathsf{r})\) to the (potentially reduced) derivation graph \(\mathsf{\Sigma}(G_{\delta})=(\mathrm{V},\mathrm{E},\mathrm{At},\mathrm{L})\), where \(\mathsf{\Sigma}\in\mathsf{RO}^{*}\) is a reduction sequence, that is, \(\mathsf{\Sigma}\) is a (potentially empty) sequence of reduction operations._ 1. _Arc Removal_ \((\mathsf{ar})^{[i,j]}\)_: Whenever_ \((X_{i},X_{j})\in\mathrm{E}\) _and_ \(\mathrm{L}(X_{i},X_{j})=\emptyset\)_, then_ \((\mathsf{ar})^{[i,j]}\mathsf{\Sigma}(G_{\delta}):=(\mathrm{V},\mathrm{E}^{ \prime},\mathrm{At},\mathrm{L}^{\prime})\) _where_ \(\mathrm{E}^{\prime}:=\mathrm{E}\setminus\{(X_{i},X_{j})\}\) _and_ \(\mathrm{L}^{\prime}=\mathrm{L}\upharpoonright\mathrm{E}^{\prime}\)_._ 2. _Term Removal_ \((\mathsf{tr})^{[i,j,k,t]}\)_: If_ \((X_{i},X_{k}),(X_{j},X_{k})\in\mathrm{E}\) _with_ \(X_{i}\neq X_{j}\) _and_ \(t\in\mathrm{L}(X_{i},X_{k})\cap\mathrm{L}(X_{j},X_{k})\)_, then_ \((\mathsf{tr})^{[i,j,k,t]}\mathsf{\Sigma}(G_{\delta}):=(\mathrm{V},\mathrm{E},\mathrm{At},\mathrm{L}^{\prime})\) _where_ \(\mathrm{L}^{\prime}\) _is obtained from_ \(\mathrm{L}\) _by removing_ \(t\) _from_ \(\mathrm{L}(X_{j},X_{k})\)_._ 3. 
_Cycle Removal_ \((\mathsf{cr})^{[i,j,k,\ell]}\)_: If_ \((X_{i},X_{k}),(X_{j},X_{k})\in\mathrm{E}\) _and there exists a node_ \(X_{\ell}\in\mathrm{V}\) _with_ \(\ell<k\) _such that_ \(\mathrm{L}(X_{i},X_{k})\cup\mathrm{L}(X_{j},X_{k})\subseteq\mathbf{Ter}(X_{ \ell})\) _then,_ \((\mathsf{cr})^{[i,j,k,\ell]}\mathsf{\Sigma}(G_{\delta}):=(\mathrm{V},\mathrm{E} ^{\prime},\mathrm{At},\mathrm{L}^{\prime})\) _where_ \[\mathrm{E}^{\prime}:=\left(\mathrm{E}\setminus\{(X_{i},X_{k}),(X_{j},X_{k})\} \right)\cup\{(X_{\ell},X_{k})\}\] _and_ \(\mathrm{L}^{\prime}\) _is obtained from_ \(\mathrm{L}\upharpoonright\mathrm{E}^{\prime}\) _by setting_ \(L(X_{\ell},X_{k})\) _to_ \(\mathrm{L}(X_{i},X_{k})\cup\mathrm{L}(X_{j},X_{k})\)_._ Last, we say that a reduction sequence \(\Sigma\in\mathsf{RO}^{*}\) is a _complete reduction sequence relative to a derivation graph \(G_{\delta}\) iff \(\Sigma(G_{\delta})\) is cycle-free._ Remark 2: When there is no danger of confusion, we will take the liberty to write (\(\mathsf{tr}\)), (\(\mathsf{ar}\)), and (\(\mathsf{cr}\)) without superscript parameters. That is, given a derivation graph \(G_{\delta}\), the (reduced) derivation graph \((\mathsf{cr})(\mathsf{tr})(G_{\delta})\) is obtained by applying an instance of (\(\mathsf{tr}\)) followed by an instance of (\(\mathsf{cr}\)) to \(G_{\delta}\). When applying a reduction operation we always explain _how_ it is applied, so the exact operation is known. We now describe the functionality of each reduction operation and illustrate each by means of an example. We will apply each to transform the derivation graph \(G_{\delta}\) (shown in Figure 2) into a tree decomposition of \(\mathcal{I}_{4}\) (which was defined above). The (\(\mathsf{tr}\)) operation deletes a term \(t\) within the intersection of the sets labeling two converging arcs. For example, we may apply (\(\mathsf{tr}\)) to the derivation graph \(G_{\delta}\) from Figure 2, deleting the term \(z_{0}\) from the label of the arc \((X_{1},X_{3})\), and yielding the reduced derivation graph \((\mathsf{tr})(G_{\delta})\), which is shown first in Figure 3. We may then apply (\(\mathsf{ar}\)) to (\(\mathsf{tr}\))(\(G_{\delta}\)), deleting the arc \((X_{1},X_{3})\), which is labeled with the empty set, to obtain the reduced derivation graph \((\mathsf{ar})(\mathsf{tr})(G_{\delta})\) shown middle in Figure 3. The (\(\mathsf{cr}\)) operation is more complex and works by considering two converging arcs \((X_{i},X_{k})\) and \((X_{j},X_{k})\) in a (reduced) derivation graph. If there exists a node \(X_{\ell}\) whose index \(\ell\) is less than the index \(k\) of the child node \(X_{k}\) and \(\mathrm{L}(X_{i},X_{k})\cup\mathrm{L}(X_{j},X_{k})\subseteq\mathbf{Ter}(X_{ \ell})\), then the converging arcs \((X_{i},X_{k})\) and \((X_{j},X_{k})\) may be deleted and the arc \((X_{\ell},X_{k})\) introduced and labeled with \(\mathrm{L}(X_{i},X_{k})\cup\mathrm{L}(X_{j},X_{k})\). 
As an example, the reduced derivation graph \((\mathsf{cr})(\mathsf{ar})(\mathsf{tr})(G_{\delta})\) (shown third in Figure 3) is obtained from \((\mathsf{ar})(\mathsf{tr})(G_{\delta})\) (shown middle in Figure 3) by applying (\(\mathsf{cr}\)) in the following manner to the convergent arcs \((X_{2},X_{4})\) and \((X_{3},X_{4})\): since for \(X_{2}\) (whose index 2 is less than the index 4 of \(X_{4}\)) we have \(\mathrm{L}(X_{2},X_{4})\cup\mathrm{L}(X_{3},X_{4})\subseteq\mathbf{Ter}(X_{2})\), we may delete the arcs \((X_{2},X_{4})\) and \((X_{3},X_{4})\) and introduce the arc \((X_{2},X_{4})\) labeled with \(\mathrm{L}(X_{2},X_{4})\cup\mathrm{L}(X_{3},X_{4})=\{z_{0}\}\cup\{z_{1}\}=\{z_{0},z_{1}\}\). Observe that the reduced derivation graph \((\mathsf{cr})(\mathsf{ar})(\mathsf{tr})(G_{\delta})\) is free of cycles, witnessing that \(\Sigma=(\mathsf{cr})(\mathsf{ar})(\mathsf{tr})\) is a complete reduction sequence relative to \(G_{\delta}\). Moreover, if we replace each node by the set of its terms and disregard the labels on arcs, then \(\Sigma(G_{\delta})\) can be read as a tree decomposition of \(\mathcal{I}_{4}\). In fact, one can show that every reduced derivation graph satisfies the decomposition properties mentioned in Lemma 2 above. Lemma 3: _Let \(\mathcal{D}\) be a database and \(\mathcal{R}\) be a rule set. If \(\mathcal{D}\xrightarrow{\mathcal{R}}_{\delta}\mathcal{I}\), then for any reduction sequence \(\Sigma\), \(\Sigma(G_{\delta})=(\mathrm{V},\mathrm{E},\mathrm{At},\mathrm{L})\) satisfies the decomposition properties 1-4 in Lemma 2._ As illustrated above, derivation graphs can be used to derive tree decompositions of \(\mathcal{R}\)-derivable instances. By the fourth decomposition property (see Lemma 2 above), the width of such a tree decomposition is bounded by a constant that depends only on the given knowledge base. Thus, if a rule set \(\mathcal{R}\) always yields derivation graphs that are reducible to _cycle-free_ graphs - meaning that (un)directed cycles do not occur within the graph - then all \(\mathcal{R}\)-derivable instances have tree decompositions whose width is uniformly bounded by a constant. This establishes that the rule set \(\mathcal{R}\) falls within the **bts** class, confirming that query entailment is decidable with \(\mathcal{R}\). We define two classes of rule sets by means of reducible derivation graphs: Definition 5 ((Weakly) Cycle-free Derivation Graph Set): A rule set \(\mathcal{R}\) is a _cycle-free derivation graph set_ (\(\mathbf{cdgs}\)) iff whenever \(\mathcal{D}\xrightarrow{\mathcal{R}}_{\delta}\mathcal{I}\) for a database \(\mathcal{D}\), the derivation graph \(G_{\delta}\) can be reduced to a cycle-free graph by the reduction operations. \(\mathcal{R}\) is a _weakly cycle-free derivation graph set_ (\(\mathbf{wcdgs}\)) iff whenever \(\mathcal{D}\xrightarrow{\mathcal{R}}_{\delta}\mathcal{I}\), there is a derivation \(\delta^{\prime}\) with \(\mathcal{D}\xrightarrow{\mathcal{R}}_{\delta^{\prime}}\mathcal{I}\) such that \(G_{\delta^{\prime}}\) can be reduced to a cycle-free graph by the reduction operations. It is straightforward to confirm that \(\mathbf{wcdgs}\) subsumes \(\mathbf{cdgs}\), and that both classes are subsumed by **bts**. Proposition 1: _Every \(\mathbf{cdgs}\) rule set is \(\mathbf{wcdgs}\) and every \(\mathbf{wcdgs}\) rule set is \(\mathbf{bts}\)._ Furthermore, as mentioned above, \(\mathbf{gbts}\) and \(\mathbf{wgbts}\) coincide with \(\mathbf{cdgs}\) and \(\mathbf{wcdgs}\), respectively.
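For intuition only, the following rough sketch applies the reduction operations to a graph in the encoding of the earlier sketches (nodes as step indices, `Ter` mapping nodes to term sets, arc labels in `L`). It applies (\(\mathsf{tr}\)) and (\(\mathsf{ar}\)) exhaustively and (\(\mathsf{cr}\)) greedily, which is one particular strategy rather than a complete search over reduction sequences, and then tests whether the result is free of (un)directed cycles.

```python
# Rough sketch of the reduction operations (Definition 4) and a cycle-freeness test.
# E: set of arcs (i, j); L: arc -> set of term labels; Ter: node -> set of terms.

def reduce_graph(E, L, Ter):
    changed = True
    while changed:
        changed = False
        for (i, k) in list(E):                      # (tr): drop shared labels on converging arcs
            for (j, k2) in list(E):
                if k2 == k and i != j and (i, k) in E and (j, k) in E:
                    shared = L[(i, k)] & L[(j, k)]
                    if shared:
                        L[(j, k)] -= shared
                        changed = True
        for arc in [a for a in E if not L[a]]:      # (ar): remove arcs with empty labels
            E.discard(arc); L.pop(arc)
            changed = True
        for (i, k) in list(E):                      # (cr): reroute converging arcs via an earlier node
            for (j, k2) in list(E):
                if k2 != k or (i, k) == (j, k2) or (i, k) not in E or (j, k) not in E:
                    continue
                lbl = L[(i, k)] | L[(j, k)]
                for l in sorted(Ter):
                    if l < k and lbl <= Ter[l]:
                        E -= {(i, k), (j, k)}; L.pop((i, k), None); L.pop((j, k), None)
                        E.add((l, k)); L[(l, k)] = L.get((l, k), set()) | lbl
                        changed = True
                        break
                if changed:
                    break
            if changed:
                break
    return E, L

def is_cycle_free(V, E):
    """No (un)directed cycles: the underlying undirected graph must be a forest."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]; v = parent[v]
        return v
    for (i, j) in E:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False
        parent[ri] = rj
    return True
```

Iteration order makes this strategy nondeterministic, so on the running example it produces a cycle-free reduction analogous to, though not necessarily identical with, the sequence \((\mathsf{cr})(\mathsf{ar})(\mathsf{tr})\) described above.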
By making use of the \((\mathsf{cr})\) operation, one can show that the derivation graph of any greedy derivation is reducible to a cycle-free graph, thus establishing that \(\mathbf{gbts}\subseteq\mathbf{cdgs}\) and \(\mathbf{wgbts}\subseteq\mathbf{wcdgs}\). Showing the converse (i.e. that \(\mathbf{cdgs}\subseteq\mathbf{gbts}\) and \(\mathbf{wcdgs}\subseteq\mathbf{wgbts}\)), however, requires more work. In essence, one shows that for every (non-source) node \(X_{i}\) in a cycle-free (reduced) derivation graph there exists another node \(X_{j}\) such that \(j<i\) and the frontier of the atoms in \(\mathrm{At}(X_{i})\) consists only of constants and/or nulls introduced by the atoms in \(\mathrm{At}(X_{j})\). This property is preserved under _reverse_ applications of the reduction operations, and thus, one can show that if a derivation graph is reducible to a cycle-free graph, then the above property holds for the original derivation graph, implying that the derivation graph encodes a greedy derivation. Based on such arguments, one can prove the following: Theorem 5.2: \(\mathbf{gbts}\) _coincides with \(\mathbf{cdgs}\) and \(\mathbf{wgbts}\) coincides with \(\mathbf{wcdgs}\). Membership in \(\mathbf{cdgs}\), \(\mathbf{gbts}\), \(\mathbf{wcdgs}\), or \(\mathbf{wgbts}\) warrants decidable BCQ entailment._ Note that by Theorem 3.1, this also implies that \(\mathbf{wcdgs}\) properly contains \(\mathbf{cdgs}\). An interesting consequence of the above theorem concerns the redundancy of \((\mathsf{ar})\) and \((\mathsf{tr})\) in the presence of \((\mathsf{cr})\). In particular, since we know that (i) if a derivation graph can be reduced to a cycle-free graph, then the derivation graph encodes a greedy derivation, and (ii) the derivation graph of any greedy derivation can be reduced to a cycle-free graph by means of applying the (\(\mathsf{cr}\)) operation only, it follows that if a derivation graph can be reduced to a cycle-free graph, then it can be reduced by only applying the (\(\mathsf{cr}\)) operation. We refer to this phenomenon as _reduction-admissibility_, which is defined below. Definition 6 (Reduction-admissible): Suppose \(S_{1}=\{(\mathsf{r}_{i})\ |\ 1\leq i\leq n\}\) and \(S_{2}=\{(\mathsf{r}_{j})\ |\ n+1\leq j\leq k\}\) are two sets of reduction operations. We say that \(S_{1}\) is _reduction-admissible_ relative to \(S_{2}\) iff for any rule set \(\mathcal{R}\) and \(\mathcal{R}\)-derivation \(\delta\), if \(G_{\delta}\) is reducible to a cycle-free graph with \(S_{1}\cup S_{2}\), then \(G_{\delta}\) is reducible to a cycle-free graph with just \(S_{2}\). Corollary 1: \(\{(\mathsf{tr}),(\mathsf{ar})\}\) _is reduction-admissible relative to \((\mathsf{cr})\)._ ## 5 Conclusion In this paper, we revisited the concept of a _greedy_ derivation, which immediately gives rise to a bounded-width tree decomposition of the constructed instance. This well-established notion allows us to categorize rule sets as being _(weakly) greedy bounded treewidth sets_ (\((\mathbf{w})\mathbf{gbts}\)), if all (some) derivations of a derivable instance are guaranteed to be greedy, irrespective of the underlying database. By virtue of being subsumed by \(\mathbf{bts}\), these classes warrant decidability of BCQ entailment, while at the same time subsuming various popular rule languages, in particular from the guarded family. By means of an example together with a proof-theoretic argument, we exposed that \(\mathbf{wgbts}\) strictly generalizes \(\mathbf{gbts}\).
In pursuit of a better understanding and more workable methods to detect and analyze \((\mathbf{w})\mathbf{gbts}\) rule sets, we resorted to the previously proposed notion of _derivation graphs_. Through a refinement of the set of reduction methods for derivation graphs, we were able to make more advanced use of this tool, leading to the definition of _(weakly) cycle-free derivation graph sets_ (\((\mathbf{w})\mathbf{cdgs}\)) of rules, of which we were then able to show the respective coincidences with \((\mathbf{w})\mathbf{gbts}\). This way, we were able to establish alternative characterizations of \(\mathbf{gbts}\) and \(\mathbf{wgbts}\) by means of derivation graphs. En passant, we found that the newly introduced _cycle removal_ reduction operation over derivation graphs is sufficient by itself and makes the other operations redundant. For future work, we plan to put our newly found characterizations to use. In particular, we aim to investigate if a rule set's membership in \(\mathbf{gbts}\) or \(\mathbf{wgbts}\) is decidable. For \(\mathbf{gbts}\), this has been widely conjectured, but never formally established. In the positive case, derivation graphs might also be leveraged to pinpoint the precise complexity of the membership problem. We are also confident that the tools and insights in this paper - partially revived, partially upgraded, partially newly developed - will prove useful in the greater area of static analysis of existential rule sets. On a general note, we feel that the field of proof theory has a lot to offer for knowledge representation, whereas the cross-fertilization between these disciplines still appears to be underdeveloped.
2309.00217
Topological chiral kagome lattice
Chirality, a fundamental structural property of crystals, can induce many unique topological quantum phenomena. In kagome lattice, unconventional transports have been reported under tantalizing chiral charge order. Here, we show how by deforming the kagome lattice to obtain a three-dimensional (3D) chiral kagome lattice in which the key band features of the non-chiral 2D kagome lattice - flat energy bands, van Hove singularities (VHSs), and degeneracies - remain robust in both the $k_z$ = 0 and $\pi$ planes in momentum space. Given the handedness of our kagome lattice, degenerate momentum points possess quantized Chern numbers, ushering in the realization of Weyl fermions. Our 3D chiral kagome lattice surprisingly exhibits 1D behavior on its surface, where topological surface Fermi arc states connecting Weyl fermions are dispersive in one momentum direction and flat in the other direction. These 1D Fermi arcs open up unique possibilities for generating unconventional non-local transport phenomena at the interfaces of domains with different handedness, and the associated enhanced conductance as the separation of the leads on the surface is increased. Employing first-principles calculations, we investigate in-depth the electronic and phononic structures of representative materials within the ten space groups that can support topological chiral kagome lattices. Our study opens a new research direction that integrates the advantages of structural chirality with those of a kagome lattice and thus provides a new materials platform for exploring unique aspects of correlated topological physics in chiral lattices.
Jing-Yang You, Xiaoting Zhou, Tao Hou, Mohammad Yahyavi, Yuanjun Jin, Yi-Chun Hung, Bahadur Singh, Chun Zhang, Jia-Xin Yin, Arun Bansil, Guoqing Chang
2023-09-01T02:31:55Z
http://arxiv.org/abs/2309.00217v1
# Topological chiral kagome lattice ###### Abstract Chirality, a fundamental structural property of crystals, can induce many unique topological quantum phenomena. In kagome lattice, unconventional transports have been reported under tantalizing chiral charge order. Here, we show how by deforming the kagome lattice to obtain a three-dimensional (3D) chiral kagome lattice in which the key band features of the non-chiral 2D kagome lattice - flat energy bands, van Hove singularities (VHSs), and degeneracies - remain robust in both the \(k_{z}=0\) and \(\pi\) planes in momentum space. Given the handedness of our kagome lattice, degenerate momentum points possess quantized Chern numbers, ushering in the realization of Weyl fermions. Our 3D chiral kagome lattice surprisingly exhibits 1D behavior on its surface, where topological surface Fermi arc states connecting Weyl fermions are dispersive in one momentum direction and flat in the other direction. These 1D Fermi arcs open up unique possibilities for generating unconventional non-local transport phenomena at the interfaces of domains with different handedness, and the associated enhanced conductance as the separation of the leads on the surface is increased. Employing first-principles calculations, we investigate in-depth the electronic and phononic structures of representative materials within the ten space groups that can support topological chiral kagome lattices. Our study opens a new research direction that integrates the advantages of structural chirality with those of a kagome lattice and thus provides a new materials platform for exploring unique aspects of correlated topological physics in chiral lattices. In the past decade, there has been a surge of research interest in topological materials characterized by non-trivial geometric phases [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Another parallel area of focus has revolved around correlated systems with strong interactions [15; 16; 17; 18; 19; 20]. The convergence of these two fields presents a cutting-edge frontier for unraveling the study of interacting topological phases of quantum matter [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. However, a challenge in advancing this field is that electron-electron correlations are generally negligible in many well-known topological materials. One effective strategy for enhancing electron-electron correlations involves introducing a high density of states in the vicinity of the Fermi level. Here the kagome lattice stands out as particularly promising candidate due to its unique electronic structures that features flat bands and van Hove singularities (VHSs). Although the initial focus was on two-dimensional (2D) lattices, recent work considers three-dimensional (3D) kagome systems in which the kagome layers coexist with other atomic layers in the unit cell [37; 38; 39; 40; 41; 42]. The focus has been a scenario in which the coupling along the stacking direction remains sufficiently weak so that the flat bands and VHSs of the kagome layers are preserved. Unfortunately, the complex interlayer couplings in these systems often lead to the disappearance of the distinctive kagome band characteristics. The high symmetry of the kagome lattice also limits its potential applications. Recall that the breaking of symmetries has dramatically contributed to the emergence of numerous topological phases of quantum matter. 
For instance, chiral crystals that break inversion and mirror symmetries have been found to exhibit unique topological properties, including multi-fold chiral fermions and related quantized nonlinear optical responses [43; 44; 45; 46; 47; 48; 49; 50; 51]. Kagome KV\({}_{3}\)Sb\({}_{5}\), with chiral charge order at low temperatures, has shown promise in realizing large orbital magnetization and unconventional superconductivity [52; 53; 29]. However, prior to inducing strong correlation effects, KV\({}_{3}\)Sb\({}_{5}\) exhibits non-chiral behavior at high temperatures. Novel correlated topological phenomena can thus be expected to arise from symmetry breaking in chiral crystals [54; 55; 56; 57; 58; 59]. The 2D kagome lattice comprises three atoms in the primitive cell, represented by red, green, and blue spheres in Fig. 1(a). The band structure of a single-orbital kagome model with only the nearest-neighbor (NN) hoppings exhibits a flat band, VHSs, and Dirac degeneracies, where one branch of the dispersive Dirac bands intersects the flat band at the \(\Gamma\) point [39]. This is attributed to the wave function localization that forms compact localized states within the hexagon due to destructive interference. To extend this model to 3D, we can maintain the NN hopping but translate two atoms in the 2D kagome lattice along the \(z\) axis (out-of-plane direction) by \(c/3\) and \(2c/3\), respectively, as depicted in Fig. 1(b). In this configuration, it can be inferred that the system should preserve the characteristic kagome band features. For the 2D kagome lattice, the green and blue atoms can be linked through mirror symmetry (\(\mathcal{M}\)). However, this symmetry is naturally broken when translating the green and blue atoms along the \(z\) axis by different distances. Furthermore, unlike the 2D kagome lattice with inversion symmetry, the inversion centers for the three atoms in the 3D kagome lattice are not at the same height, thus resulting in inversion-symmetry breaking. The 3D kagome lattice lacks both mirror and inversion symmetries, classifying it as a chiral lattice possessing the screw symmetry \(S_{6/3}\), the combination of a six/three-fold rotation \(C_{6/3}\) and a one-third fractional translation along the \(z\) axis. We now undertake an analytical verification to ascertain whether the kagome band characteristics remain preserved within the new lattice by constructing the Hamiltonian including only the NN hopping, whose Bloch matrix elements between the three sublattices \(a,b\in\{1,2,3\}\) are \[H_{ab}(\mathbf{k})=t\sum_{\boldsymbol{\delta}_{ab}}e^{i\mathbf{k}\cdot\boldsymbol{\delta}_{ab}},\] where \(t\) is the NN hopping amplitude and the sum runs over the two NN bond vectors \(\boldsymbol{\delta}_{ab}\) connecting sublattices \(a\) and \(b\), each of which now carries an out-of-plane component of \(c/3\) inherited from the screw. Diagonalizing this Hamiltonian confirms that the flat bands, VHSs, and band degeneracies of the kagome lattice survive in both the \(k_{z}=0\) and \(k_{z}=\pi\) planes [Fig. 1(c)]. Owing to the handedness of the lattice, these degeneracies become Weyl points carrying quantized Chern numbers: we find a double Weyl point with \(|C|=2\) at the \(\Gamma\) point, another
at the \(A\) point and \(C=1\) at the \(K\) point [Fig. 2(c)]. Furthermore, we observe that one of the branches of the double Weyl cone at both \(\Gamma\) and \(A\) points comprises flat bands [Fig. 2(b), top panel]. Applying a vertical mirror operation to the right-handed 3D kagome lattice [Fig. 2(a)] results in the lattice with left-handedness [Fig. 2(d)], wherein the atoms rotate clockwise along the \(z\) axis. Interestingly, the electronic dispersions of the left-handed crystal exactly mirror those of the right-handed counterpart, with both structures representing ground states. However, a significant distinction arises in the flow of Berry curvature in momentum space, which undergoes a complete reversal due to the change in structural chirality in real space [Figs. 2(b),(e)]. This reversal, in turn, affects the sign of the Chern numbers associated with the massless Weyl fermions [Figs. 2(c),(f)]. Our analysis demonstrates that the topological Chern number in the kagome lattice is inherently dictated by its structural chirality. Consequently, by tuning the handedness of the lattice, one gains the ability to manipulate the Chern numbers and thus the related topological quantum responses, including the direction of photocurrents [43]. We now turn to discuss the unique topological surface responses of our 3D chiral kagome lattice. Typically, the surfaces of a 3D material exhibit 2D electronic structures. However, intriguingly, despite the inherent 3D nature of our chiral kagome lattice, its surfaces manifest distinct one-dimensional (1D) features. This is evident on the (001) surface of the lattice, wherein conducting electrons from the top layer are bound to the inner layer through NN hopping along a 1D chain [Fig. 3(a)]. Consequently, the emergence of 1D Fermi arc surface states is anticipated. Indeed, on the constant energy contours of the (001) surface, we observe the presence of two 1D Fermi arc surface states [Fig. 3(b), top panel]. This distinctive 1D nature of Fermi arcs also extends to other terminations, such as the (010) surface, signifying the general occurrence of 1D surface states in the 3D chiral kagome lattice [Fig. 3(b), bottom panel]. A detailed analysis of the energy dispersion of the Fermi arc surface states reveals that they exhibit dispersion in only one direction [Fig. 3(c), top panel], while remaining completely flat in the perpendicular direction [Fig. 3(c), bottom panel]. The surface states on the (001) plane also display dispersion in one direction while remaining flat in the orthogonal direction [Fig. S2]. This further substantiates the 1D nature of these topological Fermi arcs in our 3D chiral kagome lattice. The presence of 1D surface states characterized by parallel Fermi arcs suggests a pronounced Fermi nesting effect, hinting at potential strong correlation effects on the surface of the chiral kagome lattice. It is essential to recognize that while both the bulk and surface states can exhibit strong correlations, the underlying mechanisms driving these effects are fundamentally distinct. The correlation of bulk states originates from VHSs and flat bands, whereas for surface states, it arises from the specific 1D configurations on the surface. As a consequence, it is plausible that two independent correlation effects are induced, operating separately in the bulk and on the surface of the chiral kagome lattice. 
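As a quick numerical illustration of the bulk bands discussed above, the following sketch diagonalizes a three-band NN tight-binding model of the chiral kagome lattice. The bond vectors, hopping sign, and lattice constants are our own assumptions chosen to realize the geometry described in the text (sublattices at heights \(0\), \(c/3\), \(2c/3\)), not the authors' parameters; with these choices the flattest band is dispersionless in both the \(k_{z}=0\) and \(k_{z}=\pi\) planes, as stated above.

```python
# Tight-binding sketch of the 3D chiral kagome lattice with NN hopping only.
# Assumed conventions: in-plane lattice vectors a1, a2; sublattices A, B, C at
# heights 0, c/3, 2c/3; every NN bond therefore carries the same z-offset c/3.
import numpy as np

t, c = 1.0, 1.0
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
# in-plane NN half-bond vectors between sublattice pairs (A,B), (B,C), (C,A)
d_AB, d_BC, d_CA = a1 / 2, (a2 - a1) / 2, -a2 / 2
pairs = [(0, 1, d_AB), (1, 2, d_BC), (2, 0, d_CA)]

def bloch_hamiltonian(kx, ky, kz):
    kpar = np.array([kx, ky])
    phase = np.exp(1j * kz * c / 3)               # screw-induced phase shared by all NN bonds
    H = np.zeros((3, 3), dtype=complex)
    for i, j, d in pairs:
        H[i, j] = -2 * t * phase * np.cos(kpar @ d)   # two NN bonds at +/- d
    return H + H.conj().T

# Scan a cut through each plane and report how flat the flattest band is.
for kz in (0.0, np.pi):
    ks = np.linspace(-np.pi, np.pi, 101)
    bands = np.array([np.linalg.eigvalsh(bloch_hamiltonian(kx, 0.37, kz)) for kx in ks])
    flat = bands[:, np.argmin(bands.std(axis=0))]
    print(f"k_z = {kz:.2f}: flattest band spread = {np.ptp(flat):.2e}")
```

The Berry curvature and Chern numbers of the band crossings can in principle be extracted from the same matrix by standard Wilson-loop integration, but that goes beyond this minimal sketch.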
For instance, considering the CDW phenomenon as an illustration, the VHSs in bulk may lead to a 2\(\times\)2\(\times\)1 or 2\(\times\)2\(\times\)2 CDW order [Fig. 1(d)] [29; 32]. In contrast, on the surface, the \(\mathbf{Q}\)-vector characterizing the charge order could acquire a relatively arbitrary value, which is determined by the separation of Fermi arcs [56; 58; 59]. The distinct behaviors of bulk and surface correlations open new possibilities for correlated topological effects on the chiral kagome lattice. In addition to the potential strong correlations induced on the surface, the presence of 1D Fermi arcs in the 3D chiral kagome lattice offers opportunities for realizing exotic nonlocal transport phenomena. Unlike its 2D counterpart with inversion symmetry, the 3D chiral kagome lattice can readily form left- and right-handed chiral domains that are mirror symmetric on either side of the domain wall [Fig. 3(d)]. The connectivity of the Fermi arcs is also reversed on the two sides of the domain wall, as shown by the (010) surface states in Fig. 3(e). Figure 2: (a) Right-handed crystal structure, (b) 3D energy dispersion of the degeneracy at the \(\Gamma\) point, along with Berry curvature \(\Omega_{x}\) around the \(\Gamma\) points on the (001) plane with flow direction indicated, and (c) distribution of Weyl points in the Brillouin zone (BZ), where colored spheres with different sizes label Weyl points with different topological charges, and associated projected surface BZs for the 3D chiral kagome lattice with right-handedness. In (a), the upward arrow illustrates the evolution process of atoms moving from the plane along the \(c\) direction, while the spiral arrow projected onto the bottom surface indicates chirality. (d) Left-handed crystal structure, and its (e) Berry curvature \(\Omega_{x}\) around the \(\Gamma\) and \(A\) points, and (f) distribution of Weyl points for the 3D chiral kagome lattice with left-handedness. Consider the following example: when a current originates from the Fermi arc of the left domain and moves toward the right domain [Fig. 3(e)], it encounters a unique situation. Due to the inherent absence of reflection surface modes, this current cannot be reflected back. Additionally, it cannot pass through the domain wall to reach the (010) surface of the right domain, as the group velocity of states on the right domain opposes the electron's direction of motion. Consequently, the current can only be transmitted along the interface of the domain wall, entering the right domain from the surface on the other side [Fig. 3(f)]. The current transport on the (001) surface can exhibit negative reflection and refraction, as discussed in Fig. S3. In conventional diffusive electronics, transport adheres to Ohm's law, where the conductance decreases with increasing separation between two leads. However, in our proposed scenario, we expect the conductance to behave very differently, offering novel perspectives for designing materials with tailored transport properties and potential applications in advanced electronic devices. Surprisingly, we anticipate the conductance between Lead2 and Lead1 to be smaller than the conductance between Lead3 and Lead1, even though Leads1 and 3 are more widely separated from each other [Fig. 3(f)]. To verify this picture, we directly calculate the conductance of two devices of the same size for comparison [Figs. 3(g),(h)]: one device has a uniform chirality, while the other consists of two regions with opposite chirality [Fig. 3(d)].
In the 3D kagome device with uniform chirality, the conductance \(G_{21}\) is larger than \(G_{31}\) [Fig. 3(g)], indicating that the incoming current from Lead1 tends to partition more towards Lead2. This preference can be attributed to Lead2 being closer to Lead1 compared to Lead3. In contrast, in the device consisting of two chiralities, the conductance \(G_{31}\) is larger than \(G_{21}\) [Fig. 3(h)]. Our calculations indeed reveal the emergence of unconventional nonlocal transport effects in the 3D chiral kagome lattice because of the lattice's chirality and unique 1D Fermi arcs. These findings open up new avenues for engineering materials with intriguing transport properties, suggesting the potential for exciting applications in future electronic devices. The most basic symmetry operation in the 3D chiral kagome lattice is the screw symmetry \(S_{6}\) (or \(S_{3}\)). Space groups (SGs) that satisfy this symmetry include SGs 144, 145, 151-154, 171, 172, 180 and 181. By consulting the Inorganic Crystal Structure Database (ICSD) [60], we discover a large number of experimentally synthesized materials that host the 3D chiral kagome lattice with the 3D kagome band characteristics [Fig. 1(c)] in their electronic or phonon band structures, such as CeO\({}_{4}\), TiC\({}_{4}\)H\({}_{8}\)NO\({}_{10}\), BeF\({}_{2}\), BaCoO\({}_{2}\), BaZnO\({}_{2}\), GeO\({}_{2}\) and CuH\({}_{20}\)N\({}_{6}\)OF\({}_{2}\), InH\({}_{8}\)C\({}_{4}\)NO\({}_{10}\), SiO\({}_{2}\) and KScH\({}_{4}\)(C\({}_{2}\)O\({}_{5}\))\({}_{2}\), and Li(BH)\({}_{5}\), as shown in Figs. S4 and S5. Among these known materials, we find various types, including metals, insulators, nonmagnets, and ferromagnets. This diversity highlights the potential of 3D chiral kagome materials for a wide range of applications and investigations across different material types and properties. Finally, we focus on a representative material candidate for the 3D chiral kagome lattice, CePO\({}_{4}\), where the 3D chiral kagome lattice is composed of Ce atoms [Fig. 4(a)]. This material is a ferromagnetic semiconductor with a magnetic moment of about 1 \(\mu_{B}\) per Ce atom. Interestingly, only one spin species (spin up) is distributed near the Fermi level, while the other one remains far away from the Fermi level [Fig. 4(b)]. The three valence bands near the Fermi level in CePO\({}_{4}\) form the ideal 3D kagome bands, characterized by flat bands with a bandwidth of less than 3 meV on the \(k_{z}=0\) and \(\pi\) planes.

Figure 3: (a) Schematic diagram of 1D channels on the (001) surface, where the green arrows represent the 1D channels. (b) The (001) and (010) surface state spectral functions at 0.12 eV, where a path perpendicular to the Fermi arcs (\(k_{\mathrm{path1}}\)) and a path parallel to the Fermi arcs (\(k_{\mathrm{path2}}\)) are highlighted in (b). (c) The surface spectral functions along the specific \(k\) paths for the (010) surface. (d) The domain wall formed by different-handed structures that are mirror symmetric about the domain wall. (e) The current transport at the interface (domain wall) on the (010) plane. Solid curves are Fermi arcs at 0.12 eV, while dashed curves are Fermi arcs at a slightly higher energy. (f) Schematic diagram of nonlocal transport for finite-size materials and a three-terminal device, where the red arrows indicate the propagation process of the surface current. Lead1 and Lead2 are connected to the (010) plane, and Lead3 is connected to the (010) plane. The red and blue regions represent chirally opposite lattices. (g) and (h) The conductance of devices with uniform chirality and opposite chirality (corresponding to (f)) in the energy range of the (010) surface states (around 0.125 eV), respectively. \(G_{2(3)1}\) is the conductance between Lead1 and Lead2(3).
By introducing about 0.8 holes through doping, the Fermi level can be effectively tuned to align with the flat band. This can be achieved, for instance, by substituting part of the P atoms in CePO\({}_{4}\) with Si or C atoms. The phonon spectrum of CePO\({}_{4}\) further adds to its unique properties, revealing several groups of 3D chiral kagome bands in the frequency range of 10 to 18 THz [Fig. 4(c)]. It is worth noting that although our model only considers the NN hopping, the robust nature of the flat bands for electrons and phonons in real materials suggests that our model is highly compatible with the behavior exhibited by the real materials. Additionally, the surface states of CePO\({}_{4}\) exhibit 1D Fermi arcs [Fig. 4(d)], which is consistent with our model. This distinctive combination of electronic and phonon properties in CePO\({}_{4}\) positions it as a promising candidate for exploring novel quantum phenomena and potential applications in the realm of 3D chiral kagome materials. ## I Acknowledgement Work at Nanyang Technological University was supported by the National Research Foundation, Singapore under its Fellowship Award (NRF-NRFF13-2021-0010) and the Nanyang Assistant Professorship grant (NTUSUG). Work at the National University of Singapore was supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 3 Award MOE2018-T3-1-002. The work at Northeastern University was supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0322 and benefited from the computational resources of Northeastern University's Advanced Scientific Computation Center (ASCC) and the Discovery Cluster. The work at TIFR Mumbai was supported by the Department of Atomic Energy of the Government of India under Project No. 12-R&D-TFR-5.10-0100 and benefited from the computational resources of TIFR Mumbai. J-Y. Y., X. Z., and T. H. contributed equally to this work.
2310.02563
Practical, Private Assurance of the Value of Collaboration via Fully Homomorphic Encryption
Two parties wish to collaborate on their datasets. However, before they reveal their datasets to each other, the parties want to have the guarantee that the collaboration would be fruitful. We look at this problem from the point of view of machine learning, where one party is promised an improvement on its prediction model by incorporating data from the other party. The parties would only wish to collaborate further if the updated model shows an improvement in accuracy. Before this is ascertained, the two parties would not want to disclose their models and datasets. In this work, we construct an interactive protocol for this problem based on the fully homomorphic encryption scheme over the Torus (TFHE) and label differential privacy, where the underlying machine learning model is a neural network. Label differential privacy is used to ensure that computations are not done entirely in the encrypted domain, which is a significant bottleneck for neural network training according to the current state-of-the-art FHE implementations. We formally prove the security of our scheme assuming honest-but-curious parties, but where one party may not have any expertise in labelling its initial dataset. Experiments show that we can obtain the output, i.e., the accuracy of the updated model, with time many orders of magnitude faster than a protocol using entirely FHE operations.
Hassan Jameel Asghar, Zhigang Lu, Zhongrui Zhao, Dali Kaafar
2023-10-04T03:47:21Z
http://arxiv.org/abs/2310.02563v3
# Practical, Private Assurance of the Value of Collaboration via Fully Homomorphic Encryption ###### Abstract. Two parties wish to collaborate on their datasets. However, before they reveal their datasets to each other, the parties want to have the guarantee that the collaboration would be fruitful. We look at this problem from the point of view of machine learning, where one party is promised an improvement on its prediction model by incorporating data from the other party. The parties would only wish to collaborate further if the updated model shows an improvement in accuracy. Before this is ascertained, the two parties would not want to disclose their models and datasets. In this work, we construct an interactive protocol for this problem based on the fully homomorphic encryption scheme over the Torus (TFHE) and label differential privacy, where the underlying machine learning model is a neural network. Label differential privacy is used to ensure that computations are not done entirely in the encrypted domain, which is a significant bottleneck for neural network training according to the current state-of-the-art FHE implementations. We prove the security of our scheme in the universal composability framework assuming honest-but-curious parties, but where one party may not have any expertise in labelling its initial dataset. Experiments show that we can obtain the output, i.e., the accuracy of the updated model, with time many orders of magnitude faster than a protocol using entirely FHE operations. neural networks, differential privacy, homomorphic encryption

In our solution, we utilize the (encrypted) labels from party \(P_{2}\)'s dataset. If further computation is done homomorphically, then we would endure the same computational performance bottleneck as previous work. We, therefore, add (label) differentially private noise to the gradients and decrypt them before the backward pass. This ensures that most steps in the neural network training are done in cleartext, albeit with differentially private noise, giving us performance improvements over an end-to-end FHE solution. In what follows, we first formally show that without domain knowledge it is not possible for \(P_{2}\) to improve the model accuracy. We then describe our protocol in detail, followed by its privacy and security analysis. Finally, we evaluate the performance and accuracy of our protocol on multiple datasets. ## 2. Preliminaries and Threat Model ### Notation We follow the notations introduced in (Zhu et al., 2017). The datasets come from the joint domain: \(\mathbb{D}=\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) denotes the domain of features, and \(\mathcal{Y}\) denotes the domain of labels. A dataset \(D\) is a multiset of elements drawn i.i.d. from the domain under the joint distribution \(\mathcal{D}\) over domain points and labels. We shall denote by \(\mathcal{D}_{x}\) the marginal distribution of unlabelled domain points. In some cases, a dataset may be constructed by drawing unlabelled domain points under \(\mathcal{D}_{x}\), and then labelled according to some labelling function, which may not follow the marginal distribution of labels under \(\mathcal{D}\). In such a case, we shall say that the dataset is labelled by the labelling function to distinguish it from typical datasets. Let \(\mathcal{A}\) denote the learning algorithm, e.g., a neural network training algorithm. We shall denote by \(M\leftarrow\mathcal{A}(D)\) the model \(M\) returned by the learning algorithm on dataset \(D\).
Given the model \(M\), and a sample \((x,y)\leftarrow\mathcal{D}\), we define a generic loss function \(\ell(M,x,y)\), which outputs a non-negative real number. For instance, \(\ell(M,x,y)\) can be the 0-1 loss function, defined as: \[\ell(M,x,y)=\begin{cases}1,\text{ if }M(x)\neq y\\ 0,\text{ otherwise}\end{cases}\] We define the true error of \(M\) as: \[L_{\mathcal{D}}(M)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(M,x,y)]\] Notice that for the 0-1 loss function, this means that \[L_{\mathcal{D}}(M)=\Pr_{(x,y)\sim\mathcal{D}}[M(x)\neq y]\] The empirical error of the model \(M\) over the dataset \(D\) having \(n\) elements \((x_{1},y_{1}),\dots,(x_{n},y_{n})\) is defined as: \[L_{D}(M)=\frac{1}{n}\sum_{i\in[n]}\ell(M,x_{i},y_{i})\] ### The Setting **The Scenario.** We consider two parties \(P_{1}\) and \(P_{2}\). For \(i\in\{1,2\}\), party \(P_{i}\)'s dataset is denoted \(D_{i}\). The two datasets come from a joint domain. In particular, the dataset \(D_{i}\) is a multiset of elements drawn i.i.d. from the domain under a distribution \(\mathcal{D}\). Thus, each \(D_{i}\) contains points of the form \((\mathbf{x},y)\). The parties wish to collaborate on their datasets \(D_{i}\). The features \(\mathbf{x}\) are shared in the open; whereas the labels \(y\) for each \(\mathbf{x}\) in \(D_{i}\) are to be kept secret from the other party. This scenario holds in applications where _gathering data_ (\(\mathbf{x}\)) _may be easy, but labelling is expensive_. For example, malware datasets (binaries of malware programs) are generally available to antivirus vendors, and often times features are extracted from these binaries using publicly known feature extraction techniques, such as the LIEF project (Zhu et al., 2017). However, labelling them with appropriate labels requires considerable work from (human) experts. **The Model.** Before the two parties reveal their datasets to each other, the parties want to have the guarantee that the collaboration would be _valuable_. We shall assume that party \(P_{1}\) already has a model \(M_{1}\) trained on data \(D_{1}\). \(P_{1}\) also has a labelled holdout data \(D_{hold}\) against which \(P_{1}\) tests the accuracy of \(M_{1}\). The goal of the interaction is to obtain a new model \(M_{2}\) trained on \(D_{1}\cup D_{2}\). \(M_{2}\)'s accuracy again is tested against \(D_{\text{hold}}\). The collaboration is defined to be _valuable_ for \(P_{1}\) if the accuracy of \(M_{2}\) is higher than the accuracy of \(M_{1}\) against \(D_{\text{hold}}\). In this paper, we only study _value_ from \(P_{1}\)'s perspective. We will consider a neural network trained via stochastic gradient descent as our canonical model. **The Holdout Dataset.** As mentioned above, \(P_{1}\) has a holdout dataset \(D_{\text{hold}}\) against which the accuracy of the models is evaluated. This is kept separate from the usual training-testing split of the dataset \(D_{1}\). It makes sense to keep the same holdout dataset to check how the model trained on the augmented/collaborated data performs. For instance, in many machine learning competitions teams compete by training their machine learning models on a publicly available training dataset, but the final ranking of the models, known as _private leaderboard_, is done on a hidden test dataset (Beng et al., 2017). This ensures that the models are not overfitted by using the test dataset as feedback for re-training. 
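As a minimal illustration of the notation above (the 0-1 loss, its empirical average over \(D_{\text{hold}}\), and the criterion for a _valuable_ collaboration from \(P_{1}\)'s perspective), the following Python sketch computes the empirical 0-1 loss on a holdout set and compares two models. All function and variable names are illustrative only and are not taken from our implementation.

```python
from typing import Any, Callable, Iterable, List, Tuple

Example = Tuple[Any, int]  # a holdout point (feature vector x, label y)

def empirical_01_loss(model: Callable[[Any], int],
                      holdout: Iterable[Example]) -> float:
    """Empirical 0-1 loss L_hold(M): the fraction of holdout points that M mislabels."""
    points: List[Example] = list(holdout)
    mistakes = sum(1 for x, y in points if model(x) != y)
    return mistakes / len(points)

def collaboration_is_valuable(m1: Callable[[Any], int],
                              m2: Callable[[Any], int],
                              holdout: Iterable[Example]) -> bool:
    """P1's criterion: the collaboration is valuable iff M2 beats M1 on D_hold."""
    points = list(holdout)
    return empirical_01_loss(m2, points) < empirical_01_loss(m1, points)
```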
We assume that \(D_{\text{hold}}\) is continually updated by adding new samples (e.g., malware never seen by \(P_{1}\)) labelled by human experts, and is more representative of the population than \(D_{1}\). For instance, the holdout dataset reflects the _concept drift_(Maksh et al., 2017) better than \(D_{1}\). If \(D_{2}\) happens to have the same _concept drift_ as \(D_{\text{hold}}\), then \(M_{2}\) (trained on \(D_{1}\cup D_{2}\)) would have better test accuracy than \(M_{1}\) against \(D_{\text{hold}}\). Alternatively, the holdout dataset could be more balanced than \(D_{1}\), e.g., if \(D_{1}\) has labels heavily skewed towards one class. Again, in this case, if \(D_{2}\) is more balanced than \(D_{1}\), then \(M_{2}\) will show better accuracy. We argue that it is easy for \(P_{1}\) to update \(D_{\text{hold}}\) than \(D_{1}\) as the latter requires more resources due to the difference in size. ### Privacy **Privacy Expectations.** We aim at the following privacy properties: * Datasets \(D_{1}\) and \(D_{\text{hold}}\), and model \(M_{1}\) should be hidden from \(P_{2}\). * The labels of dataset \(D_{2}\) should be hidden from \(P_{1}\). * Neither \(P_{1}\) nor \(P_{2}\) should learn \(M_{2}\), i.e., the model trained on \(D_{1}\cup D_{2}\). * Both parties should learn whether \(L_{\text{hold}}(M_{2})<L_{\text{hold}}(M_{1})\), where \(L_{\text{hold}}\) is the loss evaluated on \(D_{\text{hold}}\). **Threat Model.** We assume that the parties involved, \(P_{1}\) and \(P_{2}\), are honest-but-curious. This is a reasonable assumption since once collaboration is agreed upon, the model trained on clear data should be able to reproduce any tests to assess the quality of data pre-agreement. Why then would \(P_{1}\) not trust the labelling from \(P_{2}\)? This could be due to the low quality of \(P_{2}\)'s labels, for many reasons. For example, \(P_{2}\)'s expertise could in reality be below par. In this case, even though the labelling is done honestly, it may not be of sufficient quality. Furthermore, \(P_{2}\) can in fact lie about its labelling without the fear of being caught. This is due to the fact that technically there is no means available to \(P_{1}\) to assess how \(P_{2}\)'s labels were produced. All \(P_{2}\) needs to do is to provide the same labels before and after the collaborative agreement. As long as labelling is consistent, there is no fear of being caught. ### Background **Feedforward Neural Networks**. A fully connected feedforward neural network is modelled as a graph with a set of vertices (neurons) organised into layers and weighted edges connecting vertices in adjacent layers. The sets of vertices from each layer form a disjoint set. There are at least three layers in a neural network, one input layer, one or more hidden layers and one output layer. The number of neurons in the input layer equals the number of features (dimensions). The number of neurons in the output layer is equal to the number of classes \(K\). The vector of weights \(\mathbf{w}\) of all the weights of the edges constitutes the parameters of the network, to be learnt during training. We let \(R\) denote the number of weights in the last layer, i.e., the number of edges connecting to the neurons in the output layer. 
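To connect this description to code, below is a minimal PyTorch sketch of such a network. The layer sizes are illustrative placeholders (not the architecture used in our experiments), and \(R\) is recovered as the number of edge weights feeding the \(K\) output neurons.

```python
import torch.nn as nn

num_features = 4   # illustrative input dimension (one input neuron per feature)
hidden = 20        # illustrative hidden-layer width
K = 3              # number of classes = number of output neurons

model = nn.Sequential(
    nn.Linear(num_features, hidden),
    nn.Sigmoid(),              # sigmoid activation in the hidden layer
    nn.Linear(hidden, K),      # last layer; softmax/cross-entropy is applied at the loss
)

# R: number of weights (edges) connecting into the output layer (biases not counted).
R = model[2].weight.numel()    # = hidden * K = 60 for the sizes above
print(R)
```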
For a more detailed description of neural networks, see (Kal \(m\in\mathcal{P}\), the TLWE encryption of \(m\) under \(\mathbf{s}\) is defined as \(\mathbf{c}=(\mathbf{a},b)\in\mathbb{T}_{q}^{n+1}\), where \(\mathbf{a}\) is a vector of \(n\)-elements drawn uniformly at random from \(\mathbb{T}_{q}\), and \[b=\langle\mathbf{s},\mathbf{a}\rangle+m+e.\] To decrypt \(\mathbf{c}\) one computes: \[m^{*}=b-\langle\mathbf{s},\mathbf{a}\rangle,\] and returns the nearest element in \(\mathcal{P}\) to \(m^{*}\). The scheme is secure under the learning with errors (LWE) problem over the discretized torus (Liang et al., 2017; Chen et al., 2018): **Definition 3** (TLWE Assumption).: Let \(q,n\in\mathbb{N}\). Let \(\mathbf{s}=(s_{1},\ldots,s_{n})\) be a binary vector chosen uniformly at random. Let \(\hat{\chi}\) be an error distribution defined above. The learning with errors over the discretized torus (TLWE) problem is to distinguish samples chosen according to the following distributions: \[\mathcal{D}_{0}=\{(\mathbf{a},b)\mid\mathbf{a}\leftarrow\mathbb{T}_{q}^{n},b \leftarrow\mathbb{T}_{q}\},\] and \[\mathcal{D}_{1}=\{(\mathbf{a},b)\mid\mathbf{a}\leftarrow\mathbb{T}_{q}^{n},b= \langle\mathbf{s},\mathbf{a}\rangle+e,e\leftarrow\hat{\chi}\},\] where except for \(e\) which is sample according to the distribution \(\hat{\chi}\), the rest are sampled uniformly at random from the respective sets. ## 3. Lack of Domain Knowledge Before we give a privacy-preserving solution to our problem, we want to show that the problem does indeed have a solution in the clear domain. More precisely, we show that if party \(P_{2}\) lacks domain knowledge, then the party cannot come up with a dataset \(D_{2}\) such that \(L_{\text{hold}}(M_{2})<L_{\text{hold}}(M_{1})\), where \(M_{2}\leftarrow\mathcal{A}(D_{1}\cup D_{2})\) and \(M_{1}\leftarrow\mathcal{A}(D_{1})\). For simplicity we assume binary classification, although the results can be easily extended to multiclass classification. We also assume the 0-1 loss function. How do we define lack of domain knowledge? Since the feature vectors are public, \(P_{2}\) can easily obtain a set of raw inputs to obtain the feature vectors in \(D_{2}\). Thus, domain knowledge should be captured in the labels to the feature vectors in \(D_{2}\). We define lack of domain knowledge as \(P_{2}\) using an arbitrary labeling function \(f\) defined over any point \(x\in D_{2}\) as: \[f(x)=\begin{cases}1,&\text{with probability }p\\ 0,&\text{otherwise}\end{cases}, \tag{1}\] where \(p\in[0,1]\). We shall first show that if \(D\) is labelled according to \(f\), then \(M\) as returned by a learning algorithm \(\mathcal{A}\) (taking \(D\) as the input) will have: \[L_{\text{hold}}(M)=1/2.\] Using this we shall show that we cannot have \[L_{\text{hold}}(M_{2})<L_{\text{hold}}(M_{1}),\] where \(M_{2}\leftarrow\mathcal{A}(D_{1}\cup D_{2})\), and \(M_{1}\leftarrow\mathcal{A}(D_{1})\). **Lemma 1**.: _Let \(D_{\text{hold}}\) be a dataset. Let \(D\) be a dataset labelled according to \(f\) as defined in Equation (1). Let \(\mathcal{A}\) be a learning algorithm. Let \(M\leftarrow\mathcal{A}(D)\). Define \(\Pr[y=1\mid(x,y)\sim\mathcal{D}]=q\). Then if \(q\) is uniformly distributed over the real interval \([0,1]\), we have:_ \[L_{\text{hold}}(M)=1/2.\] Proof.: Consider an arbitrary point \((x,y)\in D_{\text{hold}}\). Since \(D_{\text{hold}}\) is sampled i.i.d. under \(\mathcal{D}\), it follows that \((x,y)\sim\mathcal{D}\). 
We consider the probability: \[\Pr_{(x,y)-D_{\text{hold}}}[M(x)\neq y]=\Pr_{(x,y)-\mathcal{D}}[M(x)\neq y].\] Averaging the above probability over all \(m\) points gives us \(L_{\text{hold}}(M)\). Fix a \(q\in[0,1]\). Dropping subscripts, we have \[\Pr[M(x)\neq y\mid q] =\Pr[M(x)\neq y\mid y=1]\Pr[y=1]\] \[+\Pr[M(x)\neq y\mid y=0]\Pr[y=0]\] \[=\Pr[M(x)=0\mid y=1]\Pr[y=1]\] \[+\Pr[M(x)=1\mid y=0]\Pr[y=0].\] Now, the learning algorithm \(\mathcal{A}\)'s input, i.e., \(D\), remains unchanged whether \(y\), i.e., the label of \(x\) in \(D_{\text{hold}}\), is equal to 0 and 1. This is because \(D\) is labelled by \(f\) which is independent of \(y\). Therefore, \[\Pr[M(x)=0\mid y=1]=\Pr[M(x)=0]\] and \[\Pr[M(x)=1\mid y=0]=\Pr[M(x)=1]\] We get: \[\Pr[M(x)\neq y\mid q] =\Pr[M(x)=0]\Pr[y=1]\] \[+\Pr[M(x)=1]\Pr[y=0]\] \[=\Pr[M(x)=0]q+\Pr[M(x)=1](1-q)\] \[=(1-\epsilon_{p})q+\epsilon_{p}(1-q)\] \[=\epsilon_{p}+(1-2\epsilon_{p})q,\] where \(\epsilon_{p}\) is the probability that \(M\) outputs 1. We use the subscript \(p\) to denote the dependence on the probability \(p\) in the labelling function \(f\). Therefore, over all \(q\) uniformly distributed over \([0,1]\), we have: \[\Pr[M(x)\neq y] =\int_{0}^{1}(\epsilon_{p}+(1-2\epsilon_{p})q)dq\] \[=\epsilon_{p}q\bigg{|}_{0}^{1}+(1-2\epsilon_{p})\frac{q^{2}}{2} \bigg{|}_{0}^{1}\] \[=\epsilon_{p}+(1-2\epsilon_{p})\frac{1}{2}=\frac{1}{2}\] It is perhaps not realistic to assume that all possible distributions of the labels are equally likely. Fortunately, the data custodian can also ensure that the loss without domain knowledge remains \(1/2\) by choosing a _balanced_ holdout dataset. **Lemma 2**.: _Let \(D_{\text{hold}}\) be a dataset such that \(\Pr[y=1\mid(x,y)\sim D_{\text{hold}}]=\frac{1}{2}\). Let \(D\) be a dataset labelled according to \(f\) as defined in Equation (1). Let \(\mathcal{A}\) be a learning algorithm. Let \(M\leftarrow\mathcal{A}(D)\). Then,_ \[L_{\text{hold}}(M)=1/2.\] Proof.: Consider again an arbitrary point \((x,y)\in D_{\text{hold}}\). We consider the probability: \[\Pr_{(x,y)-D_{\text{hold}}}[M(x)\neq y]\] As before, averaging the above probability over all \(m\) points gives us \(L_{\text{hold}}(M)\). Dropping subscripts, and using the same reasoning behind the independence of the output of \(M\) over the values of \(y\), we have \[\Pr[M(x)\neq y] =\Pr[M(x)\neq y\mid y=1]\Pr[y=1]\] \[+\Pr[M(x)\neq y\mid y=0]\Pr[y=0]\] \[=\Pr[M(x)=0\mid y=1]\Pr[y=1]\] \[+\Pr[M(x)=1\mid y=0]\Pr[y=0]\] \[=\Pr[M(x)=0]\Pr[y=1]\] \[+\Pr[M(x)=1]\Pr[y=0]\] \[=\Pr[M(x)=0]\frac{1}{2}+\Pr[M(x)=1]\frac{1}{2}\] \[=\frac{1}{2}(\Pr[M(x)=0]+\Pr[M(x)=1])=\frac{1}{2}\] What happens if \(D_{\text{hold}}\) is not balanced? Then, we may get a loss less than \(1/2\). To see this, assume that \(\Pr[y=1\mid(x,y)\sim D_{\text{hold}}]=q>\frac{1}{2}\). Consider the labelling function \(f\) which outputs \(1\) with probability \(p=1\), i.e., the constant function \(f(x)=1\). Then if \(M\)"faithfully" learns \(f\), we have: \[\Pr[M(x)\neq y] =\Pr[M(x)=0\mid y=1]\Pr[y=1]\] \[+\Pr[M(x)=1\mid y=0]\Pr[y=0]\] \[=\Pr[M(x)=1\mid y=0]\Pr[y=0]\] \[=\Pr[y=0]=1-q<\frac{1}{2}.\] Thus, it is crucial to test the model over a balanced dataset. Theorem 1 ().: _Let \(D_{1}\) be a dataset labelled arbitrarily. Let \(D_{2}\) be a dataset labelled according to \(f\) as defined in Equation (1). Let \(D_{\text{hold}}\) be a balanced dataset. Let \(\mathcal{A}\) be a learning algorithm. Let \(M_{1}\leftarrow\mathcal{A}(D_{1})\) and \(M_{2}\leftarrow\mathcal{A}(D_{1}\cup D_{2})\). 
Then,_ \[L_{\text{hold}}(M_{2})\geq L_{\text{hold}}(M_{1})\] Proof.: Assume to the contrary that \[L_{\text{hold}}(M_{2})<L_{\text{hold}}(M_{1}).\] In particular, the statement of the theorem holds if \(D_{1}\) is labelled according to \(f\) as defined in Equation (1). Then, from Lemma 2 it follows that \[L_{\text{hold}}(M_{1})=\frac{1}{2},\] implying: \[L_{\text{hold}}(M_{2})<\frac{1}{2}.\] Since \(D_{1}\) and \(D_{2}\) are both labelled by \(f\), it follows that \(D_{1}\cup D_{2}\) is also labelled by \(f\), contradicting Lemma 2. ## 4. Our Solution ### Intuition Consider the training of the neural network on \(D_{1}\cup D_{2}\), with weights \(\mathbf{w}\). Using the stochastic gradient descent (SGD) algorithm, one samples a batch \(B\), from which we calculate per sample loss \(L_{\mathbf{s}}(\mathbf{w})\), where \(s=(\mathbf{x},\mathbf{y})\in B\). Given this, we can compute the average loss over the batch via: \[L_{B}(\mathbf{w})=\frac{1}{|B|}\sum_{s\in B}L_{\mathbf{s}}(\mathbf{w})=\frac{1 }{|B|}\sum_{\begin{subarray}{c}s\in B\\ s\in D_{1}\end{subarray}}L_{\mathbf{s}}(\mathbf{w})+\frac{1}{|B|}\sum_{ \begin{subarray}{c}s\in B\\ s\in D_{2}\end{subarray}}L_{\mathbf{s}}(\mathbf{w}) \tag{2}\] As shown in Figure 1, everything in this computation is known to \(P_{1}\), except for the labels in \(D_{2}\). Thus, \(P_{1}\) can compute the gradients for samples in \(D_{1}\), but to update the weights, \(P_{1}\) needs the gradients for samples from \(D_{2}\). From Equation (2), we are interested in computing the loss through the samples in a batch \(B\) that belong to the dataset \(D_{2}\). Overloading notation, we still use \(B\) to denote the samples belonging to \(D_{2}\). The algorithm to minimize the loss is the stochastic gradient descent algorithm using backpropagation. This involves calculating the gradient \(\nabla L_{B}(\mathbf{w})\). As noted in (Han et al., 2017), if we are using the backpropagation algorithm, we only need to be concerned about the gradients corresponding to the last layer. Again, to simplify notation, we denote the vector of weights in the last layer by \(\mathbf{w}\). In Appendix A, we show that the gradient of the loss can be computed as: \[\nabla L_{B}(\mathbf{w})=\frac{1}{|B|}\sum_{s\in B}\sum_{i=1}^{K}p_{i}(s)\frac {\partial z_{i}(s)}{\partial\mathbf{w}}-\frac{1}{|B|}\sum_{s\in B}\sum_{i=1}^{ K}y_{i}(s)\frac{\partial z_{i}(s)}{\partial\mathbf{w}}, \tag{3}\] where \(y_{i}(s)\) is the \(i\)th label of the sample \(s\), \(p_{i}(s)\) is the probability of the \(i\)th label of sample \(s\), \(z_{i}(s)\) is the \(i\)th input to the softmax function for the sample \(s\), and \(K\) denotes the number of classes. The LHS term of this equation can be computed by \(P_{1}\) as this is in the clear. However, the RHS term requires access to the labels. If we encrypt the labels, the gradients calculated in Equation (3) will be encrypted, using the homomorphic property of the encryption scheme. This means that the gradient of the batch, as well as the weight updates will be encrypted as well. Thus, the entire training process after the first forward pass of the first epoch will be in the encrypted domain. While this presents one solution to our problem, i.e., obtaining an encrypted trained model, which could then be decrypted once the two parties wish to collaborate, existing line of works (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020) show that neural network training entirely in the encrypted domain is highly inefficient. 
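To make the two-term decomposition in Equation (3) concrete, the short sketch below (illustrative shapes only; PyTorch is used just for the autograd cross-check) computes the last-layer gradient of the softmax cross-entropy loss as the label-independent term minus the label-dependent term, and verifies that the result matches automatic differentiation.

```python
import torch

torch.manual_seed(0)
B, H, K = 4, 5, 3                          # batch size, hidden width, number of classes

h = torch.randn(B, H)                      # hidden activations feeding the last layer
W = torch.randn(K, H, requires_grad=True)  # last-layer weights w (R = K * H of them)
y = torch.nn.functional.one_hot(torch.randint(0, K, (B,)), K).float()  # one-hot labels

# Forward pass: z = W h, p = softmax(z), average cross-entropy over the batch.
z = h @ W.T
p = torch.softmax(z, dim=1)
loss = -(y * torch.log(p)).sum(dim=1).mean()
loss.backward()                            # autograd reference for dL/dW

# Equation (3): per sample, sum_i p_i * dz_i/dw is the outer product of p and h,
# and sum_i y_i * dz_i/dw is the outer product of y and h.
term_without_labels = torch.einsum('bk,bh->kh', p, h) / B   # computable by P1 in the clear
term_with_labels    = torch.einsum('bk,bh->kh', y, h) / B   # requires the (encrypted) labels
grad_eq3 = term_without_labels - term_with_labels

print(torch.allclose(grad_eq3, W.grad, atol=1e-6))          # True
```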
To give a sense of the cost, a single mini-batch of 60 samples can take anywhere from more than 30 seconds to several days, with dedicated memory ranging from 16GB to 250GB, using functional encryption or homomorphic encryption (Kumar et al., 2018). In some of our neural network implementations, we use a batch size of 128 with 100 epochs, which means that the time consumed for end-to-end training entirely in the encrypted domain would be prohibitive.

Figure 1. A batch will contain some points from \(D_{1}\) and some from \(D_{2}\). Only the labels from the points in \(D_{2}\) need to be kept private (shaded black) from \(P_{1}\).

Our idea is to take advantage of the fact that the feature vectors are in the clear, and hence it may be possible to decrypt the labels in each batch so that backpropagation can be carried out in cleartext, giving us a computational advantage over an all-encrypted solution. This is where we employ label differential privacy. A straightforward way to accomplish this is to let \(P_{2}\) add differentially private noise to all its labels and simply hand over its noisy dataset to \(P_{1}\), playing no further part (except receiving the accuracy result). However, this is less desirable from the utility point of view. Instead, we add noise to the gradients in each batch, with \(P_{2}\) interactively adding noise to the average gradient computed in each epoch, similar to what is done in (Zhu et al., 2019). ### Proposed Protocol Our solution is as follows. 1. To start, \(P_{2}\) sends only the encrypted form of its dataset \(D_{2}\) where each sample is \((\mathbf{x},\llbracket\mathbf{y}\rrbracket_{\mathbf{k}})\), where \(\mathbf{k}\) is \(P_{2}\)'s encryption key of a homomorphic encryption scheme. In particular, \(\llbracket\mathbf{y}\rrbracket_{\mathbf{k}}\) is a vector of \(K\) elements, each element of which is encrypted under \(\mathbf{k}\). 2. For each sample \(s\in B\) and \(1\leq i\leq K\), \(P_{1}\) computes \(\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\). This results in \(K\times|B|\) vectors of \(R\) elements each, where \(R\) is the number of weights in the last layer. 3. For each sample \(s\in B\) and \(1\leq i\leq K\), \(P_{1}\) does element-wise homomorphic scalar multiplication: \(\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\cdot\llbracket y_{i}(s)\rrbracket_{\mathbf{k}}=\llbracket y_{i}(s)\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\rrbracket_{\mathbf{k}}\). This amounts to a total of \(K\times|B|\times R\) homomorphic scalar multiplications. 4. \(P_{1}\) homomorphically adds: \[\sum_{s\in B}\sum_{i=1}^{K}\left\llbracket y_{i}(s)\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\right\rrbracket_{\mathbf{k}}=\left\llbracket\sum_{s\in B}\sum_{i=1}^{K}y_{i}(s)\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\right\rrbracket_{\mathbf{k}}=\llbracket N_{B}(\mathbf{w})\rrbracket_{\mathbf{k}}\,,\tag{4}\] which amounts to a total of \(K\times|B|\times R\) homomorphic additions. This results in an \(R\)-element vector encrypted under \(\mathbf{k}\). 5. \(P_{1}\) computes the sensitivity of the gradients of the loss function for the current batch. As shown in Appendix B this is: \[\Delta S_{B}(\mathbf{w})=\frac{2}{|B|}\max_{i,s}\left\lVert\frac{\partial z_{i}(s)}{\partial\mathbf{w}}\right\rVert_{2}\tag{5}\] 6. \(P_{2}\) computes \(\mathcal{N}(0,\sigma^{2}I_{R})\), where \(\sigma=1/\epsilon\) and \(I_{R}\) is the identity matrix of size \(R\times R\). \(P_{2}\) encrypts this under \(\mathbf{k}\), and sends \(\llbracket\mathcal{N}(0,\sigma^{2}I_{R})\rrbracket_{\mathbf{k}}\) to \(P_{1}\). 7.
\(P_{1}\) scalar multiplies (homomorphically) the sensitivity with the encrypted noise to obtain: \[\llbracket\mathcal{N}(0,(\Delta S_{B}(\mathbf{w})\sigma)^{2}I_{R})\rrbracket_{\mathbf{k}}\tag{6}\] 8. \(P_{1}\) "blinds" the encrypted quantity \(\llbracket N_{B}(\mathbf{w})\rrbracket_{\mathbf{k}}\), resulting in \(\llbracket N_{B}(\mathbf{w})+\boldsymbol{\mu}\rrbracket_{\mathbf{k}}\) (see below). \(P_{1}\) homomorphically adds the scaled noise vector and sends the following to \(P_{2}\): \[\llbracket N_{B}(\mathbf{w})+\boldsymbol{\mu}+\mathcal{N}(0,(\Delta S_{B}(\mathbf{w})\sigma)^{2}I_{R})\rrbracket_{\mathbf{k}}\tag{7}\] 9. \(P_{2}\) decrypts the ciphertext and sends the following to \(P_{1}\): \[N_{B}(\mathbf{w})+\boldsymbol{\mu}+\mathcal{N}(0,(\Delta S_{B}(\mathbf{w})\sigma)^{2}I_{R})\tag{8}\] 10. \(P_{1}\) subtracts \(\boldsymbol{\mu}\) and obtains \(N_{B}(\mathbf{w})+\mathcal{N}(0,(\Delta S_{B}(\mathbf{w})\sigma)^{2}I_{R})\). \(P_{1}\) can now plug this into Equation (3) and proceed with backpropagation in the unencrypted domain. **Adding a Random Blind.** The labels are encrypted under party \(P_{2}\)'s key. Therefore, the computation of the quantity \[\llbracket N_{B}(\mathbf{w})+\mathcal{N}(0,(\Delta S_{B}(\mathbf{w})\sigma)^{2}I_{R})\rrbracket_{\mathbf{k}}\] (without the blind) can be done homomorphically in the encrypted domain. At some point, this needs to be decrypted. However, decrypting this quantity will leak information about the model parameters to \(P_{2}\). That's why \(P_{1}\) uses the following construct to "blind" the plaintext model parameters. Namely, assume that encryption is done under TLWE. Then before decryption, the ciphertext will be of the form: \[b=\langle\mathbf{s},\mathbf{a}\rangle+m+e.\] \(P_{1}\) samples an element \(\mu\) uniformly at random from \(\mathcal{P}\) and adds it to \(b\). This then serves as a one-time pad, as the original message \(m\) can be any of the \(p\) possible messages in \(\mathcal{P}\). Once the ciphertext has been decrypted, party \(P_{1}\) receives \(m+\mu\), from which \(\mu\) can be subtracted to obtain \(m\). Remark 1.: _The parameter \(R\), i.e., the number of weights in the last layer, is being leaked here. We assume that this quantity, along with the batch size and the number of epochs, is known by party \(P_{2}\)._ Remark 2.: _The multiplication in Step 7 works for Gaussian noise because we can multiply a constant times a Gaussian distribution and still obtain a Gaussian with the scaled variance._ ## 5. Privacy and Security Analysis ### Proving Privacy In each epoch, we add Gaussian noise scaled to the sensitivity of the components of the gradient corresponding to the last layer of the neural network, which has \(R\) weights. Thus each batch is \(\epsilon\)-GLDP (Definition 1). Since the batches are disjoint in each epoch, we retain \(\epsilon\)-GLDP by invoking parallel composition (Definition 2). Over \(n\) epochs the mechanism is \(\sqrt{n}\epsilon\)-GLDP by invoking sequential composition (Definition 2). ### Proving Security We will prove security in the universal composability framework (Becker and Scholkopf, 1998; Scholkopf, 1998). Under this framework, we need to consider how we can define the ideal functionality for \(P_{2}\). More specifically, \(P_{2}\) applies differentially private (DP) noise at places in the protocol. One way around this is to assume that the ideal functionality applies DP noise to the labels of \(P_{2}\)'s dataset at the start, and uses these noisy labels to train the dataset.
However, this may cause issues with the amount of DP noise added in the ideal world vs the real world. To get around this, we assume that the ideal functionality does the same as what happens in the real-world, i.e., in each batch, the ideal functionality adds noise according to the sensitivity of the batch. We can then argue that the random variables representing the output in both settings will be similarly distributed. We assume \(R\), the number of weights in the last layer, \(|B|\), the batch size, and the number of epochs to be publicly known. **The ideal world.** In the ideal setting, the simulator \(\mathcal{S}\) replaces the real-world adversary \(\mathcal{B}\). The ideal functionality \(\mathcal{F}\) for our problem is defined as follows. The environment \(\mathcal{Z}\) hands to \(\mathcal{F}\) inputs \(D_{1}\), hold-out set \(D_{\text{hold}}\), learning algorithm \(\mathcal{A}\), \(L_{\text{hold}}(M_{1})\), where \(M_{1}\leftarrow\mathcal{A}(D_{1})\), and the features of the dataset \(D_{2}\), i.e., without the labels, which is the input to party \(P_{1}\). The environment \(\mathcal{Z}\) also gives dataset \(D_{2}\) to \(\mathcal{F}\) which is the input to party \(P_{2}\). The ideal functionality sets \(D\gets D_{1}\cup D_{2}\). Then for each epoch, it samples a random batch \(B\) of size \(|B|\) from \(D\). If there is at least one element in \(B\) from \(D_{2}\), it computes the sensitivity of the gradients of the loss function for the current batch. It then adds DP-noise to the quantity \(N_{B}(\mathbf{w})\), and continues with backpropogation. At the end, the functionality outputs \(1\) if \(L_{\text{hold}}(M_{2})<L_{\text{hold}}(M_{1})\), where \(M_{2}\) is the resulting training model on dataset \(D\) using algorithm \(\mathcal{A}\). **Simulation for \(P_{1}\).** The simulation from \(\mathcal{S}\) is as follows. * At some point \(\mathcal{Z}\) will notify \(\mathcal{S}\) about \(P_{1}\) receiving the encrypted dataset \(D_{2}\). \(\mathcal{S}\) generates \(|D_{2}|\) fresh samples from the distribution \(\mathcal{D}_{0}\) from Definition 3, and adds one of them to each row of \(D_{2}\) as the purported encrypted label. \(\mathcal{S}\) reports this encrypted dataset to \(\mathcal{Z}\). * At some point \(\mathcal{Z}\) notifies \(\mathcal{S}\) about \(P_{1}\) receiving the encrypted \(R\)-element noise vector. \(\mathcal{S}\) again generates \(R\) fresh samples from \(\mathcal{D}_{0}\), and reports the resulting encrypted \(R\)-element vector as the supposed noise vector to \(\mathcal{Z}\). Notice that, the length of this vector does not depend on the number of elements in the batch belonging to \(D_{2}\), as long as there is at least one. If none are from \(D_{2}\), then \(P_{1}\) will not ask \(P_{2}\) to send a noise vector. * To simulate sending the encrypted, blinded and noise-added \(R\)-element vector \(N_{B}(\mathbf{w})\), \(\mathcal{S}\) generates \(R\) fresh samples from \(\mathcal{D}_{0}\), and reports the resulting encrypted \(R\)-element vector to \(\mathcal{Z}\). * Lastly, \(\mathcal{Z}\) notifies \(\mathcal{S}\) that \(P_{1}\) has received the decrypted version of the quantity sent in the previous step. \(\mathcal{S}\) generates \(R\) random elements from \(\mathcal{P}\) (the plaintext space), and reports this to \(\mathcal{Z}\). This simulates the blinds added in Step 8 of the protocol. * If this is the last epoch, \(\mathcal{S}\) sends the output returned by \(\mathcal{F}\) to \(P_{1}\) dutifully to \(\mathcal{Z}\). 
In this case the output of the environment is statistically indistinguishable from the ideal case under the TLWE assumption over the torus (Definition 3). **Simulation for \(P_{2}\).** The simulation from \(\mathcal{S}\) is as follows. * When \(\mathcal{Z}\) sends the initial input to \(P_{2}\), \(\mathcal{S}\) queries \(\mathcal{F}\) to obtain this input. At the same time, \(\mathcal{S}\) generates fresh coins and uses them to generate the key k for the TLWE scheme. These coins will also be used later to generate differentially private noise. \(\mathcal{S}\) reports these coins to \(\mathcal{Z}\). * The \(R\)-element noise vector is generated by \(\mathcal{S}\) in a straightforward manner using the coins generated in the first step. * At some point \(\mathcal{Z}\) notifies \(\mathcal{S}\) that \(P_{2}\) received the encrypted, blinded and noise-added \(R\)-element vector \(N_{B}(\mathbf{w})\). \(\mathcal{S}\) generates \(R\) elements uniformly at random from \(\mathcal{P}\), and then encrypts the resulting \(R\)-element vector under k. \(\mathcal{S}\) hands this to \(\mathcal{Z}\) as the purported received vector. * If this is the last epoch, \(\mathcal{S}\) sends the output returned by \(\mathcal{F}\) to \(P_{2}\) dutifully to \(\mathcal{Z}\). In this case, the simulation is perfect. We are using the fact that the blinds used completely hide the underlying plaintext. ## 6. Experimental Evaluation In this section, we evaluate our protocol over several datasets. We first show that joining two datasets does indeed improve the accuracy of the model. We then implement our protocol on Zama's concrete TFHE library (Zam and others, 2018), show the training time and accuracy of the trained model and the regimes of \(\epsilon\) where the model shows accuracy improvement over \(P_{1}\)'s dataset but less than the accuracy if the model is to be trained without differential privacy. This setting is ideal for \(P_{2}\) as this would persuade \(P_{1}\) to go ahead with the collaboration without already revealing the fully improved model. **Configurations and Datasets.** For all the experiments, we used a 64-bit Ubuntu 22.04.2 LTS with 32G RAM and the 12th Gen Intel(R) Core(TM) i7-12700 CPU. We did not use GPUs for our experiments. Table 1 shows the common hyperparameters used in our experiments. When using different values (e.g., number of neurons in the hidden layer), we attach them with the experimental results. Table 2 illustrates the brief statistics (number of records, number of features, number of classes and number of samples in each class) of the datasets. All datasets contain (almost) balance classes, except for Drebin, where the benign class has twice the number of samples as the malware class. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Hyperparameters** & **Configuration** \\ \hline activation function & sigmoid \\ output function & softmax \\ loss function & cross entropy \\ optimiser & SGD \\ \(L2\) regulariser & 0.01 \\ \hline \hline \end{tabular} \end{table} Table 1. Common Hyperparameters for All Models. 
\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Dataset** & **\#Rec.** & **\#Feat.** & **\#Classes** & **Class distribution** \\ \hline Iris (Krishnam and others, 2018) & 150 & 4 & 3 & (50, 50, 50) \\ Seeds (Krishnam and others, 2018) & 210 & 7 & 3 & (70, 70, 70) \\ Wine (Krishnam and others, 2018) & 178 & 13 & 3 & (59, 71, 58) \\ Abrupto (Abrupto, 2018) & 10,000 & 4 & 2 & (5,000, 5,000) \\ Drebin (Abrupto, 2018) & 16,680 & 3,506 & 2 & (5,560, 11,120) \\ \hline \hline \end{tabular} \end{table} Table 2. Datasets Statistics.

### Model Improvement by Joining Datasets In Section 3, we showed that if \(P_{2}\) does not have domain knowledge, then training the model \(M_{2}\) on the dataset \(D_{1}\cup D_{2}\) will not increase the accuracy of \(M_{2}\) on \(D_{\text{hold}}\), as long as \(D_{\text{hold}}\) is a balanced dataset. We are now interested in knowing the other side of the coin: if \(P_{2}\)'s dataset \(D_{2}\) is indeed of better quality, does this result in improved performance of \(M_{2}\)? To demonstrate this, we will use the scenario where \(D_{1}\) is small and imbalanced, i.e., for one of the labels, it has under-represented samples. On the other hand, \(D_{2}\) is larger and more balanced. Intuitively, joining the two should show a substantial improvement in accuracy. We use the first 10,000 samples from the mixed_1010_abrupto of the Abrupto dataset (D...).

**Accuracy.** Figure 2 depicts the ROC curve from our model versus the PyTorch implementation on the Iris dataset (Section 6.4). As can be seen, our model faithfully reproduces the results from PyTorch. We are therefore convinced that our implementation from scratch is an accurate representation of the model from PyTorch. **Implementation Time.** Table 6 compares the average (of 10 runs) training time (in seconds) of our model implemented from scratch (Model \(M_{2}\) Plaintext, no DP) and the PyTorch baseline model (Model \(M_{2}\) Baseline) on all datasets in Table 2. We observe that, when training on plaintext, our implementation is about 100 to 300 times slower than the PyTorch baseline. We analysed the source code of our implementation and the PyTorch documentation and concluded that three factors explain this gap.

* Due to the need to incorporate Gaussian-distributed noise, very negative numbers may occur, which can lead to overflow when they enter the sigmoid activation function. Therefore, for all model training, the program uses the numerically stable form of the sigmoid given in Equation (9) (see the sketch after this list). \[\text{sigmoid}(x)=\begin{cases}\frac{1}{1+e^{-x}},&\text{if }x\geq 0,\\ \frac{e^{x}}{1+e^{x}},&\text{otherwise}.\end{cases}\tag{9}\]
* PyTorch optimises the low-level implementation of training, e.g., via dynamic computational graphs, which means the network behaviour can be changed programmatically at training time to accelerate the training process.
* We implemented additional functions to accommodate the Zama library (Zama, 2017), which we had to run even in the plaintext case. The main departure from the standard neural network training process is the way we treat the gradients. As discussed in Section 4.1, we calculate the two terms in Equation (3) separately due to the partially encrypted labels. Hence, in our code, we defined a separate function that prepares a batch of forward-propagation results and splits the gradient computation into these two terms.
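As referenced in the first bullet above, here is a minimal NumPy sketch of the overflow-safe sigmoid of Equation (9); it is an illustration rather than our actual training code. The exponential is only ever applied to non-positive arguments, so very negative inputs no longer overflow.

```python
import numpy as np

def stable_sigmoid(x: np.ndarray) -> np.ndarray:
    """Piecewise sigmoid of Equation (9): exp() only ever sees non-positive arguments."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))   # 1 / (1 + e^{-x})    for x >= 0
    ex = np.exp(x[~pos])                        # e^{x} is safe when x < 0
    out[~pos] = ex / (1.0 + ex)                 # e^{x} / (1 + e^{x}) otherwise
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # [0.  0.5 1. ]
```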
**Encrypted Domain.** In order to use Zama's current Concrete framework for TFHE (Zama, 2017), decimal values must be converted to integers. For our experiments, we chose a precision level of six decimal places. Additionally, in order to verify that the encryption operations do not affect the accuracy of our implementation, we use the Iris dataset for a simple experiment. After setting the same initialisation of the weights and the same order of reading the datasets, we train our from-scratch model on the plaintext and on the ciphertext versions of the Iris dataset and compare the trained model parameters. From Table 5 we can see that the weights of the two models are almost the same up to four decimal places.

\begin{table} \begin{tabular}{c|c c c c|c} \hline \hline \multicolumn{6}{c}{**Input Layer to Hidden Layer 1**} \\ \hline Type & \multicolumn{4}{c|}{Weight} & Bias \\ \hline \multirow{4}{*}{Model Trained on Plaintext} & 0.6511 & 0.0704 & 0.2909 & 1.0547 & \(-\)0.3171 \\ & 0.4029 & \(-\)0.4721 & 1.2437 & 0.6597 & 0.472 \\ & \(-\)0.374 & 0.5 & \(-\)0.9141 & \(-\)0.512 & 0.0603 \\ & \(-\)0.3569 & 0.3970 & \(-\)0.9173 & \(-\)0.8551 & \(-\)0.0134 \\ \hline \multirow{4}{*}{Model Trained on Ciphertext} & 0.6511 & 0.0704 & 0.2909 & 1.0546 & \(-\)0.3171 \\ & 0.4029 & \(-\)0.4721 & 1.2437 & 0.6597 & 0.472 \\ & \(-\)0.374 & 0.4999 & \(-\)0.9141 & \(-\)0.5119 & 0.0603 \\ & \(-\)0.3569 & 0.3907 & \(-\)0.9173 & \(-\)0.8551 & \(-\)0.0135 \\ \hline \multicolumn{6}{c}{**Hidden layer 1 to Output Layer**} \\ \hline Type & \multicolumn{4}{c|}{Weight} & Bias \\ \hline \multirow{4}{*}{Model Trained on Plaintext} & \(-\)1.0483 & \(-\)1.8553 & 1.194 & 1.3057 & - \\ & \(-\)0.4617 & 0.8366 & 0.2772 & \(-\)0.0569 & - \\ & 0.9567 & 0.822 & \(-\)0.9533 & \(-\)1.4343 & - \\ \hline \multirow{4}{*}{Model Trained on Ciphertext} & \(-\)1.0483 & \(-\)1.8553 & 1.194 & 1.3057 & - \\ & \(-\)0.4617 & 0.8366 & 0.2772 & \(-\)0.0569 & - \\ & 0.9566 & 0.822 & \(-\)0.9533 & \(-\)1.4343 & - \\ \hline \hline \end{tabular} \end{table} Table 5. Weights and Biases of Models Trained on Plaintext and Ciphertext (hidden neurons: 4, batch size: 16, learning rate: 0.1, epoch: 50).

Figure 2. ROC Curves of Model from Scratch and PyTorch Model on Iris Dataset.

### Protocol Implementation We use the Concrete framework from Zama (Zama, 2017) to implement the TFHE components of our protocol. Note that not all the operations in our protocol require homomorphic operations. The circuit to compute the protocol operations for one sample is given in Figure 3. Given a sample \(s\) of the batch \(B\), from Equation (4) in Step 3 of the protocol, we first need to multiply the derivatives of the inputs to the softmax function with the encrypted labels. Dropping subscripts and abbreviating notation, these are shown as \(z_{\mathbf{w}}\) and \(y\) respectively in the figure. Next, Step 6 of the protocol samples noise of scale \(\mathcal{N}(0,\sigma^{2}I_{R})\), which for a single entry with Gaussian differential privacy is \(\eta=\mathcal{N}(0,1/\epsilon^{2})\). The current implementation of Concrete does not allow negative numbers as input. We therefore homomorphically subtract \(|\eta|\) from \(\eta+|\eta|\) to equivalently obtain \(\eta\). Note that the two quantities are both greater than or equal to \(0\). The noise is then multiplied by the sensitivity of the batch \(\Delta S(\mathbf{w})\) (dropping the subscript), according to Equation (6) in Step 7.
On the other side of the circuit, once decrypted, we get the blinded and differentially private gradients (Equation (8)), which yield the differentially private gradients after subtracting the blind. **Implementation Notes.** As Concrete only accepts integer inputs, we need to convert the respective inputs to integer equivalents. To do so, we compute \(z_{\mathbf{w}}\) and the blind \(\mu\) to six decimal places and multiply them by a factor of \(10^{6}\). On the other hand, we preserve \(\Delta S(\mathbf{w})\) and \(\eta\) to three decimal places and multiply them by a factor of \(10^{3}\) to obtain integer equivalents. The reason for setting a different precision level for these two quantities is mainly the magnitude of \(\Delta S_{B}(\mathbf{w})\). In our experiments, after converting inputs to integers, \(\Delta S_{B}(\mathbf{w})\) is of the order of \(10^{3}\) for some datasets. In Figure 3, we see that \(\Delta S_{B}(\mathbf{w})\) is multiplied with the noise \(\eta\). Thus, we also retain \(\eta\) to three decimal places and multiply it by \(10^{3}\). The quantity \(\Delta S_{B}(\mathbf{w})\times\eta\) therefore expands to the same multiple as the rest of the integer quantities, i.e., \(10^{6}\); the label \(y\) is itself an integer and needs no scaling. ### Ciphertext vs Plaintext Protocol To ensure that the protocol replicates the scenario of Section 6.1, we evaluate our protocol against the unencrypted setting. Namely, we evaluate model \(M_{1}\) on dataset \(D_{1}\) and model \(M_{2}\) on dataset \(D_{1}\cup D_{2}\), both unencrypted and without differential privacy, and model \(\tilde{M}_{2}\) on dataset \(D_{1}\cup D_{2}\) computed through our protocol (with encryption and differential privacy). To evaluate this, we use the datasets in Table 2. We retain 3,506 features from the Drebin dataset, which is a subset of all available features. The feature classes retained include 'api_call', 'call', 'feature', 'intent', 'permission', 'provider', and 'real_permission'. For each dataset and each value of \(\epsilon\), we report the average (of 10 runs) results. Each time, the dataset is re-partitioned and 30% of the data is randomly used as \(D_{\text{hold}}\). Due to the different sizes of the datasets, we divide \(D_{1}\) and \(D_{2}\) differently for different datasets. See details below.

* For Iris, Seeds and Wine, \(D_{1}\) is 10% of the total data and \(D_{2}\) is 60% of the total data.
* For Abrupto and Drebin, \(D_{1}\) is 1% of the total data and \(D_{2}\) is 69% of the total data.

Note that we do not artificially induce a skewed distribution of samples in \(D_{1}\) for these experiments, as was done in Section 6.1. Table 6 shows the results of our experiments. The column \(\epsilon\) contains the overall privacy budget. For the Iris, Seeds and Wine datasets, training through our protocol (\(\tilde{M}_{2}\)) is about 100 times slower than training the same model in the clear (\(M_{2}\)). However, this is still a reasonable time of between 40 and 60 seconds for these three datasets. The Seeds dataset takes the longest training time, followed by Wine and then Iris, consistent with the sizes of these datasets. The \(\epsilon=100\) regime gives little to no privacy. Looking at the numbers corresponding to those rows, we see that the accuracy of \(\tilde{M}_{2}\) is almost identical to \(M_{2}\). This shows that the encrypted portion of our protocol does not incur any accuracy loss.
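Returning to the integer encodings described under Implementation Notes above, the following sketch reproduces, in the clear and for a single sample, the scaling by \(10^{6}\) and \(10^{3}\) and the blind-and-noise arithmetic of the circuit. All values and names are illustrative, and the homomorphic operations of the real circuit are replaced here by plain integer arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
SCALE_BIG, SCALE_SMALL = 10**6, 10**3    # six and three decimal places, as in the text

R, K = 8, 3                              # illustrative: last-layer weights, classes
z_w = rng.normal(size=(K, R))            # dz_i/dw for one sample (known to P1 in the clear)
y   = np.eye(K, dtype=int)[1]            # one-hot label (encrypted under P2's key in the protocol)
eta = rng.normal(size=R)                 # P2's per-entry Gaussian noise, N(0, 1/eps^2)
dS  = 0.042                              # batch sensitivity Delta S_B(w), Equation (5)
mu  = rng.uniform(-1, 1, size=R)         # P1's random blind

# Integer encodings (Concrete only accepts integers).
z_w_int = np.round(z_w * SCALE_BIG).astype(np.int64)
mu_int  = np.round(mu  * SCALE_BIG).astype(np.int64)
eta_int = np.round(eta * SCALE_SMALL).astype(np.int64)
dS_int  = int(round(dS * SCALE_SMALL))

# What the circuit computes (here without encryption): sum_i y_i * dz_i/dw + blind + dS * eta.
N_int = (y[:, None] * z_w_int).sum(axis=0)           # label-dependent term for this sample
cipher_like = N_int + mu_int + dS_int * eta_int      # dS * eta also lands at the 10^6 scale

# After decryption, P1 removes the blind and rescales back to real values.
noisy_grad_term = (cipher_like - mu_int) / SCALE_BIG
print(noisy_grad_term[:3])
```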
For each dataset, we also show values of \(\epsilon\) between 0.25 and 0.5, where the accuracy of \(\tilde{M}_{2}\) lies nicely in between that of \(M_{1}\) and \(M_{2}\). Thus, this value of \(\epsilon\) can be used by \(P_{2}\) such that \(\tilde{M}_{2}\) shows an improvement over \(M_{1}\), yet at the same time, training over the noiseless data promises even more improvement. The appropriate choice of \(\epsilon\) depends on the size of the dataset.

Figure 3. Our Homomorphic Encryption Circuit in Zama's Concrete.

To conclude our experiments, we use two larger datasets. One is a synthetic dataset, Abrupto, and the other one is a real-world Android malware dataset, Drebin. Table 6 also shows the results of these two larger datasets. In terms of training time, unlike the small datasets, there is a significant difference between these two. This is due to the large number of features in the Drebin dataset, which is around 876 times the input feature count of the Abrupto dataset. When \(\epsilon=100\), the accuracy of \(\widetilde{M_{2}}\) is almost the same as, or even better than, that of \(M_{2}\). For the Abrupto dataset, at \(\epsilon\in[0.15,0.5]\) we find the spot where the accuracy of \(\widetilde{M_{2}}\) lies between that of models \(M_{1}\) and \(M_{2}\). For the Drebin dataset, the same holds with \(\epsilon\in[0.3,0.5]\). ### Discussion - Privacy Budget, Training Time and Limitations **Setting \(\epsilon\) for Data Collaboration.** The value of \(\epsilon\) used depends on the size of the dataset, with a smaller \(\epsilon\) required for larger datasets. However, for all the datasets used in the experiments, for \(\epsilon\) in the range \((0.3,0.5)\) the accuracy of the model \(\widetilde{M_{2}}\) is higher than that of \(M_{1}\) and lower than that of \(M_{2}\). Thus, party \(P_{2}\) can set an \(\epsilon\) within this range, which provides a visible benefit for \(P_{1}\) to agree to the data collaboration while not releasing too much about the performance attainable from \(P_{2}\)'s own data. **Fast Training of Our Protocol.** As observed in Table 6, the training time of our protocol (over ciphertext) is many orders of magnitude shorter than that of protocols using entirely FHE operations reported in (Friedman et al., 2017; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018). This is a direct result of the intuition behind our protocol. That is, we managed to keep the forward and backward propagations in the clear even though some labels from the training data are encrypted. To compare the training time of our protocol against an end-to-end encrypted solution for neural network training, we look at the work from (Friedman et al., 2017). They use polynomial approximations for the activation functions (e.g., sigmoid) to allow homomorphic operations. In one set of experiments, they train a neural network with one hidden layer over the Crab dataset, which has 200 rows and two classes. This dataset is comparable in size and number of classes to the Iris dataset used by us. They implement the homomorphically encrypted training of the neural network using HELib (Friedman et al., 2017). The time required for training one batch per epoch is 217 seconds (Table 3a in (Friedman et al., 2017)). For 50 epochs, and processing all batches in parallel, this amounts to 10,850 seconds. We note that after each round the ciphertext is sent to the client to re-encrypt and send fresh ciphertexts back to the server in order to reduce noise due to homomorphic operations.
Thus, this time is a crude lower bound on the total time. In comparison, our protocol takes a total time of around 42 seconds for the entire 50 epochs over the comparable Iris dataset with one hidden layer (Table 6). With a more optimised implementation (say via PyTorch), this time can be reduced even further. **Implementation Limitations.** The minimum value of the total budget \(\epsilon\) tested by us is 0.1. This is because of the limitation of Concrete in handling high-precision real numbers (as they need to be converted into integers). When we inject differentially private noise using a (too) small \(\epsilon\), it is highly likely to generate noise values of large absolute value, which leads to the float overflow problem for the sigmoid. In addition to that, our implementation of a neural network is many orders of magnitude slower than the PyTorch benchmark, even though our accuracy matches that of the PyTorch baseline. If we were able to access the gradients in a batch from the PyTorch implementation, then we could significantly accelerate the proposed protocol. For instance, training the model \(\widetilde{M_{2}}\) through our protocol takes 5-6 times longer than training \(M_{2}\) (via our implementation) on the Drebin dataset (see Table 6). This means that we could potentially run our protocol in between 14 and 18 seconds if the protocol were run over PyTorch's implementation. ## 7. Related Work The closest work to ours is that of Yuan et al (Yuan et al., 2019). They assume a scenario where one of the two parties holds the feature vectors and the other holds the labels. The goal is to jointly train a neural network on this dataset. At the end of the protocol, the first party learns the trained model whereas the second party does not learn anything. Like us, they use label differential privacy to obtain a more computationally efficient solution. Our scenario deals with enabling party \(P_{1}\) to assess the quality of the resulting dataset without knowing the model, where \(P_{1}\) does not initially trust the labelling from \(P_{2}\). Another major difference in our case is that we use FHE instead of secret sharing to compute model parameters (Yuan et al., 2019). Apart from Yuan et al (Yuan et al., 2019), several works have investigated training neural network models entirely in the encrypted domain, encompassing both features and labels (Friedman et al., 2017; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018; Dosovitskiy et al., 2018). Hesamifard et al (Friedman et al., 2017) propose the CryptoDL framework, which employs Somewhat Homomorphic Encryption (SWHE) and Leveled Homomorphic Encryption (LHE) on approximated activation functions to facilitate interactive deep neural network training over encrypted training sets. However, the training time is prolonged even on small datasets (e.g., on the Crab dataset with dimensions of \(200\times 6\) cells, it takes over 200 seconds per epoch/iteration during the training phase). Furthermore, the omission of details regarding the CPU clock speed (frequency) and the number of hidden layer neurons in their experiments makes the reported training time hard to compare fairly. Nandakumar et al (Nandakumar et al., 2018) introduce the first fully homomorphic encryption (FHE)-based stochastic gradient descent technique (FHE-SGD). As a pioneering work in this field, FHE-SGD investigates the feasibility of training a DNN in the fixed-point domain.
Nevertheless, it encounters substantial training time challenges due to the utilisation of BGV-lookup-table-based sigmoid activation functions. Lou et al (Lou et al., 2017) present the Glyph framework, which expedites training for deep neural networks by alternating between the TFHE (Fast Fully Homomorphic Encryption over the Torus) and BGV (Cowel et al., 2018) cryptosystems. However, Glyph relies heavily on transfer learning to curtail the required training epochs/iterations, which is what yields its significant reduction in overall training time for neural networks. It should be noted that Glyph's applicability is limited in scenarios where a pre-trained teacher model is unavailable. Xu et al propose CryptoNN (Xu et al., 2019), which employs functional encryption for inner-product (Bogor et al., 2018) to achieve secure matrix computation. However, the realisation of secure computation in CryptoNN necessitates the presence of a trusted authority for the generation and distribution of both public and private keys, a dependency that potentially compromises the security of the approach. NN-emd (Kumar et al., 2017) extends CryptoNN's capabilities to support training a secure DNN over vertically partitioned data distributed across multiple users. A related line of work looks at techniques to check the validity of inputs without revealing them. For instance, one can use zero-knowledge range proofs (Kumar et al., 2017) to check whether an input is within an allowable range without revealing the input, e.g., age. This has, for instance, been used in privacy-preserving joint data analysis schemes such as Drynva (Dynka, 2017) and Prio (Prio, 2017) to ensure that attribute values of datasets are within the allowable range. However, in our case, we do not assume that the party \(P_{2}\) submits any label that is out of range. Instead, the party may simply lack the domain expertise to label feature vectors correctly, which cannot be determined through input validity checking. ## 8. Conclusion We have shown how two parties can assess the value of their potential machine learning collaboration without revealing their models and respective datasets. With the use of label differential privacy and fully homomorphic encryption over the torus, we are able to construct a protocol for this use case which is many orders of magnitude more efficient than an end-to-end homomorphic encryption solution. Our work can be improved in a number of ways. One direction is to go beyond the honest-but-curious model and assume malicious parties. A key challenge is to maintain efficiency, as several constructs used by us cannot be used in the malicious setting, e.g., the use of the random blind. Due to several limitations in accessing components in PyTorch's neural network implementation and the integer input requirement in Zama's Concrete TFHE framework, our implementation falls short of the speedups that can potentially be achieved. As a result, we are also not able to test our protocol on larger datasets (say, 100k or more rows). Future versions of this framework may remove these drawbacks. Finally, there \begin{table} \begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{\(\epsilon\)} & \multicolumn{2}{c|}{Model \(M_{1}\)} & \multicolumn{2}{c|}{Model \(M_{2}\)} & \multicolumn{2}{c|}{Model \(\widetilde{M_{2}}\)} & \multicolumn{2}{c}{Model \(M_{2}\)} \\ & & \multicolumn{2}{c|}{Plaintext, no DP} & \multicolumn{2}{c|}{Plaintext, no DP} & \multicolumn{2}{c|}{Our protocol} & \multicolumn{2}{c}{PyTorch baseline} \\ & & Time(s) & Test Acc. & Time(s) & Test Acc. & Time(s) & Test Acc. & Time(s) & Test Acc. 
\\ \hline \multirow{8}{*}{Iris} & 0.1 & 0.0619 & 0.7289 & 0.4244 & 0.8467 & 40.4792 & 0.3289 & 0.0078 & 0.8467 \\ & 1 & 0.0624 & 0.7289 & 0.4268 & 0.8467 & 41.9663 & 0.8111 & 0.0079 & 0.8467 \\ & 10 & 0.0616 & 0.7289 & 0.4248 & 0.8467 & 41.9859 & 0.8311 & 0.0079 & 0.8467 \\ & 100 & 0.0619 & 0.7222 & 0.4236 & 0.8444 & 41.6622 & 0.8311 & 0.0079 & 0.8444 \\ \cline{2-10} & 0.3 & 0.0618 & 0.7289 & 0.4247 & 0.8467 & 40.4927 & 0.7311 & 0.0079 & 0.8467 \\ & 0.5 & 0.0619 & 0.7289 & 0.4255 & 0.8467 & 41.2886 & 0.7778 & 0.0078 & 0.8467 \\ \hline \multirow{8}{*}{Seeds} & 0.1 & 0.0896 & 0.8079 & 0.6216 & 0.8762 & 55.7665 & 0.3429 & 0.0084 & 0.8762 \\ & 1 & 0.0889 & 0.8079 & 0.6124 & 0.8762 & 54.9405 & 0.8429 & 0.0084 & 0.8762 \\ & 10 & 0.0884 & 0.8079 & 0.6120 & 0.8762 & 55.8102 & 0.8651 & 0.0083 & 0.8762 \\ & 100 & 0.0902 & 0.8079 & 0.6219 & 0.8762 & 57.3169 & 0.8783 & 0.0083 & 0.8762 \\ \cline{2-10} & 0.3 & 0.0897 & 0.8079 & 0.6204 & 0.8762 & 55.9801 & 0.8057 & 0.0083 & 0.8762 \\ & 0.5 & 0.0885 & 0.8079 & 0.6126 & 0.8762 & 54.2402 & 0.8444 & 0.0083 & 0.8762 \\ \hline \multirow{8}{*}{Wine} & 0.1 & 0.1161 & 0.7981 & 0.5265 & 0.9302 & 51.0531 & 0.3679 & 0.0105 & 0.9302 \\ & 1 & 0.1046 & 0.7925 & 0.5311 & 0.9472 & 52.0632 & 0.9253 & 0.0106 & 0.9472 \\ & 10 & 0.0745 & 0.8283 & 0.5289 & 0.9283 & 50.3593 & 0.9296 & 0.0089 & 0.9283 \\ & 100 & 0.0747 & 0.8283 & 0.5313 & 0.9283 & 49.9735 & 0.9315 & 0.0089 & 0.9283 \\ \cline{2-10} & 0.25 & 0.0747 & 0.8283 & 0.5322 & 0.9283 & 50.4241 & 0.8358 & 0.0081 & 0.9283 \\ & 0.3 & 0.1155 & 0.8604 & 0.5205 & 0.9660 & 51.9729 & 0.9226 & 0.0104 & 0.9660 \\ \hline \multirow{8}{*}{Abrupto} & 0.1 & 0.4212 & 0.8399 & 28.3430 & 0.9077 & 1973.8709 & 0.6377 & 0.2687 & 0.9077 \\ & 1 & 0.4108 & 0.8331 & 27.8466 & 0.9057 & 1917.7900 & 0.9067 & 0.2247 & 0.9057 \\ & 10 & 0.4065 & 0.8331 & 27.7893 & 0.9055 & 1981.4878 & 0.9051 & 0.2226 & 0.9055 \\ & 100 & 0.4082 & 0.8331 & 28.0041 & 0.9067 & 1922.9672 & 0.9089 & 0.2238 & 0.9067 \\ \cline{2-10} & 0.15 & 0.4214 & 0.8299 & 28.3819 & 0.9067 & 1994.0644 & 0.8579 & 0.2681 & 0.9067 \\ & 0.5 & 0.4198 & 0.8331 & 28.4917 & 0.9067 & 1953.7888 & 0.9038 & 0.2240 & 0.9067 \\ \hline \multirow{8}{*}{Drebin} & 0.1 & 8.5758 & 0.8571 & 628.0991 & 0.9470 & 3523.7918 & 0.6397 & 2.8677 & 0.9470 \\ & 1 & 8.4595 & 0.8680 & 635.2497 & 0.9507 & 3501.8622 & 0.9379 & 2.9562 & 0.9507 \\ \cline{1-1} \cline{2-10} & 10 & 8.4930 & 0.8530 & 623.5984 & 0.9492 & 3506.4661 & 0.9426 & 2.9583 & 0.9492 \\ \cline{1-1} & 100 & 8.5634 & 0.8871 & 628.6070 & 0.9489 & 3544.4972 & 0.9492 & 2.4226 & 0.9489 \\ \cline{1-1} \cline{2-10} & 0.3 & 8.6314 & 0.8571 & 638.8455 & 0.9470 & 3525.2417 & 0.8904 & 2.7122 & 0.9470 \\ \cline{1-1} & 0.5 & 8.4300 & 0.8743 & 632.0590 & 0.9496 & 3505.2984 & 0.9252 & 2.5718 & 0.9496 \\ \hline \hline \end{tabular} \end{table} Table 6. Training Time and Accuracy of \(M_{1}\) (dataset \(D_{1}\)), \(M_{2}\) (dataset \(D_{1}\)\(\cup\)\(D_{2}\)) in the clear, \(\widetilde{M}_{2}\) (dataset \(D_{1}\)\(\cup\)\(D_{2}\)) and a PyTorch baseline model (hidden neurons: 20, batch size: 256, learning rate: 0.1, epoch: 50 could be other ways in which two parties can check the quality of their datasets. We have opted for the improvement in the model as a proxy for determining the quality of the combined dataset.
2308.07651
Impurity and vortex states in the bilayer high-temperature superconductor $\mathrm{La}_3\mathrm{Ni}_2\mathrm{O}_7$
We perform a theoretical examination of the local electronic structure in the recently discovered bilayer high-temperature superconductor ${\mathrm{La}_3\mathrm{Ni}_2\mathrm{O}_7}$. Our method begins with a bilayer two-orbital tight-binding model, incorporating various pairing interaction channels. We determine superconducting order parameters by self-consistently solving the real-space Bogoliubov-de Gennes (BdG) equations, revealing a robust and stable extended s-wave pairing symmetry. We investigate the single impurity effect using both self-consistent BdG equations and non-self-consistent T-matrix methods, uncovering low-energy in-gap states that can be explained with the T-matrix approach. Additionally, we analyze magnetic vortex states using a self-consistent BdG technique, which shows a peak-hump structure in the local density of states at the vortex center. Our results provide identifiable features that can be used to determine the pairing symmetry of the superconducting ${\mathrm{La}_3\mathrm{Ni}_2\mathrm{O}_7}$ material.
Junkang Huang, Z. D. Wang, Tao Zhou
2023-08-15T08:58:36Z
http://arxiv.org/abs/2308.07651v2
Impurity and vortex states in the bilayer high-temperature superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) ###### Abstract We perform a theoretical examination of the local electronic structure in the recently discovered bilayer high-temperature superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Our method begins with a bilayer two-orbital tight-binding model, incorporating various pairing interaction channels. We determine superconducting order parameters by self-consistently solving the real-space Bogoliubov-de Gennes (BdG) equations, revealing a robust and stable extended s-wave pairing symmetry. We investigate the single impurity effect using both self-consistent BdG equations and non-self-consistent T-matrix methods, uncovering low-energy in-gap states that can be explained with the T-matrix approach. Additionally, we analyze magnetic vortex states using a self-consistent BdG technique, which shows a peak-hump structure in the local density of states at the vortex center. Our results provide identifiable features that can be used to determine the pairing symmetry of the superconducting La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material. ## I Introduction Superconductivity with a critical temperature (T\({}_{c}\)) of approximately 80 K has been recently discovered in the bulk bilayer nickelate compound La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressures between 14.0-43.5 GPa [1]. Notably, the average valence of the nickel ions in this compound is +2.5, associated with the average \(3d^{7.5}\) configuration--a significant departure from the traditionally observed \(3d^{9}\) configuration in previous infinite-layer nickelate superconductors [2]. This discovery positions the material as a novel platform for exploring high-T\({}_{c}\) superconductivity, consequently instigating a wave of both theoretical and experimental investigations [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. The low energy electronic structure of this compound, as determined through density-functional-theory calculations, is dominated by contributions from the Ni-\(3d_{z^{2}}\) and Ni-\(3d_{x^{2}-y^{2}}\) orbitals. Consequently, the bilayer two-orbital model (incorporating four energy bands) is widely employed to obtain the normal state energy bands [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. It was indicated experimentally [27] and proposed theoretically [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] that strong electronic correlations exist in the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) compound, pointing to a likely unconventional pairing mechanism. Identifying the pairing symmetry could therefore be instrumental in studying the superconducting pairing mechanism. To date, the pairing symmetry has been investigated theoretically through diverse theoretical methods, leading to the prediction of multiple potential pairing symmetries, including several different \(s_{\pm}\)-wave pairings [5; 6; 7; 8; 9; 10; 11; 12; 13], the \(d\)-wave pairing [15; 24], and the coexistence/competition of the \(s\)-wave and \(d\)-wave pairing symmetries [21; 22]. This indicates that the chosen pairing symmetry could depend on the particular model and approximation considered. Hence, further experimental evidence is critical for a definitive determination of the pairing symmetry. 
Theoretically, the Bogoliubov-de Gennes (BdG) equations based on real-space self-consistent calculations provide a powerful method for obtaining the pairing symmetry of unconventional superconductors. In the past, this method was widely used to study the local electronic structure of high-temperature superconductors, and the pairing symmetry was correctly obtained through self-consistent calculations [30; 31; 32; 33]. The local electronic structure is also a valuable probe of the pairing symmetry of superconductors. It can be explored using scanning tunneling microscopy experiments, which are effective in discerning the pairing symmetry by measuring the local electronic structure in proximity to an impurity or magnetic vortex [34]. Theoretically, such local electronic structures can be examined through the calculation of the local density of states (LDOS). Past efforts to investigate the impurity and vortex-induced bound states have indeed provided valuable insights into the electronic structure and pairing symmetry of various unconventional superconductors [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. In this paper, we provide a theoretical investigation into the LDOS in the presence of a single impurity or magnetic vortices in the superconducting La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material, utilizing the BdG equations and a bilayer two-orbital model. Notably, our self-consistent calculations predict an extended \(s\)-wave pairing symmetry as the most favorable outcome, based on which we calculate the LDOS and discuss the impurity- and vortex-induced low-energy states. Furthermore, the impact of impurities is analyzed using the \(T\)-matrix method, offering a numerical analysis of the origin of the impurity-induced low-energy states. We propose that different pairing symmetries can be distinguished through the impurity and vortex-induced bound states. The remainder of this paper is structured as follows: In Section II, we introduce the model and outline the pertinent formalism. Section III discusses the numerical results obtained from the self-consistent calculation, focusing on the impurity and vortex states in the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) compound. Section IV presents the numerical results from non-self-consistent calculations, offering a comparison to the self-consistent results from Section III. Finally, a concise summary is provided in Section V. ## II Model and Formalism We start with a bilayer two-orbital model to represent the normal state bands. Then we introduce the superconducting pairing term, derived from a phenomenological model that considers both nearest-neighbor intra-layer and onsite inter-layer attractive interactions, qualitatively consistent with the bilayer \(t-J\)-type model [7]. As a result, the comprehensive model can be represented as follows: \[H = \sum_{l,l^{\prime}}\sum_{ij\tau\tau^{\prime}\sigma}t^{l,l^{\prime}}_{ij\tau\tau^{\prime}}c^{l\dagger}_{i\tau\sigma}c^{l^{\prime}}_{j\tau^{\prime}\sigma}+V_{i}\sum_{\tau\sigma}c^{1\dagger}_{i_{0}\tau\sigma}c^{1}_{i_{0}\tau\sigma} \tag{1}\] \[+\sum_{l,l^{\prime}}\sum_{ij\tau}(\Delta^{l,l^{\prime}}_{ij\tau}c^{l\dagger}_{i\tau\uparrow}c^{l^{\prime}\dagger}_{j\tau\downarrow}+H.C.).\] Here \(l\) and \(l^{\prime}\) are layer indices, \(\tau\) and \(\tau^{\prime}\) are orbital indices, \(i\) and \(j\) are site indices, and \(\sigma\) is the spin index. The normal state band parameters are taken from Ref. [7]. 
To study the point impurity effect, we consider a point impurity at the site \(i_{0}\) of layer 1, with \(V_{i}\) being the impurity scattering strength. In the presence of the magnetic field, the hopping constants \(t^{l,l^{\prime}}_{ij\tau\tau^{\prime}}\) are replaced as \(t^{l,l^{\prime}}_{ij\tau\tau^{\prime}}\Rightarrow t^{l,l^{\prime}}_{ij\tau\tau^{\prime}}\mathrm{exp}(i\phi_{ij})\), where \(\phi_{ij}=\frac{\pi}{\Phi_{0}}\int_{\mathbf{r}_{j}}^{\mathbf{r}_{i}}\mathbf{A}(\mathbf{r})\cdot d\mathbf{r}\). \(\Phi_{0}\) is the superconducting flux quantum and \(\mathbf{A}\) is the vector potential in the Landau gauge with \(\mathbf{A}=(-By,0,0)\). The Hamiltonian can be diagonalized by solving the BdG equations, \[\sum_{l^{\prime}j\tau^{\prime}}\left(\begin{array}{cc}H^{l,l^{\prime}}_{ij\tau\tau^{\prime},\sigma}&\Delta^{l,l^{\prime}}_{ij\tau}\delta_{\tau,\tau^{\prime}}\\ \Delta^{l,l^{\prime}\ast}_{ij\tau}\delta_{\tau,\tau^{\prime}}&-H^{l,l^{\prime}\ast}_{ij\tau\tau^{\prime},\bar{\sigma}}\end{array}\right)\left(\begin{array}{c}u^{l^{\prime}}_{nj\tau^{\prime}}\\ v^{l^{\prime}}_{nj\tau^{\prime}}\end{array}\right)=E_{n}\left(\begin{array}{c}u^{l}_{ni\tau}\\ v^{l}_{ni\tau}\end{array}\right), \tag{2}\] with \(H^{l,l^{\prime}}_{ij\tau\tau^{\prime},\sigma}=t^{l,l^{\prime}}_{ij\tau\tau^{\prime}}+V_{i}\delta_{ij}\delta_{i,i_{0}}\delta_{l,1}\). The pairing order parameters \(\Delta^{l,l^{\prime}}_{ij\tau}\) are determined self-consistently, \[\Delta^{l,l^{\prime}}_{ij\tau}=\frac{V^{l,l^{\prime}}_{ij\tau}}{4}\sum_{n}\left(u^{l}_{ni\tau}v^{l^{\prime}\ast}_{nj\tau}+u^{l^{\prime}}_{nj\tau}v^{l\ast}_{ni\tau}\right)\tanh\left(\frac{\beta E_{n}}{2}\right), \tag{3}\] where \(l=l^{\prime}\) and \(l\neq l^{\prime}\) represent the intralayer and inter-layer pairing, respectively. The LDOS at the site \(i\) of the layer \(l\) can be calculated as, \[\rho^{l}_{i}(\omega)=\sum_{n\tau}\left[\left|u^{l}_{ni\tau}\right|^{2}\delta\left(\omega-E_{n}\right)+\left|v^{l}_{ni\tau}\right|^{2}\delta\left(\omega+E_{n}\right)\right], \tag{4}\] where the \(\delta\)-function is expressed as \(\delta(x)=\Gamma/[\pi(x^{2}+\Gamma^{2})]\) with \(\Gamma=0.005\). The effect of a single impurity can be studied theoretically via an alternative method, namely, the \(T\)-matrix method [35; 36; 37; 38]. When overlooking the suppression of the superconducting pairing amplitude by the impurity, the superconducting Hamiltonian [Eq. (1)] can be transformed into the momentum space with \(H=\sum_{\mathbf{k}}\Psi^{\dagger}(\mathbf{k})\hat{M}(\mathbf{k})\Psi(\mathbf{k})\). Here, \(\hat{M}(\mathbf{k})\) represents an \(8\times 8\) matrix while the column vector \(\Psi(\mathbf{k})\) is given by: \[\Psi(\mathbf{k})=\left(c^{1}_{\mathbf{k}1\uparrow},c^{1}_{\mathbf{k}2\uparrow},c^{2}_{\mathbf{k}1\uparrow},c^{2}_{\mathbf{k}2\uparrow},c^{1\dagger}_{-\mathbf{k}1\downarrow},c^{1\dagger}_{-\mathbf{k}2\downarrow},c^{2\dagger}_{-\mathbf{k}1\downarrow},c^{2\dagger}_{-\mathbf{k}2\downarrow}\right)^{T}. \tag{5}\] We define the bare Green's function matrix for a clean system, with the elements being defined as, \[G_{ij0}(\mathbf{k},\omega)=\sum_{n}\frac{u_{in}(\mathbf{k})u^{\ast}_{jn}(\mathbf{k})}{\omega-E_{n}(\mathbf{k})+i\Gamma}, \tag{6}\] where \(u_{n}(\mathbf{k})\) and \(E_{n}(\mathbf{k})\) are the eigenvectors and eigenvalues of the matrix \(\hat{M}(\mathbf{k})\), respectively. 
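To make the numerical ingredient of Eq. (6) concrete, the following is a minimal sketch (our illustration, not the authors' code) of how the bare Green's function can be assembled on a momentum grid via eigen-decomposition of the BdG matrix; the function `bdg_matrix`, standing in for the bilayer two-orbital model with the band parameters of Ref. [7], is a placeholder assumption.

```python
import numpy as np

def bare_greens_function(bdg_matrix, kgrid, omega, Gamma=0.005):
    """Assemble G0_{ij}(k, w) = sum_n u_{in}(k) u*_{jn}(k) / (w - E_n(k) + i*Gamma)
    for every k in `kgrid`.  `bdg_matrix(k)` must return the 8x8 Hermitian
    BdG matrix M(k); here it is only a placeholder for the model of the text."""
    G0 = np.zeros((len(kgrid), 8, 8), dtype=complex)
    for ik, k in enumerate(kgrid):
        E, U = np.linalg.eigh(bdg_matrix(k))      # eigenvalues E_n(k), eigenvectors as columns of U
        weights = 1.0 / (omega - E + 1j * Gamma)  # resolvent factor per eigenstate
        # G0(k, w) = sum_n weights[n] * U[:, n] outer conj(U[:, n])
        G0[ik] = (U * weights) @ U.conj().T
    return G0

# The local bare Green's function used in the T-matrix treatment is the
# Brillouin-zone average G0(r, r, w) = (1/N) * sum_k G0(k, w).
```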
Considering a single impurity at the site \((0,0)\) of layer 1, the \(T\)-matrix can be expressed as, \[\hat{T}\left(\omega\right)=\frac{\hat{U}}{\hat{I}-\hat{U}\hat{G}_{0}\left( \omega\right)}, \tag{7}\] where \(\hat{I}\) is an \(8\times 8\) identity matrix, and \(U\) is a diagonal matrix with four nonzero elements: \(U_{11}=U_{22}=V_{i}\) and \(U_{55}=U_{66}=-V_{i}\). The full Green's function can be expressed as \[\hat{G}\left(\mathbf{r},\mathbf{r}^{\prime},\omega\right)=\hat{G_{0}}\left( \mathbf{r},\mathbf{r}^{\prime},\omega\right)+\hat{G_{0}}\left(\mathbf{r},0, \omega\right)\hat{T}\left(\omega\right)\hat{G_{0}}\left(0,\mathbf{r}^{\prime}, \omega\right), \tag{8}\] where \(G_{0}\left(\mathbf{r},\mathbf{r}^{\prime},\omega\right)\) is the Fourier transform of \(G_{0}\left(\mathbf{k},\omega\right)\) with \(G_{0}\left(\mathbf{r},\mathbf{r}^{\prime},\omega\right)=\frac{1}{N}\sum_{k}G_{0} \left(\mathbf{k},\omega\right)e^{i\mathbf{k}\cdot\left(\mathbf{r}-\mathbf{r}^{ \prime}\right)}\). The LDOS at the layer \(l\) and the site \(\mathbf{r}\) can be calculated by the full Green's function \[\rho^{l}(\mathbf{r},\omega) = -\frac{1}{\pi}\mathrm{Im}\sum_{p=1}^{2}[G_{m+p,m+p}(\mathbf{r}, \mathbf{r},\omega)+ \tag{9}\] \[G_{m+p+4,m+p+4}(\mathbf{r},\mathbf{r},-\omega)],\] with \(m=2(l-1)\). In the present work, with the BdG method, the real space system size of each layer is chosen as \(N=40\times 40\). For the exploration of the magnetic vortex effect, we set \(B=2\Phi_{0}/N\) and \(V_{i}=0\). In studying the single impurity effect, we designate \(B=0\) and \(V_{i}=10\). The calculation of LDOS involves the consideration of a \(30\times 30\) supercell. We have performed numerical tests to confirm our general conclusions. The results remain qualitatively consistent across various system sizes (ranging from \(20\times 20\) to \(40\times 40\)) and impurity strengths (with \(3\leq V_{i}\leq 50\)). ## III Self-consistent calculation results We begin with a discussion of the most viable pairing symmetry. This is determined based on a self-consistent calculation, taking into account an attractive intra-layer interaction among nearest neighbors, denoted as \(V_{ijr}^{l,l}\), and an onsite attractive inter-layer interaction represented by \(V_{iir}^{1,2}\). These terms contribute to the generation of superconducting pairing. Numerical validation confirms the robustness and stability of the \(s\)-wave pairing symmetry solution with \(\Delta_{i,i+\hat{x}\tau}^{l,l}=\Delta_{i,i+\hat{y}\tau}^{l,l}\), regardless of the parameters of \(V_{iir}^{1,2}\), \(V_{ijr}^{l,l}\) and the initial input parameters. The parameters chosen for the ensuing results are \(V_{ii1}^{1,2}=V_{ii2}^{1,2}=0.4\) and \(V_{ij1}^{l,l}=V_{ij2}^{l,l}=0.8\), with \(i\) and \(j\) denoting nearest-neighbor sites. Note that the choice of pairing potentials (\(V_{ijr}^{l,l^{\prime}}\)) is for illustration purposes only. We have numerically verified that the presented results remain qualitatively similar when considering different pairing potentials. The conclusions are robust and applicable to different situations, as they are not limited to a specific set of parameters. Our numerical results also reveal that superconducting pairing within the \(d_{z^{2}}\) orbital is dominant, underpinned by a significant interlayer pairing term. The interlayer order parameters exhibit an inverse sign compared to the intralayer parameters. Interlayer order parameters in the \(d_{x^{2}-y2}\) channel are relatively minimal. 
These findings align qualitatively with preceding mean-field self-consistent calculations based on the \(t-J\)-type model [7]. ### Impurity states We now study the single impurity effect and discuss the possible impurity-induced in-gap states by considering a point impurity at the site \((20,20)\) of layer 1. To perform this analysis, we solve the BdG equations [Eq. (2)] and determine the superconducting order parameters self-consistently [Eq. (3)]. The site-dependent intralayer superconducting gap amplitudes, \(\Delta_{\tau}^{l,l}(i)\), are specified as \(\Delta_{\tau}^{l,l}(i)=\frac{1}{4}\mid\sum_{j}\Delta_{ij\tau}^{l,l}\mid\), with \(j\) running over the four nearest-neighbor sites of the site \(i\). The spatial distribution of the order parameter amplitudes is plotted in Fig. 1. As is seen, the pairing amplitudes in the \(d_{z^{2}}\) channel significantly exceed those within the \(d_{x^{2}-y^{2}}\) channel. The amplitudes are suppressed almost to zero at the impurity site itself, but quickly recover the bulk value away from the impurity site. Fig. 2 illustrates the effect of an impurity on the LDOS spectra. For a strong point impurity, the electronic structure at the impurity site is significantly suppressed, resulting in zero LDOS. A closer examination of the LDOS spectra near the point impurity is presented in Figs. 2(a) and 2(b). These depict the spectra at site (20,20) of layer 2 and site (20,21) of layer 1, respectively. As a point of comparison, we also include the LDOS spectrum in the system bulk, where \(V_{i}=0\), represented as dashed lines. In the system bulk, the LDOS spectrum manifests a two-gap structure, consisting of two distinct gaps, with superconducting coherence peaks appearing at energies \(\pm 0.05\) and \(\pm 0.15\). This two-gap feature is due to the multi-band effect. Upon introducing an impurity, there is a noticeable upsurge in the low energy LDOS at the site above the impurity [site (20,20) on layer 2], indicating impurity-induced in-gap states. A peak structure emerges at positive energy, close to the first gap edge. At site (20,21) on layer 1, the effect of the impurity is significantly mitigated, with no evident peak structure resulting from the impurity. We have checked numerically that, when moving further away from the impurity site, the LDOS spectrum aligns almost precisely with the system bulk spectrum. By neglecting the impurity-induced suppression of the order parameter and assuming that the pairing order parameters are uniform, the point impurity effect can be studied with the \(T\)-matrix method [35; 36; 37; 38]. Figure 1: The site-dependent gap amplitudes [\(\Delta_{\tau}^{l,l^{\prime}}(i)\)] with a point impurity at the site (20, 20) of layer 1. The upper and lower panels depict the pairing amplitudes within the \(d_{x^{2}-y^{2}}\) channel and \(d_{z^{2}}\) channel, respectively. The panels on the left represent the intralayer pairing amplitudes for layer 1, while the middle panels depict the same for layer 2. On the right, the panels illustrate the interlayer pairing amplitudes. Figure 2: The LDOS as a function of the energy \(\omega\) near the impurity site with the impurity strength \(V_{i}=10\) (the solid lines) and \(V_{i}=0\) (the dashed lines). (a) The LDOS at the upper site of the impurity site [site \((20,20)\) of layer 2]. (b) The LDOS at the right site of the impurity site [site \((21,20)\) of layer 1]. The \(T\)-matrix 
method is broadly accepted for its ability to capture the fundamental physics and accurately depict the single impurity effect on a qualitative level. We also employ the \(T\)-matrix method to examine the impurity effect and contrast the results with those obtained from the BdG technique. Numerical results of the LDOS spectra in the proximity to an impurity site, generated through the \(T\)-matrix method, are presented in Fig. 3. These results are qualitatively consistent with those generated through the BdG technique. As previously discussed [37], the primary distinction between the self-consistent BdG technique and the \(T\)-matrix method, in terms of the impurity effect, lies in the inclusion or exclusion of self-energy corrections induced by the impurity. Generally, these corrections are proportional to the impurity concentration and can be reasonably disregarded when studying the point impurity effect. The advantage of the \(T\)-matrix method lies in its ability to offer an in-depth understanding of in-gap states in LDOS spectra through analysis of the denominator of the \(T\)-matrix. This can be achieved by defining the complex function, \(A(\omega)\), as \(A(\omega)=\text{Det}[\hat{I}-\hat{U}\hat{G}_{0}\left(\omega\right)]\), where \(\text{Det}(\hat{M})\) denotes the determinant of the matrix \(\hat{M}\). From \(A(\omega)\), the bare Green's function \(\hat{G}_{0}\) contributes to the imaginary part [\(\text{Im}A(\omega)\)]. In the superconducting state, due to the existence of the superconducting gap, \(\text{Im}A(\omega)\) is typically small at low energies. If the real part of \(A(\omega)\) is minimal or crosses the zero axis, it results in a large \(T\)-matrix. This subsequently causes low-energy in-gap states. The real and imaginary parts of \(A(\omega)\), depicted as a function of energy \(\omega\), are displayed in Fig. 4. These provide a comprehensive understanding of the LDOS spectra. Specifically, when the real and imaginary parts of \(A(\omega)\) are small at low energies, it leads to impurity-induced, low-energy states. Furthermore, the real part of \(A(\omega)\) reaches its minimum at a specific low energy of approximately \(\pm 0.05\). At this energy, the LDOS spectra near the impurity will significantly amplify, and a peak structure will emerge, as seen in Figs. 2 and 3. ### Magnetic vortex states We turn to study the magnetic vortex states. Fig. 5 shows the self-consistent results of the order parameter amplitudes in the presence of a magnetic field. It can be seen that the superconducting gap decreases as the magnetic field intensifies. In this situation, the interlayer pairing in the \(d_{x^{2}-y^{2}}\) channel is insignificantly small and is therefore not showcased in this study. In both the intralayer and interlayer pairing, two vortices are seen to emerge, with their centers located at sites \((10,11)\) and \((30,29)\). The order parameter ceases at the vortex center, gradually increases outward, and returns to the standard values at the vortex edges. The radius of the vortex represents the scale of the superconducting coherence length which depends heavily on the gap amplitudes. As the superconducting gap amplifies, the coherence length reciprocally diminishes. As previously mentioned, in the system's bulk, the order parameter magnitudes in the \(d_{z^{2}}\) orbital significantly overweight those in the \(d_{x^{2}-y^{2}}\) channel. Therefore, the superconducting coherence length should be smaller in the \(d_{z^{2}}\) channel. As a result, and as seen in Figs. 
5(a)-(c), the vortex region in the \(d_{x^{2}-y^{2}}\) channel is more extensive than that of the \(d_{z^{2}}\) channel. Figure 3: Similar to Fig. 2 but based on the T-matrix method. Figure 5: Intensity plots of the order parameter amplitudes in the presence of the vortices. (a) The intralayer pairing amplitude in the \(d_{x^{2}-y^{2}}\) orbital channel. (b) The intralayer pairing amplitude in the \(d_{z^{2}}\) orbital channel. (c) The interlayer pairing amplitude in the \(d_{z^{2}}\) orbital channel. Now let us discuss the electronic structure in the presence of the vortices. The LDOS spectra, as a function of energy \(\omega\) from the bulk to the vortex center, are visualized in Fig. 6(a). As evident, a notable in-gap resonant peak at a specific negative energy (approximately -0.09) stands out. In addition to the strong in-gap peak at negative energy, the LDOS features a broad zero-energy bump structure; together they form a peak-hump configuration at the vortex center. When moving away from the vortex center, the peak intensity dramatically decreases. Ultimately, the peak and hump structures fade, allowing the LDOS to revert to its standard bulk feature upon reaching a site that is three lattice constants distant from the vortex center. Further evidence of the vortex states can be demonstrated through the exploration of the spatial variation of the LDOS spectra at the hump energy (\(\omega=0\)) and the peak energy (\(\omega=-0.09\)), as depicted in Figs. 6(b) and 6(c). For both energies, the greatest LDOS is observed at the vortex center site. At zero energy, the LDOS spectrum presents a broad structure, which is mirrored by a broad spatial distribution of the zero-energy states. In contrast, at the resonant peak energy, the LDOS spectrum exhibits a sharp peak. Consequently, the spatial distribution of the vortex states at this energy is also sharp, predominantly emerging at the vortex center site. ## IV Non-self-consistent calculation results Previously, different pairing symmetries were proposed by other groups using different methods. In Ref. [5], an on-site \(s_{\pm}\) pairing was suggested based on the functional renormalization group calculation, where the order parameter phases are different for different orbitals. The \(d_{x^{2}-y^{2}}\)-wave pairing symmetry was also proposed based on the spin fluctuation scenario [15]. However, these two pairing symmetries cannot be obtained with the self-consistent mean-field method. To study the local electronic structure for these two pairing symmetries, a non-self-consistent method is used. For investigating the magnetic vortex effect, the superconducting phases with two vortices are set by hand. These can be obtained by solving a conventional superconductor in the presence of a magnetic field self-consistently using the BdG technique. The \(T\)-matrix method is employed to study the single impurity effect. By exploring the local electronic structure for the \(s_{\pm}\) and \(d_{x^{2}-y^{2}}\) pairing symmetries in a non-self-consistent way, we can gain insights into the characteristics of these symmetries and their influence on the superconducting properties. The magnetic vortex effect and single impurity effect can provide valuable information about the pairing symmetries that may not be directly obtainable through self-consistent mean-field methods. We first present the numerical results of the LDOS spectra for the \(d_{x^{2}-y^{2}}\) pairing symmetry. The impurity effect is displayed in Fig. 7(a). 
In the system bulk, the bare LDOS spectrum also has a two-gap structure, similar to the case of the extended \(s\)-wave pairing symmetry shown in Fig. 2. In the presence of the impurity, a sharp zero energy resonance state emerges. This result is consistent with the single impurity effect for the \(d\)-wave pairing symmetry reported previously [35]. The LDOS spectra in the presence of a magnetic field are plotted in Fig. 7(b). At the vortex center, which is the site (10,30), a broad peak appears near the zero energy. This finding provides further insights into the behavior of the \(d_{x^{2}-y^{2}}\) pairing symmetry and its potential influence on the superconducting properties of the material. We next present the numerical results of the LDOS spectra for the on-site \(s_{\pm}\) pairing symmetry. Following the guidelines from Ref. [5], both intralayer and interlayer pairing are considered. In this case, the signs of intralayer pairing in the \(d_{z^{2}}\) and \(d_{x^{2}-y^{2}}\) channels are opposite. Pairing amplitudes for distinct channels are sourced from Ref. [5]. Fig. 8(a) shows the LDOS spectrum near an impurity. A slight increase in the LDOS at lower energies indicates the induction of low-energy states by the impurity, a characteristic significantly different from the extended \(s\)-wave pairing. Notably, here the impurity does not induce a peak. Near the gap edge, the intensity of the LDOS spectrum decreases. In the presence of a magnetic field, Fig. 8(b) depicts LDOS spectra ranging from the bulk to the vortex center. A pair of in-gap peaks are induced at the vortex center [site (10,11)], with their positions symmetrically surrounding the Fermi energy. As the distance from the vortex center increases, the peak intensities gradually reduce, and the spectrum eventually evolves to resemble the bulk LDOS. Figure 6: Numerical results of the LDOS in the presence of the magnetic field. (a) The LDOS as a function of the energy \(\omega\). (b) The intensity plot of the LDOS in the real space at the constant energy with \(\omega=0\). (c) The intensity plot of the LDOS in the real space at the constant energy with \(\omega=-0.09\). Figure 7: The LDOS as a function of the energy \(\omega\) for the \(d_{x^{2}-y^{2}}\) pairing symmetry. (a) The solid line is the LDOS at the site \((0,0)\) of layer 2 by putting an impurity at the site \((0,0)\) of layer 1 with the impurity strength \(V_{i}=10\). The dashed line is the bare LDOS without the impurity. (b) The LDOS spectrum from the bulk to the vortex center (from the bottom to the up). The characteristics of the local electronic structure near an impurity or crossing a vortex for both \(d\)-wave pairing symmetry and on-site \(s_{\pm}\) pairing symmetry are significantly different from those of the previously discussed extended \(s\)-wave pairing symmetry. Therefore, it can be inferred that the effects of a single impurity and the magnetic vortex states can indeed provide valuable insights for identifying the pairing symmetry of the superconducting La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material. ## V Summary In summary, a two-orbital (\(d_{x^{2}-y^{2}}\) and \(d_{z^{2}}\) orbitals) model featuring various channels of pairing potentials was employed to evaluate the physical attributes of the recently discovered high-T\({}_{c}\) superconducting compound La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Based on real space self-consistent BdG equations, we suggest that the extended \(s\)-wave pairing should be the most advantageous pairing symmetry. 
Furthermore, the calculations reveal that the superconducting gap magnitude within the \(d_{z^{2}}\) orbital is significantly larger than that within the \(d_{x^{2}-y^{2}}\) orbital. This indicates that the \(d_{z^{2}}\) orbital plays a crucial role in determining the pairing symmetry and has a notable impact on the stability of the superconducting state in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Both the BdG technique and the \(T\)-matrix method were utilized to evaluate the impurity effect. The computed LDOS spectra near the impurity site exhibited qualitative similarities. Additionally, the impurity induced in-gap low-energy states and led to the emergence of an in-gap peak at finite energy. The magnetic vortex effect was also explored via the self-consistent BdG equations. The LDOS spectrum at the vortex center displays a peak-hump structure, with a pronounced peak at negative energy and a broad hump at zero energy. We also extended our study to two other pairing symmetries, using a non-self-consistent method to investigate the point impurity effect and vortex states. The results obtained show significant qualitative differences compared to those based on the self-consistent calculation. Consequently, our numerical results suggest that the impurity effect and vortex states can be effectively utilized as a tool for investigating the pairing symmetry of the superconducting La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) material. ###### Acknowledgements. This work was supported by the NSFC (Grant No.12074130), the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515012340), the Key-Area Research and Development Program of Guangdong Province (Grant No.2019B030330001), and the CRF of Hong Kong (C6009-20G).
2301.04653
Optirank: classification for RNA-Seq data with optimal ranking reference genes
Classification algorithms using RNA-Sequencing (RNA-Seq) data as input are used in a variety of biological applications. By nature, RNA-Seq data is subject to uncontrolled fluctuations both within and especially across datasets, which presents a major difficulty for a trained classifier to generalize to an external dataset. Replacing raw gene counts with the rank of gene counts inside an observation has proven effective to mitigate this problem. However, the rank of a feature is by definition relative to all other features, including highly variable features that introduce noise in the ranking. To address this problem and obtain more robust ranks, we propose a logistic regression model, optirank, which learns simultaneously the parameters of the model and the genes to use as a reference set in the ranking. We show the effectiveness of this method on simulated data. We also consider real classification tasks, which present different kinds of distribution shifts between train and test data. Those tasks concern a variety of applications, such as cancer of unknown primary classification, identification of specific gene signatures, and determination of cell type in single-cell RNA-Seq datasets. On those real tasks, optirank performs at least as well as the vanilla logistic regression on classical ranks, while producing sparser solutions. In addition, to increase the robustness against dataset shifts, we propose a multi-source learning scheme and demonstrate its effectiveness when used in combination with rank-based classifiers.
Paola Malsot, Filipe Martins, Didier Trono, Guillaume Obozinski
2023-01-11T10:49:06Z
http://arxiv.org/abs/2301.04653v1
# Optirank: classification for RNA-Seq data with optimal ranking reference genes ###### Abstract Classification algorithms using RNA-Sequencing (RNA-Seq) data as input are used in a variety of biological applications. By nature, RNA-Seq data is subject to uncontrolled fluctuations both within and especially across datasets, which presents a major difficulty for a trained classifier to generalize to an external dataset. Replacing raw gene counts with the rank of gene counts inside an observation has proven effective to mitigate this problem. However, the rank of a feature is by definition relative to all other features, including highly variable features that introduce noise in the ranking. To address this problem and obtain more robust ranks, we propose a logistic regression model, optirank, which learns simultaneously the parameters of the model and the genes to use as a _reference set_ in the ranking. We show the effectiveness of this method on simulated data. We also consider real classification tasks, which present different kinds of distribution shifts between train and test data. Those tasks concern a variety of applications, such as cancer of unknown primary classification, identification of specific gene signatures, and determination of cell type in single-cell RNA-Seq datasets. On those real tasks, optirank performs at least as well as the vanilla logistic regression on classical ranks, while producing sparser solutions. In addition, to increase the robustness against dataset shifts, we propose a multi-source learning scheme and demonstrate its effectiveness when used in combination with rank-based classifiers. ## 1 Introduction RNA-Sequencing provides a way to probe the state of cells and tissues, by measuring the level of expression of thousands of genes. Since its introduction, RNA-Seq data has been used in differential expression analysis to highlight genes that are differentially expressed in two contrasting conditions (stereotypically healthy versus diseased), pointing towards potentially actionable drug targets and molecular mechanisms. In this context, normalization of RNA-Seq data has been extensively studied: we provide in the subsequent section an overview of common methods. However there is still a lack of consensus on normalization for classification tasks, which is crucial given the recent emergence of machine-learning assisted diagnosis based on RNA-Seq data (for instance Cascianelli et al., 2020; Shen et al., 2020; Tan and Cahan, 2019). In practice, among other normalization techniques also used in differential expression analysis, ranking normalization seems to have had particular success in combination with classification algorithms (Shen et al., 2020; Scialdone et al., 2015). Ranking normalization consists simply in replacing the raw read count of genes by their ranks amongst the read count of other genes for the same observation. Lausser et al. (2016) show a consistent improvement of score when ranking normalization is used. A potential weakness of ranking normalization is that the rank of an otherwise informative gene could be perturbed by genes whose expression fluctuate independently from the variable of interest. An obvious solution is to rank gene expressions only relative to a set of stable genes, which we call _reference set_. The difficulty is, however, in choosing this set. 
With this motivation, and to solve binary classification problems based on robust and adaptive ranks, we propose optirank, a logistic regression model based on ranks relative to a _reference set_, where the latter is learned at the same time as the weights of the logistic model. ### Overview of Normalization Techniques Multiple factors alter the number of reads obtained for a gene beyond the number of corresponding RNA molecules in the biological sample, the quantity of interest. For instance, the preservation technique of a biological sample and its temperature influence the natural degradation process of RNA; the length and the GC-content of an RNA molecule will affect its reading rate. Normalization aims at obtaining a representation of the data invariant to those aforementioned nuisance factors. Ideally, the remaining variation after normalization should be attributable solely to the phenomenon of interest (stereotypically, disease versus normal). In this way, the analyst avoids the risk of confounding the effect of the variable of interest with the effect of the nuisance variables, and the learned model is applicable to an external dataset obtained with a different configuration of nuisance factors. The simplest normalization, total count normalization, divides the raw read count on each gene by the total number of reads in the corresponding observation. However, highly expressed genes can induce a misleadingly low proportion of reads on other genes which are normally expressed. More refined techniques, such as TMM and DESeq counteract this artifact by using a more robust normalization factor. Quantile normalization matches the distribution of gene expressions to a reference distribution. Other methods correct the expression thanks to controls, such as housekeeping genes hypothesized to be constant across conditions or RNA spike-ins whose quantities are controlled during library preparation. For the interested reader, Dillies et al. (2012) and Evans et al. (2016) evaluate these methods in differential expression analysis with real and simulated data. To our knowledge, there is no similar review on the use of RNA-Seq normalization as a pre-processing step prior to classification. ## 2 Learning from ranks ### Ranking with respect to a _reference set_ Given a vector of gene expression levels \(\mathbf{x}\in\mathbb{R}^{d}\), the rank of gene \(j\), as understood in the common sense, can be obtained by counting the number of genes \(k\) (among all \(d\) genes) that are less expressed than gene \(j\) (i.e. \(x_{k}<x_{j}\)). We introduce a more general notion of rank, the rank \(r_{j}^{\Gamma}\) of \(x_{j}\)_relative to a reference set_\(\Gamma\), by: \[r_{j}^{\Gamma}=\sum_{k=1}^{d}\mathbf{1}[k\in\Gamma]\,\mathbf{1}[x_{j}>x_{k}] \tag{1}\] Note that we clearly recover the classical rank in the particular case where \(\Gamma=\{1,\ldots,d\}\). In this paper, we will encode the set \(\Gamma\) by a vector \(\boldsymbol{\gamma}\in\{0,1\}^{d}\) with \(\gamma_{j}=\mathbf{1}[j\in\Gamma]\,.\) We can thus express \(r_{j}^{\Gamma}\) as \(r_{j}^{\Gamma}=\sum_{k=1}^{d}\gamma_{k}\,\mathbf{1}[x_{j}>x_{k}].\) To simplify notations we will also drop the exponent \(\Gamma\), which will be defined from the context. 
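As an illustration, the following small numpy sketch (ours, not part of the original optirank implementation) computes the rank of Eq. (1) with respect to an arbitrary reference set encoded by \(\boldsymbol{\gamma}\), and recovers the classical rank when the reference set contains all genes:

```python
import numpy as np

def ranks_wrt_reference(x, gamma):
    """Rank of each entry of x relative to the reference set encoded by the
    binary vector gamma (Eq. 1): r_j = sum_k gamma_k * 1[x_j > x_k]."""
    x = np.asarray(x, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    # Pairwise strict comparisons 1[x_j > x_k], then a weighted count over
    # the genes selected by gamma.
    comparisons = (x[:, None] > x[None, :]).astype(float)
    return comparisons @ gamma

x = np.array([0.2, 1.5, 0.7, 3.0])
print(ranks_wrt_reference(x, np.ones(4)))    # classical ranks: [0. 2. 1. 3.]
print(ranks_wrt_reference(x, [1, 0, 1, 0]))  # ranks relative to genes {0, 2}: [0. 2. 1. 2.]
```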
Now, to define simultaneously all the ranks associated with the elements of a vector \(\mathbf{x}_{i}\) we introduce the _matrix of binary comparisons_\(\mathbf{C}^{i}\in\mathbb{R}^{d\times d}\), with \(\mathbf{C}^{i}_{jk}=\mathbf{1}[x_{ij}>x_{ik}]\,.\) The vector of ranks \(\mathbf{r}_{i}\) associated with a reference set encoded by the indicator vector \(\boldsymbol{\gamma}\) is computed as \(\mathbf{r}_{i}=\mathbf{C}^{i}\boldsymbol{\gamma}.\) We will use the index \(i\in\{1,\ldots,n\}\) to index the \(n\) observations in the data.1 Footnote 1: Note that the introduced notations adopt the convention of using a strict inequality in the definition of the rank. It is possible to obtain two other similar definitions of ranks by replacing \(\mathbf{1}[x_{j}>x_{k}]\) by \(\mathbf{1}[x_{j}\geq x_{k}]\) or by \(\mathbf{1}[x_{j}>x_{k}]+0.5\,\mathbf{1}[x_{j}=x_{k}]\). All definitions are obviously equivalent when there are no ties and when \(\Gamma=\{1,\ldots,d\}\). The case where there are ties and where these other notions of ranks become relevant is discussed in Appendix A. Figure 1: Example of ranking gene expression data with respect to a _reference set_. ### Classification: learning the reference set along with a linear model on adaptive ranks We consider a classical supervised learning problem in which input data is encoded as ranks \(\mathbf{r}_{i}\) of the form above, and the output data is a label \(y_{i}\in\mathcal{Y}\). For a loss function \(\ell:\mathcal{Y}\times\mathbb{R}\to\mathbb{R}\), we consider a regularized empirical risk minimization of the form \[\min_{\mathbf{w}\in\mathbb{R}^{d},\,b\in\mathbb{R}}\tfrac{1}{n}\sum_{i=1}^{n}\ell(y_{i},\mathbf{w}^{\top}\mathbf{r}_{i}+b)+\Omega(\mathbf{w}), \tag{2}\] where \(\Omega\) is a (typically convex) regularizer. By expressing \(\mathbf{r}_{i}\) as a function of \(\boldsymbol{\gamma}\) and of \(\mathbf{C}^{i}\) and minimizing the empirical risk with respect to \(\boldsymbol{\gamma}\) as well, we propose to learn the reference set \(\Gamma\). This leads to the following optimization problem: \[\min_{\boldsymbol{\gamma}\in\{0,1\}^{d},\,\mathbf{w}\in\mathbb{R}^{d},\,b\in\mathbb{R}}\tfrac{1}{n}\sum_{i=1}^{n}\ell(y_{i},\mathbf{w}^{\top}\mathbf{C}^{i}\boldsymbol{\gamma}+b)+\Omega(\mathbf{w}). \tag{3}\] Given that the integrality constraint on \(\boldsymbol{\gamma}\) makes the optimization problem combinatorial, we propose to relax the constraint \(\boldsymbol{\gamma}\in\{0,1\}^{d}\) to \(\boldsymbol{\gamma}\in[0,1]^{d}\). We empirically found that adding a cardinality constraint on the reference set, \(\boldsymbol{\gamma}^{\top}\mathbf{1}=s\), instead of penalizing with a sparsity-inducing regularization such as the \(\ell_{1}\)-norm, was useful to obtain fast convergence. We suppose that this constraint removes an indeterminacy of scale between \(\mathbf{w}\) and \(\boldsymbol{\gamma}\) which appears once the integrality constraint is removed, given that \((\alpha\,\mathbf{w},\tfrac{1}{\alpha}\,\boldsymbol{\gamma})\) yields identical loss values for any scaling factor \(\alpha\); it would be implicitly removed as well by regularizers on \(\mathbf{w}\) and \(\boldsymbol{\gamma}\) but only at convergence. Given the relaxed constraint \(\boldsymbol{\gamma}\in[0,1]^{d}\), and in order to nonetheless obtain solutions with \(\boldsymbol{\gamma}\in\{0,1\}^{d}\), we propose to solve a sequence of problems of the form \[\min_{\boldsymbol{\gamma}\in[0,1]^{d},\,\mathbf{w}\in\mathbb{R}^{d},\,b\in\mathbb{R}}\tfrac{1}{n}\sum_{i=1}^{n}\ell(y_{i},\mathbf{w}^{\top}\mathbf{C}^{i}\boldsymbol{\gamma}+b)+\Omega(\mathbf{w})+\lambda_{p}\,\rho(\boldsymbol{\gamma})\qquad\text{s.t.}\quad\boldsymbol{\gamma}^{\top}\mathbf{1}=s, \tag{4}\] 
Given the relaxed constraint \(\boldsymbol{\gamma}\in[0,1]^{d}\), and in order to nonetheless obtain solutions with \(\boldsymbol{\gamma}\in\{0,1\}^{d}\), we propose to solve a sequence of problems of the form \[\min_{\boldsymbol{\gamma}\in[0,1]^{d},\,\mathbf{w}\in\mathbb{R}^{d},\,b\in \mathbb{R}}\tfrac{1}{n}\sum_{i=1}^{n}\ell(y_{i},\mathbf{w}^{\top}\mathbf{C}^{ i}\boldsymbol{\gamma}+b)+\Omega(\mathbf{w})+\lambda_{p}\,\rho(\boldsymbol{ \gamma})\qquad\text{s.t.}\quad\boldsymbol{\gamma}^{\top}\mathbf{1}=s, \tag{4}\] Figure 1: Example of ranking gene expression data with respect to a _reference set_. for an increasing sequence of regularization coefficients \(\lambda_{p}\), where \(\rho\) is the concave "push" penalty defined by \[\rho(\mathbf{\gamma})=\sum_{j=1}^{d}\gamma_{j}(1-\gamma_{j}), \tag{5}\] which effectively "pushes" the entries of \(\mathbf{\gamma}\) towards the extreme points of the hypercube \([0,1]^{d}\). More precisely, starting from \(\lambda_{p}=0\), the solution of each problem in the sequence is used as initialization to warm-start the next one, and the sequence is terminated when the solution satisfies the constraint \(\mathbf{\gamma}\in\{0,1\}^{d}\). It is obviously possible to only solve the above problem for \(\lambda_{p}=0\) and renounce the integrality constraints. Actually, the presence of the capped-simplex constraints \(\mathbf{\gamma}\in[0,1]^{d}\) and \(\mathbf{\gamma}^{\top}\mathbf{1}=s\) are themselves sufficient to obtain that, at the optimum, \(\mathbf{\gamma}^{*}\) tends to lie on a lower dimensional face of the capped-simplex, so that a significant fraction of its entries are exactly equal to \(0\) or \(1\). In preliminary experiments, we also did not observe significant differences whether integrality constraints are strictly enforced or not. The main motivations to nonetheless enforce them, are that (a) the additional computational effort is small compared to the cost of solving the problem with \(\lambda_{p}=0\), (b) the interpretability of the obtained ranks is otherwise lost, and (c) that it tends to produce slightly sparser solutions. ## 3 Optimization procedure Block proximal coordinate descent.When \(\lambda_{p}=0\), problem (4) is bi-convex. More precisely, the objective function to minimize, which we denote by \(\mathcal{O}(\lambda_{p};\mathbf{\gamma},\mathbf{w},b)\), is convex w.r.t. \((\mathbf{\mathrm{w}},b)\) when \(\mathbf{\gamma}\) is fixed and convex w.r.t. \((\mathbf{\gamma},b)\) when \(\mathbf{\mathrm{w}}\) is fixed. This suggests that a form of alternating descent algorithm can be used, such as block coordinate descent, in which blocks of variables, here \((\mathbf{\gamma},b)\) and \((\mathbf{w},b)\), are alternatively updated (see for example Tseng and Yun, 2009; Xu and Yin, 2013). In addition, since the regularizer \(\Omega\) is convex and potentially non-differentiable (e.g., elastic net regularization), descent w.r.t. \(\mathbf{w}\) can be suitably realized with proximal gradient steps, provided that the proximal operator for \(\Omega\) can be computed efficiently. For \(\gamma\), the optimization step satisfying the constraint \(\mathbf{\gamma}\in[0,1]^{d}\mid\mathbf{\gamma}^{\top}\mathbf{1}=s\) also involves a proximal operator: the projection on this constraint set called capped-simplex. We derive this proximal operator in Appendix B.1. Therefore, to solve each instance of problem (4), we use a block proximal coordinate descent algorithm (BPCD). Shi et al. 
(2014) propose a BPCD algorithm to solve bilinear logistic regression problems with convex regularizers. Our implementation is similar to theirs, except that we use different blocks and a simpler stopping criterion, which is better suited to the non-convexity of the push-penalty and to the implementation of the path-following algorithm described next. We detail our implementation in Appendix B.3. **Initialization.** Given that the optimization problem is non-convex, the initialization matters: for reasons of symmetry we set \(\mathbf{w}=\mathbf{0}\) and \(\boldsymbol{\gamma}=\frac{s}{d}\mathbf{1}\), i.e., the center of the capped-simplex. **Path-following algorithm.** Concerning the sequence of values of \(\lambda_{p}\) used for the problems of the form (4), given that the term \(\lambda_{p}\rho\) eventually creates local minima at all vertices of the capped-simplex, it is important not to increase \(\lambda_{p}\) too quickly, which could produce suboptimal solutions. We use the approach proposed by Zaslavskiy et al. (2009). In essence, we adjust the next \(\lambda_{p}\) to ensure a sufficiently small increase of the objective value \(\mathcal{O}\) for the previously found solution. The strategy is detailed in Appendix B.4. To summarize, we propose to solve each instance of problem (4) with a block proximal coordinate descent algorithm (BPCD), and to increase \(\lambda_{p}\) according to a rule inspired by the path-following algorithm in Zaslavskiy et al. (2009). This scheme is summarized in Algorithm 1. ``` \(\boldsymbol{\gamma}\leftarrow\frac{s}{d}\mathbf{1},\ \mathbf{w}\leftarrow\mathbf{0},\ \ \lambda_{p}=0\) \(\triangleright\) Initialization while \(\boldsymbol{\gamma}\notin\{0,1\}^{d}\) do \((\boldsymbol{\gamma},\mathbf{w},b)\leftarrow\operatorname*{argmin}_{\boldsymbol{\gamma}\in[0,1]^{d},\boldsymbol{\gamma}^{\top}\mathbf{1}=s,\,\mathbf{w}\in\mathbb{R}^{d},\,b\in\mathbb{R}}\mathcal{O}(\lambda_{p};\boldsymbol{\gamma},\mathbf{w},b)\) \(\triangleright\) solved with BPCD \(\lambda_{p}\leftarrow\lambda_{p}^{\prime}\), with \(\mathcal{O}(\lambda_{p}^{\prime};\boldsymbol{\gamma},\mathbf{w},b)-\mathcal{O}(\lambda_{p};\boldsymbol{\gamma},\mathbf{w},b)=\epsilon\). endwhile return \(\boldsymbol{\gamma},\mathbf{w},b\) ``` **Algorithm 1** Optimization Procedure **Note.** When \(\lambda_{p}>0\), although \(\rho\) is non-convex, problem (4) can still be formulated as a multi-convex problem, amenable to block coordinate descent (see Appendix B.2 for a derivation). ### Computing the product \(\mathbf{C}^{i}\boldsymbol{\gamma}\) with complexity \(O(d)\). A priori, the computation of the matrix-vector product \(\mathbf{C}^{i}\boldsymbol{\gamma}\) involves \(d^{2}\) multiplications. But it is clear that to compute classical ranks it is sufficient to sort the data, which can be done with a complexity of \(O(d\log d)\). Since \(\mathbf{C}^{i}\boldsymbol{\gamma}\) is none other than the vector of ranks with respect to the reference set \(\Gamma\), it seems reasonable to think that the same complexity can be achieved, and this is indeed the case. Assuming that there are no ties, and if \(\sigma_{i}\) is a permutation sorting the entries of \(\mathbf{x}_{i}\), i.e. such that \(x_{i,\sigma_{i}(1)}<\cdots<x_{i,\sigma_{i}(d)}\), the inner sum can be calculated recursively: applying the same permutation to \(\boldsymbol{\gamma}\), we compute, from \(j=1\) to \(d\), \[r_{i,\sigma_{i}(j)}\gets r_{i,\sigma_{i}(j-1)}+\gamma_{\sigma_{i}(j-1)}, \tag{6}\] with, by convention, \(r_{i,\sigma_{i}(0)}=0\). 
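In numpy, this recursion amounts to an exclusive cumulative sum of the permuted \(\boldsymbol{\gamma}\); a compact sketch of the computation (our illustration, assuming no ties) is given below and agrees with the direct \(O(d^{2})\) formula of Eq. (1).

```python
import numpy as np

def fast_ranks_wrt_reference(x, gamma):
    """Compute r = C^i @ gamma in O(d log d) via sorting (Eq. 6), assuming no ties."""
    x = np.asarray(x, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    sigma = np.argsort(x)                                          # permutation sorting x increasingly
    r_sorted = np.concatenate(([0.0], np.cumsum(gamma[sigma])[:-1]))  # Eq. (6) recursion
    r = np.empty_like(r_sorted)
    r[sigma] = r_sorted                                            # undo the permutation
    return r

# Same example as above: matches the direct pairwise-comparison computation.
x = np.array([0.2, 1.5, 0.7, 3.0])
gamma = np.array([1.0, 0.0, 1.0, 0.0])
assert np.allclose(fast_ranks_wrt_reference(x, gamma), [0.0, 2.0, 1.0, 2.0])
```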
The complexity is therefore dominated by the sorting operation and is thus \(O(d\log d)\). Moreover, sorting the data needs to be done only once at the beginning of the optimization, so that effectively the number of operations needed to compute \(\mathbf{C}^{i}\boldsymbol{\gamma}\) each time \(\boldsymbol{\gamma}\) is updated is \(O(d).\) The exact same reasoning applies to the computation of \(\mathbf{w}^{\top}\mathbf{C}^{i}\), which is therefore also \(O(d)\), once the inverse of \(\sigma_{i}\) is computed. With the alternative definitions of rank and in the presence of ties, the calculations are more subtle, but the same complexity can be obtained. They are detailed in Appendix A.2. ## 4 Benchmark: competing classification algorithms We will apply the proposed methodology to solve a number of binary classification problems on first synthetic and then real RNA-Seq data. To serve as a basis of comparison, we choose standard logistic regression or random forest classifiers that may or may not rely on a rank representation. **Optirank: a sparse rank-based logistic regression with learnable reference set.** To solve binary classification tasks, we propose optirank, a logistic regression model on rank-transformed data, with ranks computed with respect to a learnable reference set. Our model optirank is fitted within the framework introduced in Section 2.2, by solving the optimization problem (3) with a logistic loss \(\ell(y,a)=-y\log\left(S(a)\right)-(1-y)\log\left(1-S(a)\right)\), where \(S(x)\) denotes the sigmoid function \(S(x)=1/(1+e^{-x})\), \(y\in\mathcal{Y}=\{0,1\}\) being the binary label, and with an elastic net regularization \[\Omega(\mathbf{w})=\lambda_{1}\ \left\|\mathbf{w}\right\|_{1}+\lambda_{2}\ \left\|\mathbf{w}\right\|_{2}^{2},\] to induce sparsity in the set of features whose rank is relevant to the classification task. In fact, given that we consider RNA-Seq data, and that there are potentially significant correlations between genes, the use of the elastic net, with a Euclidean regularization on top of the Lasso term, aims at stabilizing feature selection (see Zou and Hastie, 2005). **Competing algorithms.** We will compare our optirank algorithm with classical logistic regression (lr) equipped with the same elastic net regularization \(\Omega\), and logistic regression on rank-transformed data (rank-lr), still with the same regularization. In addition, in tasks involving real data, we will also compare our method to the random forest (rf) and to the SingleCellNet algorithm (SCN) proposed by Tan and Cahan (2019) for the cell-typing tasks that we consider in our benchmark (see Section 6). The SCN algorithm consists of a pre-processing pipeline, which identifies gene pairs with informative differential expression and transforms the RNA-Seq data into a binary matrix indicating the within-pair ordering, followed by a random forest classifier. **Implementation details.** Since the stopping criterion in the scikit-learn (Pedregosa et al., 2011) implementation of logistic regression differs from the one in optirank, we re-optimize the weights \(\mathbf{w}\) learned by optirank with the logistic regression of scikit-learn, to ensure that this discrepancy does not affect the comparison on real datasets. Additional details about the classifiers can be found in Appendix C. 
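For concreteness, here is a hedged sketch (not the authors' code) of the rank-lr baseline as a scikit-learn pipeline: a full-rank transform applied independently to each observation, followed by an elastic-net-penalized logistic regression. The regularization values below are placeholders, and `rankdata` uses average ranks for ties, one of the conventions discussed in Appendix A.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Replace each expression profile by the ranks of its genes (classical ranks,
# i.e. reference set = all genes), computed row by row.
rank_transform = FunctionTransformer(
    lambda X: np.apply_along_axis(rankdata, 1, X)
)

# Elastic-net logistic regression on the rank-transformed data (rank-lr).
rank_lr = make_pipeline(
    rank_transform,
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
# Usage: rank_lr.fit(X_train, y_train); rank_lr.predict(X_test)
```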
## 5 A synthetic data distribution model with unstable ranks In order to illustrate the potential and limitations of optirank, we present in the following a synthetic example in which the robustness of the rank normalization is challenged. We test whether optirank is effectively able to overcome the difficulty of the task. ### Model As mentioned in Lausser et al. (2016), the strength of rank-normalization can be linked to the fact that ranks are invariant to observation-wise monotone perturbations of the gene expressions. Those perturbations can be easily envisioned, for example by considering that counts depend in a quadratic (and observation-dependent) fashion on the RNA content in the observation. By contrast, the following example focuses on a weakness of rank normalization that arises in the presence of a non-monotone, nonetheless simple, perturbation. Those perturbations occur in real data; for instance, Leek et al. (2010) report a case in which a group of genes shifts between different batches of samples. In our example, we consider a similar perturbation: we suppose that there is a group of perturbed genes, uninformative for the classification task at hand, that introduces noise in the ranks of relevant features by fluctuating in a coordinated and observation-wise manner. More precisely, the expression levels of the genes in this group, called \(P\), are all assumed to shift by an additive amount close to \(\Delta_{i}\) (unique to the observation \(i\)). This introduces noise in the ranking of the other stable genes: indeed, since the perturbed genes in \(P\) shift in a coordinated fashion, the rank of a stable gene is increased or decreased by an amount proportional to the number of genes in \(P\) that cross it. We propose the following synthetic model: the non-perturbed expression of a gene \(j\) in observation \(i\), \(\widetilde{X}_{ij}\), follows a Gaussian distribution centered on the typical expression value of gene \(j\), \(\mu_{j}\): \[\widetilde{X}_{ij}\sim\mathcal{N}(\mu_{j},\sigma^{2})\,, \tag{7}\] where \(\sigma\) defines the magnitude of the baseline noise in the data. We generate values for \(\mu_{j}\) by sampling uniformly on the expression interval, which we set to \([0,1]\). The expression of a perturbed gene in \(P\) is generated by adding to the unperturbed expression \(\widetilde{X}_{ij}\) an observation-wise shift \(\Delta_{i}\) that we sample from a centered Gaussian distribution \(\mathcal{N}(0,\tau^{2})\), with \(\tau\) defining the typical magnitude of the perturbation. Summing all contributions, the expression of a gene \(j\) in observation \(i\), \(X_{ij}\), is generated as: \[X_{ij}=\begin{cases}\widetilde{X}_{ij}+\Delta_{i}&\text{if }j\in P\\ \widetilde{X}_{ij}&\text{otherwise.}\end{cases} \tag{8}\] Finally, we assign to each observation \(i\) a label generated from a simple logistic model on the ranks within the stable (non-perturbed) genes (forming the set \(S\)): \[\mathbb{P}(Y=1|X=\mathbf{x})=\sigma(\mathbf{w}^{\top}\mathbf{r}^{\Gamma}+b), \quad\text{with}\quad w_{j}=0,\;\forall j\notin S\quad\text{and}\quad\Gamma=S. \tag{9}\] The generation of the parameters \(\mathbf{w}\), \(\boldsymbol{\gamma}\), and \(b\) is detailed in Appendix D. 
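As an illustration, here is a minimal NumPy sketch (my own, not the paper's generator) of the model in equations (7)-(9); note that in (9), \(\sigma(\cdot)\) denotes the sigmoid, not the noise level \(\sigma\). The way \(\mathbf{w}\) and \(b\) are drawn below is an assumption on my part; the paper's exact parameter generation is described in Appendix D.

```python
import numpy as np

def simulate(n=1000, d=50, d_P=40, sigma=0.05, tau=0.2, seed=0):
    """Synthetic data of Section 5: the first d_P genes form the perturbed group P,
    the remaining d - d_P genes are the stable set S (used as reference set Gamma)."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0.0, 1.0, size=d)                   # typical expression of each gene
    X = rng.normal(mu, sigma, size=(n, d))               # eq. (7): baseline expression
    X[:, :d_P] += rng.normal(0.0, tau, size=(n, 1))      # eq. (8): coordinated shift of P
    S = X[:, d_P:]                                       # stable genes
    r = (S[:, None, :] < S[:, :, None]).sum(axis=-1)     # ranks within S, i.e. Gamma = S
    w = rng.normal(size=d - d_P)                         # assumed: random weights and a bias
    b = -np.median(r @ w)                                #          roughly centring the scores
    p = 1.0 / (1.0 + np.exp(-(r @ w + b)))               # eq. (9): logistic model on the ranks
    y = rng.binomial(1, p)
    return X, y

X, y = simulate()
print(X.shape, y.mean())
```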
### Results We benchmarked on this synthetic data three different classifiers based on logistic regression: the simple logistic regression (lr), the logistic regression on ranked data (rank-lr), and optirank, which can produce the rank-lr model as a particular case but offers the additional flexibility of restraining the reference set. Concerning the choice of regularization hyperparameters, note that we tuned only the ridge regularization coefficient \(\lambda_{2}\) (and \(s\) for optirank), setting the lasso penalty \(\lambda_{1}\) to zero for all three classifiers, given that we consider sample sizes \(n\) that are large compared to the number of variables \(d\). With default simulation parameters, optirank outperforms both rank-lr and lr (see Table 1). Moreover, optirank is empirically able to recover the true reference set. Indeed, the cosine similarity \(S_{C}\) between the vector \(\boldsymbol{\gamma}\) used to generate the data (see equation 9) and the one found by optirank is high: \(S_{C}=0.95\pm 0.03\). We investigated how this comparison evolves when we change the number of perturbed genes \(d_{P}\) or the dimension of the gene expression profile \(d\), while maintaining the ratio between the number of observations and the dimension \((n/d)\) fixed. Not surprisingly, the superiority of optirank over lr and rank-lr fades when the number of perturbed genes \(d_{P}\) becomes small relative to the dimension of the gene expression profile. Indeed, Figure 2 shows that when \(d\) increases while keeping the number of perturbed genes, \(d_{P}\), equal to \(40\), rank-lr and lr scores rise to the level of optirank (whose performance degrades slightly). In accordance, when the number of perturbed genes is increased while keeping the dimension of the gene expression to \(50\), the performance of rank-lr and lr degrades, while the score of optirank remains high. This outlines the fact that the perturbation on the usual ranks of informative genes becomes smaller as the ratio \(d_{P}/d\) decreases. Concerning the cosine similarity between the ground truth reference set and the reference set found by optirank, its dependence on the simulation parameters \(d_{P}\) and \(d\) is small: Figure 4 in Appendix D.3 shows that the overlap remains high. For the dependence of scores on \(\tau\), which defines the magnitude of the observation-wise shift \(\Delta_{i}\), see Figure 3 in Appendix D.3. In summary, this synthetic task exemplifies (a) how a non-monotone perturbation can effectively degrade the performance of the rank normalization, and (b) that optirank is robust to the kind of perturbation introduced thanks to its learnable ranking reference set. ## 6 Experiments on real RNA-Seq data In this section, we benchmark optirank on multiple biologically relevant classification tasks with real RNA-Seq data. These tasks present qualitatively different dataset shifts between train and test data. In Subsection 6.1, we evaluate how robust different algorithms are to these dataset shifts. To enhance robustness, \begin{table} \begin{tabular}{|l|c|} \hline & Test Balanced Accuracy (\%) \\ \hline Logistic Regression (lr) & \(78\pm 2\) \\ \hline Logistic Regression on Full Ranks (rank-lr) & \(80\pm 3\) \\ \hline optirank (our model) & \(96\pm 0.4\) \\ \hline \end{tabular} \end{table} Table 1: Test balanced accuracy (in %) for classifiers on the synthetic example of Section 5. Default simulation parameters were set to \(d=50\), \(d_{P}=40\), \(n=1000\), \(\tau=0.2\), and \(\sigma=0.05\). 
we then investigate, in Subsection 6.2, an alternative learning scenario in which multiple _source_ datasets are merged in the training data. In this manner, we hope that the algorithm better learns to be robust to the kind of perturbation it will encounter in the test data.

### Classification with different dataset shifts

#### 6.1.1 Classification tasks

**CUP-related tasks.** Cancer of unknown primary (CUP) occurs when a patient has a metastatic tumor whose organ of origin (where the primary tumor was located) cannot be determined. Considerable effort has been dedicated to developing classifiers that predict the organ of the primary tumor based on RNA-Seq data of the metastatic tissue, in the hope of personalizing and enhancing the treatment given to CUP patients (Laprovitera et al., 2021). However, an obstacle to efficiently building such classifiers is the scarcity of RNA-Seq data of metastatic tumors. As a result, classifiers are often trained and tuned on datasets of primary tumors, which are biologically different from metastases, and metastatic samples are reserved for external classifier validation. This is precisely what we do in the three tasks TCGA, PCAWG, and met500. In the task TCGA, classifiers are trained on the TCGA dataset comprising primary tumors (The Cancer Genome Atlas Research Network et al., 2008), and tested on a held-out portion of the same dataset. In the task PCAWG, those same classifiers trained on TCGA were tested on an external dataset, PCAWG, which also contains primary tumors (The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium, 2020). Finally, the task met500 evaluates those classifiers on the met500 dataset (published by Leiserson et al., 2015), which comprises metastatic tumors of various origins. The latter two tasks represent different challenges in terms of dataset shift between train and test. The task PCAWG is subject to technical variation between two separately obtained datasets, which we call _batch-effect_. An additional difficulty in the met500 task is that a metastasis differs biologically from its primary tumor, resulting in a so-called _biologically induced dataset shift_.

**Cell-typing single-cell data.** Single-cell RNA-Seq (scRNA-Seq) provides a way to probe gene expression at cell-level resolution. A preliminary step in single-cell data analysis is the identification of the cell type of each observation/cell, i.e. cell typing. To achieve this automatically, Tan and Cahan (2019) propose to train a classifier on a labeled dataset comprising many cell types (commonly known as a cell atlas) and use it to infer the cell types in an unlabelled dataset. One difficulty is that the train and test scRNA-Seq data are potentially generated by different sequencing platforms. Tan and Cahan (2019) evaluate the robustness of their classifier SCN for various cross-platform train and test data combinations (tasks Baron-Murano, Baron-Segerstolpe, MWS-TM10x, MWS-TMfacs, TM10x-MWS, TM10x-TMfacs and TMfacs-MWS). We use the same tasks to investigate the usefulness of optirank in counteracting _dataset shifts induced by different sequencing platforms_ and compare with the original method SCN.

Figure 2: Dependence of the test-accuracy with respect to the dimension of the gene expression profile \(d\) when \(d_{P}=40\), and with respect to the number of perturbed genes \(d_{P}\) for \(d=50\). The shaded area shows the standard error across runs.
**BRCA task.** The task BRCA consists in predicting the presence of the BRCA mutation from the RNA-Seq data of breast primary tumors from the TCGA dataset. This task does not directly answer a real-life classification problem. However, the classifier coefficients could be used to obtain a (sparse) transcriptional signature for BRCA cancer.

We list all tasks, grouped by type of dataset shift (if any) between the train and test data, in Table 2. Additional details about the data sources are in Appendix E.1.

#### 6.1.2 Data pre-processing

For fairness of comparison and as a first step of dimensionality reduction, the data was reduced to include the 1000 genes occurring in informative pairs identified by SCN. Logged raw-cpm values were used as input for each classifier (see Appendix E.2 for additional details).

#### 6.1.3 Hyperparameter selection

The \(\ell_{1}\) and \(\ell_{2}\) regularization coefficients of optirank, lr and rank-lr, and the \(s/d\) ratio for optirank were tuned via internal cross-validation (i.e., using held-out data from the same source as the training data; see Appendix E.3.1, E.3.2 and E.4 for details). SingleCellNet was used with the parameters suggested by Tan and Cahan (2019), and the random forest (rf) was trained with 300 trees. Optimal hyperparameters were chosen with the one-standard-error rule (Hastie et al., 2015), which selects the sparsest model with a score within one standard error of the best one.

#### 6.1.4 Results and discussion

The performance in terms of balanced accuracy of all classifiers on the different classification tasks presented in Section 6.1.1 is summarized in Table 3. We highlight in bold the scores of classifiers that did not score significantly worse than the winning method, according to a paired Student's t-test. Interestingly, the advantage of logistic regression-based classifiers relying on a rank representation (optirank, rank-lr) over their non-ranked counterpart (lr) is not consistent, but rather depends on the task considered. Indeed, on the tasks TM10x-MWS and TMfacs-MWS, lr clearly surpasses its ranked counterparts, while on the tasks met500, Baron-Murano and TM10x-TMfacs we notice the opposite trend.

\begin{table} \begin{tabular}{|l|l|} \hline **Dataset-shift** & **Tasks** \\ \hline None (same distribution) & BRCA, TCGA \\ \hline Batch-effects & PCAWG \\ \hline Technical dataset-shift (Different sequencing platforms) & Baron-Murano, Baron-Segerstolpe, MWS-TM10x, MWS-TMfacs, TM10x-MWS, TM10x-TMfacs and TMfacs-MWS \\ \hline Biologically induced dataset-shift & met500 \\ \hline \end{tabular} \end{table} Table 2: Classification tasks by type of dataset-shift between train and test set.

This indicates that the rank representation confers additional robustness against dataset shifts only in some instances. A natural question is whether there is an advantage to ranking relative to a subset of genes compared to ranking among all of them. At first sight, this does not seem to be the case: the performances of optirank and rank-lr are similar. In a more thorough analysis, in which we carried out paired Student's t-tests for every task and every pair of classifiers (see Appendix E.5), only the task TM10x-MWS showed a significant difference between rank-lr and optirank, in favor of optirank. In summary, in these tasks, the ranking reference set found by optirank is not more robust than the classical full reference set. A possible explanation is that an optimal restricted reference set does not necessarily exist.
Contrarily to the synthetic example in Section 5 and to certain observations made on real data (see Leek et al., 2010), where a group of genes shift in one direction and perturb the ranking, in the tasks we consider, the dataset-shift could either be a monotone transformation or could shift genes in opposite directions. In both these scenarios, the ranks of certain stable genes would not be affected by the dataset shift. Alternatively, one could argue that even if such an optimal restricted reference set existed, the only way to discover it would be by inspecting the test dataset. We address this question in an additional experiment presented in the next section. Aside from the performance aspect, it is important to note that by definition, the classical ranking normalization is computed with the measurement of all (reference) genes. In contrast, optirank can find models that require only a small number of genes to be sequenced, which can be a decisive advantage in some medical applications. In the tasks we consider, solutions found by optirank require around 500 genes, half of the thousand used by rank-lr. However, it is worth noting that when the logistic regression performs well, there is no advantage of using optirank, as the latter tends to produce less sparse solutions (see Appendix E.5.6). It is worth noting that in general, the random forest rf performs worse than other classifiers, and that SCN does not provide a competitive advantage on single-cell typing tasks. \begin{table} \begin{tabular}{|l|c c c c c|} \hline & SCN & lr & optirank & rank-lr & rf \\ \hline BRCA & \(50.1\pm 0.3\) (4) & \(\mathbf{58\pm 2\) (1)} & \(52\pm 2\) (3) & \(\mathbf{53\pm 1\) (2)} & \(50.0\pm 0\) (5) \\ TCGA & \(92\pm 1\) (5) & \(\mathbf{99.14\pm 0.23\) (2)} & \(\mathbf{99.06\pm 0.26\) (3)} & \(\mathbf{99.3\pm 0.3\) (1)} & \(95.7\pm 0.5\) (4) \\ \hline PCAWG & \(\mathbf{76.0\pm 4.5\) (2)} & \(\mathbf{76.2\pm 5.4\) (1)} & \(\mathbf{74.3\pm 5.4\) (4)} & \(\mathbf{74.4\pm 5.8\) (3)} & \(60\pm 4\) (5) \\ \hline met-500 & \(67\pm 4\) (4) & \(71\pm 4\) (3) & \(\mathbf{77\pm 4\) (1)} & \(\mathbf{75\pm 4\) (2)} & \(60\pm 4\) (5) \\ \hline Baron-Murano & \(89\pm 2\) (3) & \(87\pm 3\) (4) & \(\mathbf{93.0\pm 1.9\) (2)} & \(\mathbf{93.2\pm 1.9\) (1)} & \(62\pm 3\) (5) \\ Baron-Segerstolpe & \(\mathbf{93.5\pm 1.6\) (3)} & \(\mathbf{93.4\pm 2.2\) (4)} & \(\mathbf{93.6\pm 2.2\) (2)} & \(\mathbf{94.0\pm 1.8\) (1)} & \(60\pm 3\) (5) \\ MWS-TM10x & \(72\pm 2\) (4) & \(\mathbf{86\pm 2\) (1)} & \(\mathbf{84\pm 2\) (3)} & \(\mathbf{85\pm 2\) (2)} & \(53\pm 1\) (5) \\ MWS-TMfacs & \(70\pm 2\) (4) & \(\mathbf{86\pm 1\) (3)} & \(\mathbf{87.3\pm 1.6\) (2)} & \(\mathbf{87.4\pm 1.7\) (1)} & \(50.01\pm 0.01\) (5) \\ TM10x-MWS & \(51.5\pm 0.3\) (4) & \(\mathbf{72\pm 2\) (1)} & \(65\pm 2\) (2) & \(63\pm 2\) (3) & \(51.0\pm 0.2\) (5) \\ TM10x-TMfacs & \(80\pm 1\) (4) & \(91\pm 1\) (3) & \(\mathbf{92.3\pm 0.9\) (2)} & \(\mathbf{92.7\pm 0.9\) (1)} & \(58\pm 1\) (5) \\ TMfacs-MWS & \(51.0\pm 0.2\) (4) & \(\mathbf{71\pm 2\) (1)} & \(64\pm 1\) (3) & \(66\pm 2\) (2) & \(50.2\pm 0.1\) (5) \\ \hline \end{tabular} \end{table} Table 3: Average balanced accuracies in % (across folds and classes) of competing classifiers on the different tasks detailed in Section 4. Horizontal lines separate tasks with different types of dataset shift, from top to bottom: generalization to the same distribution, robustness to batch-effects, robustness to biologically-induced dataset shifts and robustness across sequencing platforms. 
The integer in parentheses denotes the rank of the classifiers in terms of average balanced accuracy (lower is better). Classifiers which did not score significantly worse than the best classifier according to a paired Student's t-test (with a level of 5 %) are highlighted in bold (see Appendix E.5 for additional details).

### Enhancing robustness with a multi-source learning scenario

In this experiment, we investigate whether, in the presence of dataset shifts, merging two _source_ datasets in the training set increases the classification accuracy on a third, external _target dataset_. The rationale is that the algorithm could learn to be robust to the kind of perturbation it will encounter in the _target dataset_2.

Footnote 2: The art of combining multiple labeled source datasets in order to classify a target dataset under a dataset-shift is referred to as _multi-source domain adaptation_ in the literature (see, for example, the review by Sun et al. (2015)).

To achieve this, we constructed three tasks: TCGA-PCAWG-met500, Baron-Segerstolpe-Murano and MWS-TMfacs-TM10x. The tasks are named after the datasets that compose them: the first is the main _source_ dataset, the second the _auxiliary source_ dataset and the third is the _target_ dataset. We compare the performance in the _multi-source_ scenario (where the _main_ and _auxiliary_ source datasets are merged into a training set) to a baseline scenario in which the _auxiliary source_ dataset is not used (_single-source_ scenario). Appendix E.3.2 provides additional details about the construction of those tasks.

**ANrank-lr.** One could ask whether, with the help of the _auxiliary source_ dataset, robust ranking reference genes can be identified in a simpler manner than in optirank, in particular with a selection step decoupled from the fitting process. To answer this question, we constructed an additional logistic regression classifier based on adaptive ranks, ANrank-lr, which selects the ranking reference genes based on a simple ANOVA test. For each gene, the _vulnerability_ to dataset shift is assessed with a two-way ANOVA which determines the effect of label and dataset jointly. The \(s\) most robust genes are selected as the ranking reference (see Appendix C.6 for additional details; an illustrative sketch is also given below). For completeness, we evaluate ANrank-lr both in the _multi-source_ and in the _single-source_ scenario. In the _single-source_ setting, ANrank-lr uses the _auxiliary source_ dataset only for the ANOVA test (and not during fitting nor validation).

**Data preprocessing and hyperparameter selection.** Data preprocessing was done as described in the previous section. Appendix E.3.2 details the procedure to obtain the cross-validation splits: special care was taken to have similar training dataset sizes in the _multi-source_ and _single-source_ scenarios. The hyperparameter grids used for cross-validation are the same as in the previous experiment. Concerning ANrank-lr, the number of ranking reference genes \(s\) and the elastic net regularization coefficients are tuned over the same grid as for optirank.

#### 6.2.1 Results and discussion

There is a clear benefit to merging two source datasets in the training phase (_multi-source_ scenario). Indeed, for nearly all classifiers and tasks, the average balanced accuracy is greatly increased in the _multi-source_ scenario (Table 4) compared to the _single-source_ scenario in which the _auxiliary source_ dataset is not used (see Appendix E.5.8).
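As referenced in the ANrank-lr paragraph above, the ANOVA-based selection step could be sketched as follows (my own illustration with hypothetical column names, not the authors' code; the exact robustness criterion is specified in Appendix C.6). Here each gene is scored by the F-statistic of the dataset factor in a two-way ANOVA, and the \(s\) genes with the smallest statistic are kept as the ranking reference.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def select_reference_genes(expr, labels, datasets, s):
    """expr: (n, d) matrix of logged expression; labels, datasets: length-n arrays.
    Returns the indices of the s genes least affected by the dataset factor."""
    dataset_F = np.empty(expr.shape[1])
    for j in range(expr.shape[1]):
        df = pd.DataFrame({"expr": expr[:, j], "label": labels, "dataset": datasets})
        fit = ols("expr ~ C(label) + C(dataset)", data=df).fit()
        table = sm.stats.anova_lm(fit, typ=2)            # two-way ANOVA
        dataset_F[j] = table.loc["C(dataset)", "F"]      # vulnerability to dataset shift
    return np.argsort(dataset_F)[:s]

# toy usage with a main and an auxiliary source dataset
rng = np.random.default_rng(0)
expr = rng.normal(size=(60, 20))
labels = rng.integers(0, 2, size=60)
datasets = np.repeat(["main", "auxiliary"], 30)
print(select_reference_genes(expr, labels, datasets, s=5))
```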
Accordingly, for both single-cell typing tasks, the leading score is greater in the _multi-source_ scenario than in the _single-source_ scenario. Moreover, the regression classifiers based on ranks (optirank, ANrank-lr, and rank-lr) outperform the simple logistic regression (lr). However, as in the previous section, the performances of optirank and rank-lr seem comparable: in the single-cell tasks, the paired t-tests do not reveal any significant difference between the two classifiers (see Appendix E.5). It is worth noting that despite the simplicity of its method for restricting the reference set, ANrank-lr reaches a level of performance comparable with the other ranked-based algorithms, in particular optirank, and likewise outperforms the simple logistic regression. This is particularly interesting since, by definition, ANrank-lr can produce sparse solutions. Indeed, in Appendix E.5.7, we note that optirank and ANrank-lr find solutions involving a similar number of genes. In accordance with the results of the previous section, the simple logistic regression produces substantially sparser solutions. Runtime comparison.The runtime for competing classifiers was measured in the previous _single-source_ scenario. Table 26 in Appendix E.5.9 attests that the fitting time of both optirank and ANrank-lr are reasonable and in some instances lower than the one of their competitors. ## Conclusion According to the literature, rank normalization confers increased robustness against distribution shifts that occur in RNA-Seq data. This success is linked to the fact that rank normalization is invariant to all perturbing monotone transformations that occur between different datasets and/or samples. However, a potential weakness of using rank normalization is that the rank of genes that might be biologically relevant can be perturbed by fluctuations of irrelevant ones. To counteract this problem, we proposed optirank, an algorithm that learns a ranking relative to an optimal reference genes set while learning a classification or regression model. We showed on a synthetic example, inspired by observations on real data, how rank-normalization can suffer from collective fluctuations of an ensemble of genes that perturb the ranks, and demonstrated the ability of optirank to eliminate those genes from the ranking reference set, thereby allowing it to solve successfully the classification task. We then assessed the performance of optirank on 11 real classification tasks, presenting different challenges in terms of distribution shifts occurring between train and test data. Indeed, we hypothesize that our model is able in some instances to remove from the ranking reference set genes that have the propensity to shift in the test distribution, thereby perturbing the ranks learned on the training data. Firstly, we observed that the advantage of the rank transformation is not systematic. Moreover, contrary to our hypothesis, restricting the reference set, as is done by optirank, does not seem to provide increased robustness compared to ranking relative to the full set of genes. As an additional way to tackle distribution shifts occurring between train and test data, we propose a _multi-source_ learning scheme. In this scheme, we train a classifier on a union of two different datasets in which a dataset shift occurs, hoping to make it more robust and efficient on a third external dataset. 
We show that this scenario is particularly useful in the cell-typing tasks, in particular when used in synergy with rank-based classifiers. We also explored an alternative way of restricting the reference set, with a simple ANOVA test that exploits the multiple sources in the training data. Despite its simplicity, the resulting classifier, which we call ANrank-lr, achieves a level of performance similar to optirank. Finally, it is important to mention that restricting the reference set reduces the number of genes needed to be sequenced, while maintaining the level of robustness and accuracy of the rank normalization. Therefore, in certain medical applications where sparsity is desired, it can be worth considering the classifiers optirank and ANrank-lr. ## Acknowledgements This work was funded under the Swiss Data Science Center collaborative project grant C19-02. \begin{table} \begin{tabular}{|l|c c c c c c c|} \hline & ANrank-lr & SCN & lr & optirank & rank-lr & rf \\ \hline TCGA-PCAWG-met500 & \(\mathbf{81\pm 4\ (1)}\) & \(69\pm 5\ (4)\) & \(\mathbf{80\pm 4\ (2)}\) & \(68\pm 4\ (5)\) & \(71\pm 4\ (3)\) & \(65\pm 4\ (6)\) \\ Baron-Segerstolpe-Murano & \(98.0\pm 0.3\ (3)\) & \(93.2\pm 0.8\ (5)\) & \(97.2\pm 0.4\ (4)\) & \(\mathbf{98.1\pm 0.3\ (2)}\) & \(\mathbf{98.2\pm 0.3\ (1)}\) & \(67\pm 3\ (6)\) \\ MWS-TMfacs-TM10x & \(\mathbf{95.63\pm 0.40\ (3)}\) & \(83\pm 1\ (5)\) & \(92.5\pm 0.9\ (4)\) & \(\mathbf{95.64\pm 0.41\ (2)}\) & \(\mathbf{95.9\pm 0.4\ (1)}\) & \(72\pm 2\ (6)\) \\ \hline \end{tabular} \end{table} Table 4: _Multi-source scenario._ Average balanced accuracies in % (across folds and classes) of competing classifiers on the tasks detailed in section 6.2 in the case in which the first two _source datasets_ are merged in the training phase. The integer in parenthesis denotes the rank of the classifiers in terms of average balanced accuracy (lower is better). Classifiers which did not score significantly worse than the best classifier according to a paired Student’s t-test are highlighted in bold (see App. E.5 for additional details).
2301.12031
Context Matters: A Strategy to Pre-train Language Model for Science Education
This study aims at improving the performance of scoring student responses in science education automatically. BERT-based language models have shown significant superiority over traditional NLP models in various language-related tasks. However, science writing of students, including argumentation and explanation, is domain-specific. In addition, the language used by students is different from the language in journals and Wikipedia, which are training sources of BERT and its existing variants. All these suggest that a domain-specific model pre-trained using science education data may improve model performance. However, the ideal type of data to contextualize pre-trained language model and improve the performance in automatically scoring student written responses remains unclear. Therefore, we employ different data in this study to contextualize both BERT and SciBERT models and compare their performance on automatic scoring of assessment tasks for scientific argumentation. We use three datasets to pre-train the model: 1) journal articles in science education, 2) a large dataset of students' written responses (sample size over 50,000), and 3) a small dataset of students' written responses of scientific argumentation tasks. Our experimental results show that in-domain training corpora constructed from science questions and responses improve language model performance on a wide variety of downstream tasks. Our study confirms the effectiveness of continual pre-training on domain-specific data in the education domain and demonstrates a generalizable strategy for automating science education tasks with high accuracy. We plan to release our data and SciEdBERT models for public use and community engagement.
Zhengliang Liu, Xinyu He, Lei Liu, Tianming Liu, Xiaoming Zhai
2023-01-27T23:50:16Z
http://arxiv.org/abs/2301.12031v1
# Context Matters: A Strategy to Pre-train Language Model for Science Education

###### Abstract

This study aims at improving the performance of scoring student responses in science education automatically. BERT-based language models have shown significant superiority over traditional NLP models in various language-related tasks. However, science writing of students, including argumentation and explanation, is domain-specific. In addition, the language used by students is different from the language in journals and Wikipedia, which are training sources of BERT and its existing variants. All these suggest that a domain-specific model pre-trained using science education data may improve model performance. However, the ideal type of data to contextualize pre-trained language model and improve the performance in automatically scoring student written responses remains unclear. Therefore, we employ different data in this study to contextualize both BERT and SciBERT models and compare their performance on automatic scoring of assessment tasks for scientific argumentation. We use three datasets to pre-train the model: 1) journal articles in science education, 2) a large dataset of students' written responses (sample size over 50,000), and 3) a small dataset of students' written responses of scientific argumentation tasks. Our experimental results show that in-domain training corpora constructed from science questions and responses improve language model performance on a wide variety of downstream tasks. Our study confirms the effectiveness of continual pre-training on domain-specific data in the education domain and demonstrates a generalizable strategy for automating science education tasks with high accuracy. We plan to release our data and SciEdBERT models for public use and community engagement.

## 1 Introduction

Writing is critical in science learning because it is the medium for students to express their thought processes. In classroom settings, educators have engaged students in writing explanations of phenomena, design solutions, arguments, etc. [10][15], with which students develop scientific knowledge and competence. However, it is time-consuming for teachers to review and evaluate natural language writing, thus preventing the timely understanding of students' thought processes and academic progress. Recent developments in machine learning (ML), especially natural language processing (NLP), have proved to be a promising approach to promoting the use of writing in science teaching and learning [17]. For example, various NLP methods have been employed in science assessment practices that involve constructed responses, essays, simulations, or educational games [14]. In this rapidly developing domain, the state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) model [4], a transformer-based machine learning architecture developed by Google, demonstrates superiority over other machine learning methods in scoring student responses to science assessment tasks [1]. Studies have shown that the performance on NLP tasks can be improved by using domain-specific data to contextualize language models [5]. Several BERT-based language models, such as SciBERT [3], AgriBERT [12], BioBERT [8], and ClinicalRadioBERT [11], have demonstrated significant success on domain-specific tasks.

Figure 1: The SciEdBERT framework. A student response instance is classified based on the latent representation of word vectors.
Therefore, it is reasonable to speculate that ML-based scoring of students' scientific writing can be improved if we have a domain-specific language model for science education. In this case, we need to find the proper domain-specific data that are directly relevant to student writing. It is important to note that student responses are preliminary expressions of general science knowledge and lack the rigor of academic journal publications. In addition, their writing is also influenced by the developmental progress of writing skills and the length of the required tasks. These characteristics of student writing are challenges for using NLP tools to score students' writing [9][6]. Therefore, to further improve the application of large pre-trained language models to automatically score students' scientific writing, we use different datasets to train BERT and compare the resulting models' performance on various downstream tasks. In this work, we make the following contributions:

1. We provide a method to improve model performance on downstream tasks by contextualizing BERT with the downstream context in advance.
2. We prove the effectiveness of domain-specific data in improving BERT-based model performance.
3. We will release our language models, which can be further tested and used in other science education tasks.

## 2 Methodology

### Architecture/Background

The BERT (Bidirectional Encoder Representations from Transformers) language model [4] is based on the transformer architecture [13]. It is trained using the masked language modeling (MLM) objective, which requires the model to predict missing words in a sentence given the context. This training process is called pre-training. The pre-training of BERT is unsupervised and only requires unlabeled text data. During pre-training, word embedding vectors are multiplied with three sets of weights (query, key and value) to obtain three matrices \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\), respectively. These matrices are then used to calculate attention scores, which are weights that measure the pairwise importance among input words. For example, in the sentence "I love my cats.", the word "I" should (ideally) be strongly associated with the word "my", since they refer to the same subject.

Figure 2: An example of BERT's attention mechanism.

For each word, the attention scores are then used to weigh intermediate outputs that sum up to the final vector representation of this word. \[Attention(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V \tag{1}\] where \(d_{k}\) refers to the dimension of the \(\mathbf{K}\) matrix. BERT takes a sequence of words as the input, and outputs a latent representation of the input tokens in the form of word vectors. This latent representation, or embedding, captures the semantics, positional information, and contextual information of the input sentence. It can be further used for downstream NLP tasks. To use BERT for practical natural language understanding applications, it is necessary to fine-tune the model on the target task. BERT can be fine-tuned on a wide variety of tasks, such as topic classification and question answering, by adding task-specific layers on top of this pre-trained transformer. Fine-tuning is a supervised learning process. During this process, BERT is trained on a labeled dataset and the parameters of the model are updated during training to minimize the task-specific loss function.
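As a concrete illustration of Eq. (1), here is a minimal single-head attention computation in NumPy (my own sketch, not the authors' code); inside BERT this operation is applied with multiple heads in every transformer layer.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention as in Eq. (1): softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V: (seq_len, d_k) matrices obtained by multiplying the word embeddings
    with the query/key/value weight matrices."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise importance of words
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

# toy usage: 4 tokens, d_k = 8
rng = np.random.default_rng(0)
Q, K, V = [rng.normal(size=(4, 8)) for _ in range(3)]
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```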
### Domain-specific training

BERT is a fundamental building block for language models. In practice, it has many variants that are tailored to the purposes and peculiarities of specific domains [3, 8, 2, 12, 11]. For example, BioBERT [8] is a large language model trained on biomedical publications (PubMed) and delivers superior performance on biomedical and chemical named entity recognition (NER), since it has a large and contextualized vocabulary of biomedical and chemical terms. Substantial evidence indicates that language models perform better when the target and source domains are aligned [8, 5]. In other words, continually pre-training BERT-based models with in-domain corpora could significantly improve their performance on downstream tasks [5]. In addition, there is a strong correlation between model performance and the extent of in-domain training. Specifically, training with more relevant in-domain text and training from scratch can further improve pre-trained language models [5]. In this work, we incorporate prior experience in NLP, specifically that of domain-specific training, to train our SciEdBERT models designed specifically for science education tasks.

### Training Design

We follow a pyramid-shaped training scheme to maximize our models' utilization of domain-relevant data. In Figure 3, we can see that SciBERT [3] is a science-oriented version of BERT developed through in-domain pre-training. As shown in Table 2, some of the models we developed in this study are further extensions of SciBERT through continual pre-training on various science education data. The primary benefit of following the pyramid training scheme is to avoid diluting the relatively scarce in-domain data with the vastly more abundant general text data. If instead a language model is trained on a combined corpus of general text and domain-specific data, the effects of in-domain training will be insignificant.

Figure 3: The pyramid training scheme.

## 3 Experiment

### Dataset

We employ several datasets to train the language models, including the Academic Journal Dataset for Science Education (SciEdJ), a large dataset of students' written responses (SR1), and a small dataset of students' responses to four argumentation tasks (SR2). Then, we use seven tasks from the large dataset (7T) and the four argumentation tasks (4T) as two datasets to fine-tune the trained language model. Below we briefly introduce these datasets.

#### 3.1.1 Training Dataset

We use three datasets to train the language model. SciEdJ is a collection of 2,000 journal articles from journals in science education. We select ten journals in science education with the highest impact factors according to Web of Science, including _Journal of Research in Science Teaching, International Journal of Science Education, Science Education_, etc. For each journal, we collect the most recent 200 articles. The SR1 dataset is a collection of over 50,000 student short responses to 49 constructed response assessment tasks in science for middle school students. Students are anonymous to researchers and not traceable. The SR2 dataset is a collection of 2,940 student responses from a set of argumentation assessment tasks [7].

#### 3.1.3 Fine-tuning Dataset

We employ two datasets to evaluate the model performance. The 7T dataset includes seven tasks selected from the SR1 dataset, including short constructed student responses and human expert-assigned labels.
Overall, the 7T dataset includes 5,874 labeled student responses (915 for task H4-2, 915 for task H4-3, 834 for task H5-2, 883 for task J2-2, 743 for task J6-2, 739 for task J6-3, and 845 for task R1-2). The 4T dataset includes 2,940 student responses and their labels from the SR2 dataset (e.g., 770 for item G4, 642 for item G6, 765 for item S2, and 763 for item S3). All the samples in the two datasets are written responses from middle school students to explain science phenomena. Trained experts are employed to assign scores to student responses according to scoring rubrics developed by science education researchers, and the inter-rater reliability is at or above a satisfactory level (see [16][15] for details).

### Baselines

Our study aims to examine how the context of training data matters to pre-trained models' (e.g., BERT) performance and to explore strategies to further improve model performance. To achieve this goal, we employ various datasets to train and fine-tune the models. First, we use the original BERT as the pre-trained model and 7T as the downstream task. This is the baseline model. We then train a BERT model on SR1 and use 7T as the downstream task. Given that 7T is grounded in the context of SR1, a comparison between the two fine-tuned models (based on BERT vs. SR1-BERT) can address our goals. Second, we repeat this training and fine-tuning process using BERT with the SR2 and 4T datasets. To examine the generalization of the findings, we also employ 4T as the downstream task in other pre-trained models, including SciBERT [3], a BERT model trained on SciEdJ (i.e., SciEdJ-BERT), a SciBERT model trained on SciEdJ (i.e., SciEdJ-SciBERT), a BERT model trained on SR2 (i.e., SR2-BERT), and a SciBERT model trained on SR2 (i.e., SR2-SciBERT), with increasingly closer contextualization between the pre-trained models and the downstream tasks.

### Results

As Table 1 presents, the average accuracy of SR1-BERT (0.912) is slightly higher than the accuracy of BERT (0.904). Among the seven tasks, SR1-BERT achieves higher accuracy than BERT on four tasks and is on par with BERT on the remaining three tasks. This indicates that the accuracy of automatic scoring can be improved to a certain extent by training the model with in-domain training data. This indication is clearer in our second experiment with the 4T dataset. As Table 2 presents, overall, SR2-SciBERT has the highest average accuracy (0.866), which indicates that training the model with the contexts of the downstream tasks can improve the accuracy of automatic scoring. The model with the second highest accuracy (0.852) is SR2-BERT. SR2-BERT has the same performance as SR2-SciBERT on S3 and even higher accuracy (0.821) than SR2-SciBERT (0.815) on G4. On S2, SR2-BERT's performance (0.915) is second only to SR2-SciBERT's. Only on G6 does SR2-BERT have a lower accuracy (0.719) than the comparison models. Therefore, although SR2-BERT's average accuracy is only slightly higher than BERT's, the per-task results indicate that SR2-BERT performs better than BERT. This is also in line with our previous findings that context matters in improving model performance. SciEdJ-SciBERT and SciEdJ-BERT have the lowest average accuracy scores (0.842) among the models. Only on G4 do these two models perform better than BERT. This indicates that the context of science education publications cannot help BERT learn the language of student responses better. On the contrary, such context may introduce confusion into the machine learning process. In summary, SR2-SciBERT and SR2-BERT achieve the best results among the models, which indicates that contextualizing the language models with the same language as the downstream tasks can improve the model's performance.
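For illustration, a fine-tuning run of the kind used to produce these baselines could be sketched as follows (my own sketch with the HuggingFace Trainer, not the authors' pipeline; the checkpoint name, file names, column names and label count are placeholders).

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"   # or a continually pre-trained checkpoint such as SR2-BERT
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# assumed CSV files with a "response" text column and an integer "label" score column
ds = load_dataset("csv", data_files={"train": "task_train.csv", "test": "task_test.csv"})
ds = ds.map(lambda ex: tokenizer(ex["response"], truncation=True,
                                 padding="max_length", max_length=128), batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())
```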
In summary, SR2-SciBERT and SR2-BERT achieve the best results among the models, which indicates that contextualizing the language models with the same language of the downstream tasks can improve the model's performance. \begin{table} \begin{tabular}{c|c c} \hline \hline **Item** & \multicolumn{2}{c}{**Accuracy**} \\ & BERT & SR1-BERT \\ \hline H4-2 & 0.913 & 0.929 \\ H4-3 & 0.831 & 0.831 \\ H5-2 & 0.958 & 0.970 \\ J2-2 & 0.920 & 0.926 \\ J6-2 & 0.959 & 0.973 \\ J6-3 & 0.845 & 0.845 \\ R1-2 & 0.864 & 0.864 \\ Average & 0.904 & 0.912 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparing different model performance on 7T task \begin{table} \begin{tabular}{c|c c c c c} \hline \hline **Item** & \multicolumn{5}{c|}{**Accuracy**} \\ & BERT & SciEdJ-BERT & SciEdJ-SciBERT & SR2-BERT & SR2-SciBERT \\ \hline G4 & 0.792 & 0.804 & 0.815 & 0.821 & 0.815 \\ G6 & 0.766 & 0.727 & 0.742 & 0.719 & 0.766 \\ S2 & 0.895 & 0.882 & 0.889 & 0.915 & 0.928 \\ S3 & 0.934 & 0.954 & 0.921 & 0.954 & 0.954 \\ Average & 0.847 & 0.842 & 0.842 & 0.852 & 0.866 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing model performance on the 4T tasks ## 4 Conclusions This study investigates training language models with different contextual data and compares their performance on eleven constructed response tasks. The results indicate that using the in-domain data directly related to downstream tasks to contextualize the language model can improve a pre-trained language model's performance. In automatic scoring of students' constructed responses, this means continual pre-training the language model on student responses and then fine-tuning the model with the scoring tasks. In science education, using SciEdBERT can further improve model performance as SciEdBERT is well-versed in scientific vocabulary. Our study confirms the effectiveness of using domain-specific data to pre-train models to improve their performance on downstream tasks and validate a strategy to adapt language models to science education.
2303.13556
ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning
Confidence-based pseudo-labeling is among the dominant approaches in semi-supervised learning (SSL). It relies on including high-confidence predictions made on unlabeled data as additional targets to train the model. We propose ProtoCon, a novel SSL method aimed at the less-explored label-scarce SSL where such methods usually underperform. ProtoCon refines the pseudo-labels by leveraging their nearest neighbours' information. The neighbours are identified as the training proceeds using an online clustering approach operating in an embedding space trained via a prototypical loss to encourage well-formed clusters. The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle to refine labels in the following cycle without the need to store image embeddings. Hence, it can seamlessly scale to larger datasets at a low cost. Finally, ProtoCon addresses the poor training signal in the initial phase of training (due to fewer confident predictions) by introducing an auxiliary self-supervised loss. It delivers significant gains and faster convergence over state-of-the-art across 5 datasets, including CIFARs, ImageNet and DomainNet.
Islam Nassar, Munawar Hayat, Ehsan Abbasnejad, Hamid Rezatofighi, Gholamreza Haffari
2023-03-22T23:51:54Z
http://arxiv.org/abs/2303.13556v1
# ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning

###### Abstract

Confidence-based pseudo-labeling is among the dominant approaches in semi-supervised learning (SSL). It relies on including high-confidence predictions made on unlabeled data as additional targets to train the model. We propose ProtoCon, a novel SSL method aimed at the less-explored label-scarce SSL where such methods usually underperform. ProtoCon refines the pseudo-labels by leveraging their nearest neighbours' information. The neighbours are identified as the training proceeds using an online clustering approach operating in an embedding space trained via a prototypical loss to encourage well-formed clusters. The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle to refine labels in the following cycle without the need to store image embeddings. Hence, it can seamlessly scale to larger datasets at a low cost. Finally, ProtoCon addresses the poor training signal in the initial phase of training (due to fewer confident predictions) by introducing an auxiliary self-supervised loss. It delivers significant gains and faster convergence over state-of-the-art across 5 datasets, including CIFARs, ImageNet and DomainNet.

## 1 Introduction

Semi-supervised Learning (SSL) [10, 40] leverages unlabeled data to guide learning from a small amount of labeled data, thereby providing a promising alternative to costly human annotations. In recent years, SSL frontiers have seen substantial advances through confidence-based pseudo-labeling [21, 22, 38, 42, 43]. In these methods, a model iteratively generates pseudo-labels for unlabeled samples which are then used as targets to train the model. To overcome confirmation bias [1, 27], the model being biased by training on its own wrong predictions, these methods only retain samples with high confidence predictions for pseudo-labeling, thus ensuring that only reliable samples are used to train the model. While confidence works well in moderately labeled data regimes, it usually struggles in label-scarce settings1. This is primarily because the model becomes over-confident about the more distinguishable classes [17, 28] faster than others, leading to a collapse.

Footnote 1: We denote settings with fewer than 10 images per class as “label-scarce.”

In this work, we propose ProtoCon, a novel method which addresses such a limitation in label-scarce SSL. Its key idea is to complement confidence with a label refinement strategy to encourage more accurate pseudo-labels. To that end, we perform the refinement by adopting a co-training [5] framework: for each image, we obtain two different labels and combine them to obtain our final pseudo-label. The first is the model's softmax prediction, whereas the second is an aggregate pseudo-label describing the image's neighbourhood based on the pseudo-labels of other images in its vicinity. However, a key requirement for the success of co-training is to ensure that the two labels are obtained using sufficiently different image representations [40] to allow the model to learn based on their disagreements. As such, we employ a non-linear projection to map our encoder's representation into a different embedding space.

Figure 1: ProtoCon refines a pseudo-label of a given sample by knowledge of its neighbours in a prototypical embedding space. Neighbours are identified in an online manner using constrained K-means clustering. Best viewed zoomed in.
We train this projector jointly with the model with a prototypical consistency objective to ensure it learns a different, yet relevant, mapping for our images. Then we define the neighbourhood pseudo-label based on the vicinity in that embedding space. In essence, we minimise a sample's bias by smoothing its pseudo-label in class space via knowledge of its neighbours in the prototypical space. Additionally, we design our method to be fully online, enabling us to scale to large datasets at a low cost. We identify neighbours in the embedding space on-the-fly as the training proceeds by leveraging online K-means clustering. This alleviates the need to store expensive image embeddings [22], or to utilise offline nearest neighbour retrieval [23, 48]. However, applying naive K-means risks collapsing to only a few imbalanced clusters, making it less useful for our purpose. Hence, we employ a constrained objective [6] lower bounding each cluster size, thereby ensuring that each sample has enough neighbours in its cluster. We show that the online nature of our method allows it to leverage the entire prediction history in one epoch to refine labels in the subsequent epoch at a fraction of the cost required by other methods and with better performance. ProtoCon's final ingredient addresses another limitation of confidence-based methods: since the model retains only highly confident samples for pseudo-labeling, the initial phase of the training usually suffers from a weak training signal due to fewer confident predictions. In effect, this leads to learning only from the very few labeled samples, which destabilises the training, potentially due to overfitting [25]. To boost the initial training signal, we adopt a self-supervised instance-consistency [9, 15] loss applied on samples that fall below the threshold. Our choice of loss is more consistent with the classification task, as opposed to contrastive instance discrimination losses [11, 16] which treat each image as its own class. This helps our method to converge faster without loss of accuracy. We demonstrate ProtoCon's superior performance against comparable state-of-the-art methods on 5 datasets including CIFAR, ImageNet and DomainNet. Notably, ProtoCon achieves 2.2% and 1% improvements on the SSL ImageNet protocol with 0.2% and 1% of the labeled data, respectively. Additionally, we show that our method exhibits faster convergence and more stable initial training compared to baselines, thanks to our additional self-supervised loss. In summary, our contributions are:

* We propose a memory-efficient method addressing confirmation bias in label-scarce SSL via a novel label refinement strategy based on co-training.
* We improve training dynamics and convergence of confidence-based methods by adding self-supervised losses to the SSL objective.
* We show state-of-the-art results on 5 SSL benchmarks.

## 2 Background

We begin by reviewing existing SSL approaches with a special focus on relevant methods in the low-label regime.

**Confidence-based pseudo-labeling** is an integral component in most recent SSL methods [20, 22, 27, 38, 42]. However, recent research shows that using a fixed threshold underperforms in low-data settings because the model collapses to the few easy-to-learn classes early in the training.
Some researchers combat this effect by using class-[47] or instance-based [44] adaptive thresholds, or by aligning [3] or debiasing [42] the pseudo-label distribution by keeping a running average of pseudo-labels to avoid the inherent imbalance in pseudo-labels. Another direction focuses on pseudo-label refinement, whereby the classifier's predictions are adjusted by training another projection head on an auxiliary task such as weak-supervision via language semantics [27], instance-similarity matching [48], or graph-based contrastive learning [22]. Our method follows the refinement approach, where we employ online constrained clustering to leverage nearest neighbours information for refinement. Different from previous methods, our method is fully online and hence allows using the entire prediction history in one training epoch to refine pseudo-labels in the subsequent epoch with minimal memory requirements. **Consistency Regularization** combined with pseudo-labeling underpins many recent state-of-the-art SSL methods [4, 20, 22, 24, 35, 38, 43]; it exploits the smoothness assumption [40] where the model is expected to produce similar pseudo-labels for minor input perturbations. The seminal FixMatch [38] and following work [22, 27, 42] leverage this idea by obtaining pseudo-labels through a weak form of augmentation and applying the loss against the model's prediction for a strong augmentation. Our method utilises a similar approach, but different from previous work, we additionally apply an instance-consistency loss in our projection embedding space. **Semi-supervision via self-supervision** is gaining recent popularity due to the incredible success of self-supervised learning for model pretraining. Two common approaches are: 1) performing self-supervised pretraining followed by supervised fine-tuning on the few labeled samples [9, 11, 12, 15, 26], and 2) including a self-supervised loss to the semi-supervised objective to enhance training [22, 25, 41, 46, 48]. However, the choice of the task is crucial: tasks such as instance discrimination [11, 16], which treats each image as its own class, can hurt semi-supervised image classification as it partially conflicts with it. Instead, we use an instance-consistency loss akin to that of [9] to boost the initial training signal by leveraging samples which are not retained for pseudo-labeling in the early phase of the training. ## 3 ProtoCon **Preliminaries.** We consider a semi-supervised image classification problem, where we train a model using \(M\) labeled samples and \(N\) unlabeled samples, where \(N>>M\). We use mini-batches of labeled instances, \(\mathcal{X}=\{(\mathbf{x}_{j},\mathbf{y}_{j})\}_{j=1}^{B}\) and unlabeled instances, \(\mathcal{U}=\{\mathbf{u}_{i}\}_{i=1}^{\mu\cdot B}\), where the scalar \(\mu\) denotes the ratio between the number of unlabeled and labeled examples in a given batch, and \(\mathbf{y}\) is the one-hot vector of the class label \(c\in\{1,\dots,C\}\). We employ an encoder network \(f\) to get latent representations \(f(.)\). We attach a softmax classifier \(g(\cdot)\), which produces a distribution over classes \(\mathbf{p}=g\circ f\). Moreover, we attach a projector \(h(\cdot)\), an MLP followed by an \(\ell_{2}\) norm layer, to get a normalised embedding \(\mathbf{q}\in\mathbb{R}^{d}=h\circ f\). Following [38], we apply weak augmentations \(\mathcal{A}_{w}(\cdot)\) on all images and an additional strong augmentation [13]\(\mathcal{A}_{s}(\cdot)\) only on unlabeled ones. 
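To make the notation concrete, the following PyTorch sketch (my own, not the authors' implementation; a ResNet-18 backbone is an arbitrary stand-in for the encoder) wires together the encoder \(f\), the softmax classifier \(g\) producing \(\mathbf{p}\), and the \(\ell_{2}\)-normalised projector \(h\) producing \(\mathbf{q}\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18   # stand-in backbone

class ProtoConHeads(nn.Module):
    def __init__(self, num_classes=10, proj_dim=64):
        super().__init__()
        backbone = resnet18()
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                          # keep only the feature extractor
        self.f = backbone                                    # encoder f(.)
        self.g = nn.Linear(feat_dim, num_classes)            # classifier g(.)
        self.h = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                               nn.Linear(feat_dim, proj_dim))  # projector h(.)

    def forward(self, x):
        feats = self.f(x)
        p = F.softmax(self.g(feats), dim=-1)                 # class distribution p
        q = F.normalize(self.h(feats), dim=-1)               # normalised embedding q
        return p, q

model = ProtoConHeads()
p, q = model(torch.randn(4, 3, 32, 32))
print(p.shape, q.shape)   # torch.Size([4, 10]) torch.Size([4, 64])
```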
**Motivation.** Our aim is to refine pseudo-labels before using them to train our model in order to minimise confirmation bias in label-scarce SSL. We achieve this via a co-training approach (see Fig. 2). For each image, we obtain two pseudo-labels and combine them to obtain our final pseudo-label \(\hat{\mathbf{p}}^{w}\). The first is the classifier softmax prediction \(\mathbf{p}^{w}\) based on a weakly augmented image, whereas the second is an aggregate pseudo-label \(\mathbf{z}^{a}\) describing the sample's neighbourhood. To ensure the two labels are based on sufficiently different representations, we define an image neighbourhood2 via online clustering in an embedding space obtained via projector \(h\) and trained for prototypical consistency instead of class prediction. Projector \(h\) and classifier \(g\) are jointly trained with the encoder \(f\), while interacting over pseudo-labels. The classifier is trained with pseudo-labels which are refined based on their nearest neighbours in the embedding space, whereas the projector \(h\) is trained using prototypes obtained based on the refined pseudo-labels to impose structure on the embedding space. Footnote 2: We use “neighbourhood” and “cluster” interchangeably. **Prototypical Space.** Here, we discuss our procedure to learn our embedding space defined by \(h\). Inspired by prototypical learning [37], we would like to encourage well-clustered image projections in our space by attracting samples to their class prototypes and away from others. Hence, we employ a contrastive objective using the class prototypes as targets rather than the class labels. We calculate class prototypes at the end of a given epoch by knowledge of the "reliable" samples in the previous epoch. Specifically, by employing a memory bank of \(\mathcal{O}(2N)\), we keep track of samples hard pseudo-labels \(\{\hat{y}_{i}=\arg\max(\hat{\mathbf{p}}_{i}^{w})\forall\mathbf{u}_{i}\in\mathcal{U}\}\) in a given epoch; as well as a reliability indicator for each sample \(\eta_{i}=\mathds{1}(\max(\hat{\mathbf{p}}_{i}^{w})\geq\tau)\) denoting if its max prediction exceeds the confidence threshold \(\tau\). Subsequently, we update the prototypes \(\mathcal{P}\in\mathbb{R}^{C\times d}\) as the average projections (accumulated over the epoch) of labeled images and reliable unlabeled images. Formally, let \(\mathcal{I}_{c}^{x}=\{i|\forall\mathbf{x}_{i}\in\mathcal{X},y_{i}=c\}\) be the indices of labelled instances with true class \(c\), and \(\mathcal{I}_{c}^{w}=\{i|\forall\mathbf{u}_{i}\in\mathcal{U},\eta_{i}=1,\hat{y}_{i} =c\}\) be the indices of the reliable unlabelled samples with hard pseudo-label \(c\). The normalised prototype for class \(c\) can then be obtained as per: \[\bar{\mathcal{P}}_{c}=\frac{\sum_{i\in\mathcal{I}_{c}^{x}\cup\mathcal{I}_{w}^{ w}}\mathbf{q}_{i}}{|\mathcal{I}_{c}^{x}|+|\mathcal{I}_{c}^{w}|},\quad\mathcal{P}_{c}= \frac{\bar{\mathcal{P}}_{c}}{||\bar{\mathcal{P}}_{c}||_{2}} \tag{1}\] Figure 2: **Method overview.** A soft pseudo-label \(p^{w}\) is first obtained based on the weak view. Then it is refined using the sample’s cluster pseudo-label \(z^{a}\) before using it as target in \(\mathcal{L}_{u}\). Clustering assignments \(a\) are calculated online using the projections of the weak samples \(q^{w}\) in the embedding space \(h\) which is trained via a prototypical loss \(\mathcal{L}_{p}\). Prototype targets are updated once after each epoch by averaging the accumulated projections of reliable samples for each class throughout the epoch. 
Subsequently, in the following epoch, we minimize the following contrastive prototypical consistency loss on unlabeled samples: \[\mathcal{L}_{p}=-\frac{1}{\mu B}\sum_{i=1}^{\mu B}\log\frac{\exp(\mathbf{q}_{i}^{s}\cdot\mathcal{P}_{\hat{y}_{i}}/T)}{\sum_{c=1}^{C}\exp(\mathbf{q}_{i}^{s}\cdot\mathcal{P}_{c}/T)}, \tag{2}\] where \(T\) is a temperature parameter. Note that the loss is applied against the projection of the strong augmentations to achieve consistency regularisation as in [38].

**Online Constrained K-means.** Here, the goal is to cluster instances in the prototypical space as a training epoch proceeds, so the cluster assignments (capturing the neighbourhood of each sample) are used to refine their pseudo-labels in the following epoch. We employ a mini-batch version of K-means [36]. To avoid collapsing to one (or a few) imbalanced clusters, we ensure that each cluster has sufficient samples by enforcing a constraint on the lowest acceptable cluster size. Given our \(N\) unlabeled projections, we cluster them into \(K\) clusters defined by centroids \(\mathcal{Q}=[\mathbf{c}_{1},\cdots,\mathbf{c}_{K}]\in\mathbb{R}^{d\times K}\). We use the constrained K-means objective proposed by [6]: \[\min_{\mathcal{Q},\mu\in\Delta}\sum_{i=1,k=1}^{i=N,k=K}\mu_{i,k}\|\mathbf{q}_{i}-\mathbf{c}_{k}\|_{2}^{2}\quad s.t.\quad\forall k\quad\sum_{i=1}^{N}\mu_{i,k}\geq\gamma \tag{3}\] where \(\gamma\) is the lower-bound of cluster size, \(\mu_{i,k}\) is the assignment of the \(i\)-th unlabeled sample to the \(k\)-th cluster, and \(\Delta=\{\mu|\forall i,\,\sum_{k}\mu_{i,k}=1,\forall i,k,\mu_{i,k}\in[0,1]\}\) is the domain of \(\mu\). Subsequently, to solve Eqn. 3 in an online mini-batch manner, we adopt the alternate solver proposed in [32]. For a fixed \(\mathcal{Q}\), the problem for updating \(\mu\) can be simplified as an assignment problem. By introducing dual variables \(\rho_{k}\) for each constraint \(\sum_{i}\mu_{i,k}\geq\gamma\), the assignment can be obtained by solving the problem: \[\max_{\mu_{i}\in\Delta}\sum_{k}s_{i,k}\mu_{i,k}+\sum_{k}\rho_{k}^{t-1}\mu_{i,k} \tag{4}\] where \(s_{i,k}=\mathbf{q}_{i}^{\top}\mathbf{c}_{k}\) is the similarity between the projection of unlabeled sample \(\mathbf{u}_{i}\) and the \(k\)-th cluster centroid, and \(t\) is the mini-batch iteration counter. Eqn. 4 can then be solved with the closed-form solution: \[\mu_{i,k}=\left\{\begin{array}{lr}1&k=\arg\max_{k}s_{i,k}+\rho_{k}^{t-1}\\ 0&o.w.\end{array}\right. \tag{5}\] After assignment, dual variables are updated as3:

Footnote 3: Refer to [32] and supplements for proofs of optimality and more details.

\[\rho_{k}^{t}=\max\{0,\rho_{k}^{t-1}-\lambda\frac{1}{B}\sum_{i=1}^{B}(\mu_{i,k}^{t}-\frac{\gamma}{N})\} \tag{6}\] where \(\lambda\) is the dual learning rate. Finally, we update the cluster centroids after each mini-batch4 as:

Footnote 4: See supplements for a discussion about updating the centers every mini-batch as opposed to every epoch.

\[\bar{\mathbf{c}_{k}}^{t}=\frac{\sum_{i}^{m}\mu_{i,k}^{t}\mathbf{q}_{i}^{t}}{\sum_{i}^{m}\mu_{i,k}^{t}},\quad\mathbf{c}_{k}^{t}=\frac{\bar{\mathbf{c}_{k}}^{t}}{||\bar{\mathbf{c}_{k}}^{t}||_{2}} \tag{7}\] where \(m\) denotes the total number of received instances until the \(t\)-th mini-batch.
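A minimal sketch of one mini-batch step of this online constrained K-means solver (Eqs. 4-7) is given below; the function name, the outer loop, and the toy sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def constrained_assign_step(q_batch, centroids, rho, gamma, N, lam):
    """One mini-batch step of Eqs. (4)-(6): greedy assignment biased by the dual
    variables rho, followed by a dual update that pushes under-filled clusters
    (fewer than gamma samples expected over the epoch) to attract more points."""
    s = q_batch @ centroids.t()                      # similarities s_{i,k} (B, K)
    assign = (s + rho.unsqueeze(0)).argmax(dim=1)    # Eq. (5), hard assignment
    mu = F.one_hot(assign, centroids.size(0)).float()
    rho = torch.clamp(rho - lam * (mu.mean(dim=0) - gamma / N), min=0.0)  # Eq. (6)
    return assign, mu, rho

# Toy usage with hypothetical sizes: N=512 samples, K=8 clusters, d=64, B=32.
d, K, N, B = 64, 8, 512, 32
centroids = F.normalize(torch.randn(K, d), dim=-1)
rho = torch.zeros(K)
running_sum, running_cnt = torch.zeros(K, d), torch.zeros(K)
for _ in range(N // B):
    q = F.normalize(torch.randn(B, d), dim=-1)       # projections of a mini-batch
    assign, mu, rho = constrained_assign_step(q, centroids, rho,
                                              gamma=N / K, N=N, lam=20.0)
    # Centroid update (Eq. 7) from running sums of assigned projections.
    running_sum += mu.t() @ q
    running_cnt += mu.sum(dim=0)
    centroids = F.normalize(running_sum / running_cnt.clamp(min=1).unsqueeze(1), dim=-1)
```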
Accordingly, we maintain another memory bank (\(\mathcal{O}(2N)\)) to store two values for each unlabeled instance: its cluster assignment in the current epoch \(a(i)=\{k|\mu_{i,k}=1\}\) and the similarity score \(s_{i,a(i)}\) (_i.e_. the distance to its cluster centroid).

**Cluster Pseudo-labels** are computed at the end of each epoch by querying the memory banks. The purpose is to obtain a distribution over classes \(C\) for each of our clusters based on its members. For a given cluster \(k\), we obtain its label \(\mathbf{z}^{k}=[z_{1}^{k},\cdots,z_{C}^{k}]\) as the average of the pseudo-labels of its cluster members weighted by their similarity to its centroid. Concretely, let \(\mathcal{I}_{c}^{k}=\{i|\forall\mathbf{u}_{i}\in\mathcal{U},a(i)=k,\hat{y}_{i}=c\}\) be the indices of unlabeled samples which belong to cluster \(k\) and have a hard pseudo-label \(c\). The probability of cluster \(k\)'s members belonging to class \(c\) is given as: \[z_{c}^{k}=\frac{\sum_{i\in\mathcal{I}_{c}^{k}}s_{i,a(i)}}{\sum_{b=1}^{C}\sum_{j\in\mathcal{I}_{b}^{k}}s_{j,a(j)}} \tag{8}\]

**Refining Pseudo-labels.** At any given epoch, we now have two pseudo-labels for an image \(\mathbf{u}_{i}\): the unrefined pseudo-label \(\mathbf{p}_{i}^{w}\) as well as a cluster pseudo-label \(\mathbf{z}^{a(i)}\) summarising its prototypical neighbourhood in the previous epoch. Accordingly, we apply our refinement procedure as follows: first, as recommended by [3, 22], we perform distribution alignment (\(DA(\cdot)\)) to encourage the marginal distribution of pseudo-labels to be close to the marginal of ground-truth labels5, then we refine the aligned pseudo-label as per:

Footnote 5: \(DA(\mathbf{p}^{w})=\mathbf{p}^{w}/\bar{\mathbf{p}}^{w}\), where \(\bar{\mathbf{p}}^{w}\) is a running average of \(\mathbf{p}^{w}\) during training.

\[\hat{\mathbf{p}}_{i}^{w}=\alpha\cdot DA(\mathbf{p}_{i}^{w})+(1-\alpha)\cdot\mathbf{z}^{a(i)} \tag{9}\] Here, the second term acts as a regulariser to encourage \(\hat{\mathbf{p}}^{w}\) to be similar to its cluster members' and \(\alpha\) is a trade-off scalar parameter. Importantly, the refinement here leverages last-epoch information from the entire training set. This is in contrast to previous work [22, 48] which only stores a limited history of soft pseudo-labels for refinement, due to the higher memory requirement (\(\mathcal{O}(N\times C)\)) of doing so.

**Classification Loss.** With the refined pseudo-label, we apply the unlabeled loss against the model prediction for the strong augmentation as per: \[\mathcal{L}_{u}=\frac{1}{\mu B}\sum_{i=1}^{\mu B}\eta_{i}\cdot\mathrm{CE}(\hat{\mathbf{p}}_{i}^{w},\mathbf{p}_{i}^{s}), \tag{10}\] where \(\mathrm{CE}\) denotes cross-entropy. However, unlike [4, 38], we do not use hard pseudo-labels or sharpening, but instead use the soft pseudo-label directly. Also, we apply a supervised classification loss over the labeled data as per: \[\mathcal{L}_{x}=\frac{1}{B}\sum_{i=1}^{B}\mathrm{CE}(\mathbf{y}_{i},\mathbf{p}_{i}^{x}). \tag{11}\]
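A minimal sketch of the cluster pseudo-label computation and the refinement step of Eqs. (8) and (9) follows; the function names and toy sizes are our own illustrative assumptions.

```python
import torch

def cluster_pseudo_labels(assign, scores, hard_pl, K, C):
    """Sketch of Eq. (8): for each cluster, a distribution over classes given by
    the similarity-weighted votes of its members' hard pseudo-labels."""
    z = torch.zeros(K, C)
    # Accumulate s_{i,a(i)} into the (cluster, class) cell of each sample.
    z.index_put_((assign, hard_pl), scores, accumulate=True)
    return z / z.sum(dim=1, keepdim=True).clamp(min=1e-12)

def refine(p_w, z, assign, alpha=0.8):
    """Sketch of Eq. (9): convex combination of the (aligned) classifier
    pseudo-label and the cluster pseudo-label of the sample's own cluster."""
    return alpha * p_w + (1.0 - alpha) * z[assign]

# Toy usage: 100 samples, K=5 clusters, C=10 classes.
assign = torch.randint(0, 5, (100,))        # a(i), from the memory bank
scores = torch.rand(100)                    # s_{i,a(i)}, from the memory bank
hard_pl = torch.randint(0, 10, (100,))      # hard pseudo-labels \hat{y}_i
z = cluster_pseudo_labels(assign, scores, hard_pl, K=5, C=10)
p_w = torch.softmax(torch.randn(100, 10), dim=-1)   # stands in for DA(p^w)
p_refined = refine(p_w, z, assign)          # (100, 10) refined pseudo-labels
```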
**Self-supervised Loss.** Since we use confidence as a measure of reliability (see Eqn. 10), early epochs of training suffer from limited supervisory signal when the model is not yet confident about unlabeled samples, leading to slow convergence and unstable training. Our final ingredient addresses this by introducing a consistency loss in the prototypical space on samples which fall below the confidence threshold \(\tau\). We draw inspiration from instance-consistency self-supervised methods such as BYOL [15] and DINO [9]. In contrast to contrastive instance discrimination [11, 16], the former imposes consistency between two (or more) views of an image without using negative samples. We therefore found it to be more aligned with classification tasks than the latter. Formally, we treat the projection \(q\) as soft class scores over \(d\) dimensions, and obtain a distribution over these classes via a sharpened softmax (\(SM(\cdot)\)). We then enforce consistency between the weak and strong views as per: \[\mathcal{L}_{c}=\frac{1}{\mu B}\sum_{i=1}^{\mu B}(1-\eta_{i})\cdot\mathrm{CE}(SM(\mathbf{q}_{i}^{w}/0.5T),SM(\mathbf{q}_{i}^{s}/T)) \tag{12}\] Note that, as in DINO [9], we sharpen the target distribution more than the source's to encourage entropy minimization [14]. Unlike DINO, we do not use a separate EMA model to produce the target; we simply use the output of the model for the weak augmentation. Note that this does not lead to representation collapse [15] because the network is also trained with additional semi-supervised losses.

**Final Objective.** We train our model using a linear combination of all four losses \(\mathcal{L}=\mathcal{L}_{x}+\lambda_{u}\mathcal{L}_{u}+\lambda_{p}\mathcal{L}_{p}+\lambda_{c}\mathcal{L}_{c}\). Empirically, we find that fixing all the loss coefficients \(\lambda\) to 1 works well across different datasets. Algorithm 1 describes one epoch of ProtoCon training.

### Design Considerations

**Number of Clusters** is a crucial parameter in our approach. In essence, we refine a sample prediction obtained by the classifier by aggregating information from its \(n\) nearest neighbours. However, naively doing nearest-neighbour retrieval has two limitations: 1) it requires storing image features throughout an epoch, which is memory expensive; and 2) it requires a separate offline nearest-neighbour retrieval step. Instead, we leverage online clustering to identify nearest neighbours on-the-fly. To avoid tuning \(K\) for each dataset, we tuned \(n\) once instead; then \(K\) can be simply calculated as \(K=N/n\). Additionally, we fix \(\gamma=0.9n\) to ensure that each cluster contains sufficient samples to guarantee the quality of the cluster pseudo-label while relaxing clusters to not necessarily be equi-partitioned. Empirically, we found that using \(n=250\) works reasonably well across datasets. To put it in context, this corresponds to \(K=4800\) for ImageNet, and \(K=200\) for CIFAR datasets.

**Multi-head Clustering** is another way to ensure robustness of our cluster pseudo-labels. To account for the stochastic nature of K-means, we can employ multi-head clustering to get different cluster assignments based on each head, at negligible cost. Subsequently, we can average the cluster pseudo-labels across the different heads. In practice, we find that for large datasets such as ImageNet, cluster assignments vary slightly between heads, so it is useful to use dual heads, while for smaller datasets, a single head is sufficient.

**Memory Analysis.** ProtoCon is particularly useful due to its ability to leverage the entire prediction history in an epoch to approximate class density over neighbourhoods (represented by cluster pseudo-labels) with low memory cost. In total, it requires \(\mathcal{O}(4N+K\times C)\) memory: \(4N\) to store hard pseudo-labels, reliability, cluster assignments, and similarity scores; and \(K\times C\) to store the cluster pseudo-labels. In contrast, if we were to employ a naive offline refinement approach, this would require \(\mathcal{O}(N\times d)\) to store the image embeddings for an epoch. For the ImageNet dataset, this translates to 9.6M memory units for ProtoCon as opposed to 153.6M for the naive approach6, a 16\(\times\) reduction in memory, besides eliminating the additional time needed to perform nearest-neighbour retrieval.

Footnote 6: considering \(d=128\).
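As a recap of the instance-consistency loss in Eq. (12), here is a minimal PyTorch-style sketch; the function name, the `target_T_scale` parameter (set below 1 so the weak-view target is sharper than the strong-view source), and the stop-gradient on the target are our own assumptions.

```python
import torch
import torch.nn.functional as F

def instance_consistency_loss(q_w, q_s, eta, T=0.1, target_T_scale=0.5):
    """Sketch of Eq. (12): cross-entropy between sharpened softmax distributions
    over the projection dimensions of the weak (target) and strong (source)
    views, applied only to samples below the confidence threshold (eta == 0)."""
    # Sharper target (smaller temperature); detaching is our assumption.
    target = F.softmax(q_w / (target_T_scale * T), dim=-1).detach()
    log_src = F.log_softmax(q_s / T, dim=-1)
    ce = -(target * log_src).sum(dim=-1)          # per-sample cross-entropy
    return ((1.0 - eta) * ce).mean()

# Toy usage with hypothetical shapes (batch of 16, projection dim 64).
q_w = F.normalize(torch.randn(16, 64), dim=-1)
q_s = F.normalize(torch.randn(16, 64), dim=-1)
eta = (torch.rand(16) > 0.5).float()          # 1 = confident sample (excluded here)
loss_c = instance_consistency_loss(q_w, q_s, eta)
```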
## 4 Experiments

We begin by validating ProtoCon's performance on multiple SSL benchmarks against state-of-the-art methods. Then, we analyse the main components of ProtoCon to verify their contribution towards the overall performance, and we perform ablations on important hyperparameters.

### Experimental Settings

**Datasets.** We evaluate ProtoCon on five SSL benchmarks. Following [1, 38, 43], we evaluate on **CIFAR-10(100)** [19] datasets, which comprise 50,000 images of 32x32 resolution across 10(100) classes; as well as the more challenging **Mini-ImageNet** dataset proposed in [33], having 100 classes with 600 images per class (84x84 each). We use the same train/test split as in [18] and create splits for 4 and 10 labeled images per class to test ProtoCon in the low-label regime. We also test ProtoCon's performance on the **DomainNet** [30] dataset, which has 345 classes from six visual domains: _Clipart_, _Infograph_, _Painting_, _Quickdraw_, _Real_, and _Sketch_. We evaluate on the _Clipart_ and _Sketch_ domains to verify our method's efficacy in different visual domains and on imbalanced datasets. Finally, we evaluate on the **ImageNet** [34] SSL protocol as in [2, 8, 9, 11]. In all our experiments, we focus on the low-label regime.

**Implementation Details.** For CIFAR-10(100), we follow previous work and use WideResNet-28-2(28-8) [45] as our encoder. We use a 2-layer projection MLP with an embedding dimension \(d=64\). The models are trained using SGD with a momentum of 0.9 and weight decay of 0.0005(0.001) using a batch size of 64 and \(\mu=7\). We set the threshold \(\tau=0.95\) and train our models for 1024 epochs for a fair comparison with the baselines. However, we note that our model needs substantially fewer epochs to converge (see Fig. 3-b and c). We use a learning rate of 0.03 with a cosine decay schedule. We use random horizontal flips for weak augmentations and RandAugment [13] for strong ones. For the larger datasets, ImageNet and DomainNet, we use a ResNet-50 encoder and \(d=128\), \(\mu=5\), and \(\tau=0.7\), and follow the same hyperparameters as in [38] except that we use SimCLR [11] augmentations for the strong view. For ProtoCon-specific hyperparameters, we consistently use the same parameters across all experiments: we set \(n\) to 250 (corresponding to \(K\)=200 for CIFARs and Mini-ImageNet, and 4800 for ImageNet), dual learning rate \(\lambda=20\), mixing ratio \(\alpha=0.8\), and temperature \(T=0.1\).

**Baselines.** Since our method bears the most resemblance with CoMatch [22], we compare against it in all our experiments. CoMatch uses graph contrastive learning to refine pseudo-labels but uses a memory bank to store the last n samples' embeddings to build the graph. Additionally, we compare with the state-of-the-art SSL method DebiasPL [42], which proposes a pseudo-labeling debiasing plug-in that works with various SSL methods in addition to an adaptive margin loss to account for inter-class confounding. Finally, we also compare with the seminal method FixMatch and its variant with Distribution Alignment (DA). We follow Oliver et al.'s
[29] recommendations to ensure a fair comparison with the baselines, where we implement/adapt all the baselines using the same codebase to ensure using the same settings across all experiments. As for ImageNet experiments, we also compare with representation learning baselines such as SwAV [8], DINO [9], and SimCLR [11], where we report the results directly from the respective papers. We also include results for ProtoCon and DebiasPL with additional pretraining (using MOCO [16]) and the Exponential Moving Average Normalisation method proposed by [7] to match the settings used in [7, 42]. ### Results and Analysis **Results.** Similar to prior work, we report the results on the test sets of respective datasets by averaging the results of the last 10 epochs of training. For CIFAR and Mini-ImageNet, \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{**CIFAR-10**} & \multicolumn{3}{c}{**CIFAR-100**} & \multicolumn{3}{c}{**Mini-ImageNet**} \\ \cline{2-7} Total labeled samples & 20 & 40 & 80 & 200 & 400 & 800 & 400 & 1000 \\ \hline FixMatch [38] & 82.32\(\pm\)9.77 & 86.29\(\pm\)4.50 & 92.06\(\pm\)0.88 & 35.37\(\pm\)5.68 & 51.15\(\pm\)1.75 & 61.32\(\pm\)0.92 & 17.18\(\pm\)6.22 & 39.03\(\pm\)3.99 \\ FixMatch + DA [3, 38] & 83.84\(\pm\)8.35 & 86.98\(\pm\)3.40 & 92.29\(\pm\)0.86 & 41.28\(\pm\)6.03 & 52.65\(\pm\)2.32 & 62.12\(\pm\)0.79 & 19.40\(\pm\)5.87 & 40.92\(\pm\)4.71 \\ CoMatch [22] & 87.37\(\pm\)8.47 & 93.09\(\pm\)1.39 & 93.97\(\pm\)0.62 & 47.92\(\pm\)4.83 & 58.17\(\pm\)3.52 & **66.15\(\pm\)0.71** & 21.29\(\pm\)6.19 & 40.98\(\pm\)3.52 \\ SimMatch [48] & 89.31\(\pm\)7.73 & 94.51\(\pm\)2.56 & 94.89\(\pm\)1.32 & 46.01\(\pm\)6.12 & 57.95\(\pm\)2.37 & 65.50\(\pm\)0.93 & 25.75\(\pm\)5.90 & 39.76\(\pm\)3.77 \\ FixMatch + DB [42] & 89.02\(\pm\)6.37 & 94.60\(\pm\)1.31 & 95.60\(\pm\)0.12 & 46.36\(\pm\)5.05 & 57.88\(\pm\)3.34 & 64.84\(\pm\)0.85 & 27.37\(\pm\)7.01 & 41.05\(\pm\)3.34 \\ \hline ProtoCon & **90.51\(\pm\)4.02** & **95.20\(\pm\)1.8** & **96.11\(\pm\)0.20** & **48.25\(\pm\)4.87** & **59.53\(\pm\)2.94** & **65.91\(\pm\)0.57** & **29.15\(\pm\)6.98** & **45.83\(\pm\)4.15** \\ _delta against best baseline_ & +1.20 & +0.60 & +0.51 & +0.33 & +1.36 & -0.24 & +1.78 & +4.78 \\ \hline \hline \end{tabular} \end{table} Table 1: CIFAR and Mini-ImageNet accuracy for different amounts of labeled samples averaged over 5 different splits. All results are produced using the same codebase and same splits. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{**Clipart**} & \multicolumn{3}{c}{**Sketch**} \\ \cline{2-5} Total labeled samples & 690 & 1380 & 2760 & 690 & 1380 & 2760 \\ \hline FixMatch [38] & 30.21 & 41.21 & 51.29 & 12.73 & 21.65 & 33.07 \\ CoMatch [22] & 35.49 & 48.62 & 54.98 & 24.30 & 33.71 & 41.02 \\ FixMatch + DB [42] & 38.97 & 51.44 & 58.31 & 25.34 & 35.58 & 43.98 \\ \hline ProtoCon & **43.72** & **55.66** & **61.32** & **33.94** & **43.51** & **50.88** \\ _delta_ & +4.75 & +4.22 & +3.01 & +8.60 & +7.93 & +6.90 \\ \hline \hline \end{tabular} \end{table} Table 2: DomainNet accuracy for 2, 4, and 8 labels per class. we report the average and standard deviation over 5 different labeled splits, whereas we report for only 1 split on larger datasets (ImageNet and DomainNet). Different from most previous work, we only focus on the very low-label regime (2, 4, and 8 samples per class, and 0.2% for ImageNet). As shown in Tab. 1 - 3, we observe that ProtoCon outperforms baselines in almost all the cases showing a clear advantage in the low-label regime. 
It also exhibits less variance across the different splits (and the different runs within each split). These results suggest that besides achieving high accuracy, ProtoCon shows robustness and consistency across splits in low-data regime. Notably, our method performs particularly well on DomainNet. Unlike ImageNet and CIFARs, DomainNet is an imbalanced dataset, and prior work [39] shows that it suffers from high level of label noise. This shows that our method is also more robust to noisy labels. This can be explained in context of our co-training approach: using the prototypical neighbourhood label to smooth the softmax label is an effective way to minimise the effect of label noise. In line with previous findings [23], since in prototypical learning, all the instances of a given class are used to calculate a class prototype which is then used as a prediction target, it results in representations which are more robust to noisy labels. Finally, on ImageNet (Tab. 3), we improve upon the closest baseline with gains of 2.2% in the challenging 0.2% setting; whereas we slightly fall behind PAWS [2] in the 10% regime, again confirming our method's usefulness in the label-scarce scenario. **How does refinement help?** First, we would like to investigate the role of pseudo-labeling refinement in improving SSL performance. Intuitively, since we perform refinement by combining pseudo-labels from two different sources (the classifier predictions in probability space and the cluster labels in the prototypical space), we expect that there will be disagreements between the two and hence considering both the views is the key towards the improved performance. To validate such intuition, we capture a fine-grained view of the training dynamics throughout the first 300 epochs of CIFAR-10 with 40 labeled instances scenario, including: samples' pseudo-labels before and after refinement as well as their cluster pseudo-labels in each epoch. This enables us to capture disagreements between the two pseudo-label sources up to the individual sample level. In Fig. 3-a, we display the average disagreement between the two sources over the initial phase of the training overlaid with the classifier, cluster and refined pseudo-label accuracy. We observe that initially, the disagreement (dashed black line) is high which corresponds to a larger gap between the accuracies of both heads. As the training proceeds, we observe that disagreement decreases leading to a respective decrease in the gap. Additionally, we witness that the refined accuracy curve (green) is almost always above the individual accuracies (orange and blue) which proves that, indeed, the synergy between the two sources improves the performance. On the other hand, to get a qualitative understanding of where each of the pseudo-labeling sources helps, we dig deeper to classes and individual samples level where we investigate which classes/samples are the most disagreed-upon (on average) throughout the training. In Fig. 4, we display the most prototypical examples of a given class (middle) as identified by the prototypical scores obtained in the embedding space. We also display the examples which on average are always correctly classified in the prototypical space (right) opposed to those in the classifier space (left). 
As expected, we find that samples which look more prototypical, albeit with less distinctive features (blurry), are the ones almost always correctly classified with the prototypical head; whereas, samples which have more distinctive features but are less prototypical are those correctly classified by the discriminative classifier head. This again confirms our intuitions about how co-training based on both sources helps to refine the pseudo-label. Finally, we ask: is it beneficial to use the entire dataset pseudo-label history to perform refinement or is it sufficient to just use a few samples? To answer this question, we use only a subset of the samples in each cluster (sampled uniformly at random) to calculate cluster pseudo-labels in Eqn. 8. For CIFAR-10 with 20 and 40 labels, we find that this leads to about 1-2% (4-5%) average drop in performance, if we use half (quarter) of the samples in each cluster. This reiterates the usefulness of our approach to leverage the history of all samples (at a lower cost) opposed to a limited history of samples. \begin{table} \begin{tabular}{l c c c c} Method & Pre. Epochs & 0.2\% & 1\% & 10\% \\ \hline Supervised & ✗ & 300 & – & 25.4 & 56.4 \\ \hline \multicolumn{5}{l}{_Representation learning methods:_} \\ SwAV [8] & ✓ & 800 & – & 53.9 & 70.2 \\ SimCLRv2++ [12] & ✓ & 1200 & – & 60.0 & 70.5 \\ DINO [9] & ✓ & 300 & – & 55.1 & 67.8 \\ PAWS++ [2] & ✓ & 300 & – & 66.5 & **75.5** \\ \hline \multicolumn{5}{l}{_PL \& consistency methods:_} \\ \multicolumn{5}{l}{MPL [31]} & ✗ & 800 & – & \(65.3^{\dagger}\) & 73.9 \\ CoMatch [22] & ✗ & 400 & \(44.3^{\dagger}\) & 66.0 & 73.6 \\ FixMatch [38] & ✗ & 300 & – & 51.2 & 71.5 \\ FMatch + DA [3, 38] & ✗ & 300 & \(41.1^{\dagger}\) & 53.4 & \(71.5^{\dagger}\) \\ FMatch + EMAN [7] & ✓ & 850 & 43.6 & 60.9 & 72.6 \\ FMatch + DB [42] & ✗ & 300 & \(45.8^{\dagger}\) & 63.0\({}^{\dagger}\) & \(71.7^{\dagger}\) \\ FMatch + DB + EMAN [42] & ✓ & 850 & 47.9 & 63.1 & \(72.8^{\dagger}\) \\ \hline \multicolumn{5}{l}{ProtCoN} & ✗ & 300 & 47.8 & 65.6 & 73.1 \\ ProtoCon + EMAN [7] & ✓ & 850 & **50.1** & **67.2** & 73.5 \\ \multicolumn{5}{l}{_delta against best baseline_} \\ \hline \multicolumn{5}{l}{\(+\)2.2 \(+\)0.7 \(-\)2.0} \\ \end{tabular} \end{table} Table 3: SSL results on ImageNet with different percentage of labels. † denotes results produced by our codebase. Other results are reported as appearing in the cited work. **Role of self-supervised loss.** Here, we are interested to tear apart our choice of self-supervised loss and its role towards the performance. To recap, our intuition behind using that loss is to boost the learning signal in the initial phase of the training when the model is still not confident enough to retain samples for pseudo-labeling. As we see in Fig. 3-b and c. there is a significant speed up of our model's convergence compared to baseline methods with a clear boost in the initial epochs. Additionally, to isolate the potentially confounding effect of our other ingredients, we display in Fig. 3-d the performance of our method with and without the self-supervised loss which leads to a similar conclusion. Finally, to validate our hypothesis that instance-consistency loss is more useful than instance-discrimination, we run a version of ProtoCon with an instance-discrimination loss akin to that of SimCLR. This version completely collapsed and did not converge at all. 
We attribute this to: 1) as verified by SimCLR authors, such methods work best with large batch sizes to ensure enough negative examples are accounted for; and 2) these methods treat each image as its own class and contrast it against every other image and hence are in direct contradiction with the image classification task; whereas instance-consistency losses only ensure that the representations learnt are invariant to common factors of variations such as: color distortions, orientation, _etc._ and are hence more suitable for semi-supervised image classification tasks. **Ablations.** Finally, we present an ablation study about the important hyperparameters of ProtoCon. Specifically, we find that \(n\) (minimum samples in each cluster) and \(\alpha\) (mixing ratio between classifier pseudo-label and cluster pseudo-label) are particularly important. Additionally, we find that the projection dimension needs to be sufficiently large for larger datasets (we use \(d=64\) for CIFARs and 128 for all others). In Tab. 4, we present ablation results on CIFAR-10 with 80 labeled instances. ## 5 Conclusion We introduced ProtoCon, a novel SSL learning approach targeted at the low-label regime. Our approach combines co-training, clustering and prototypical learning to improve pseudo-labels accuracy. We demonstrate that our method leads to significant gains on multiple SSL benchmarks and better convergence properties. We hope that our work helps to commodify deep learning in domains where human annotations are expensive to obtain. **Acknowledgement.** This work was partly supported by DARPA's Learning with Less Labeling (LwLL) program under agreement FA8750-19-2-0501. I. Nassar is supported by the Australian Government Research Training Program (RTP) Scholarship, and M. Hayat is supported by the ARC DECRA Fellowship DE200101100. \begin{table} \begin{tabular}{l c c c c} \hline \hline Losses & -\(\mathcal{L}_{\mathcal{C}}\) & -\(\mathcal{L}_{\mathcal{D}}\) & -(\(\mathcal{L}_{\mathcal{C}}\),\(\mathcal{L}_{\mathcal{D}}\)) & All \\ & 95.3 & 94.8 & 92.3 & 96.1 \\ \hline \multirow{2}{*}{Cluster size (\(n\))} & 50 & **250** & 500 & 1000 \\ & 95.7 & 96.1 & 94.3 & 92.1 \\ \hline \multirow{2}{*}{Refinement Ratio (\(\alpha\))} & 0.5 & 0.7 & **0.8** & 0.9 \\ & 86.7 & 94.5 & 96.1 & 95.2 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation results. -\(\mathcal{L}_{\star}\) denotes that the respective loss is not applied, and **green** marks the best option. Results are average accuracy over 5 runs for CIFAR-10 (80). Figure 4: The middle panel shows the most prototypical images of CIFAR-10 classes as identified by our model. Left (resp. right) panels show images which have more accurate classifier (resp. cluster) pseudo-labels. Cluster labels are more accurate for prototypical images while classifier labels are more accurate for images with distinctive features (_e.g_. truck wheels) even if not so prototypical. Such diversity of views is key to the success of our co-training method. Figure 3: **Analysis Plots. (a)**: Average disagreement between cluster and classifier pseudo-labels versus ground truth accuracy of the different pseudo-labels. The accuracy gap between refined pseudo-labels (green) and the cluster’s and classifier’s (blue and orange) decreases with disagreement rate (dashed black) showing that refinement indeed helps. **(b), (c):** Convergence plots on CIFAR10/100 show that ProtoCon converges faster due to the additional self-supervised training signal. **(d):** ProtoCon w and w/out consistency loss.
2302.04303
Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation
Given the prevalence of 3D medical imaging technologies such as MRI and CT that are widely used in diagnosing and treating diverse diseases, 3D segmentation is one of the fundamental tasks of medical image analysis. Recently, Transformer-based models have started to achieve state-of-the-art performances across many vision tasks, through pre-training on large-scale natural image benchmark datasets. While works on medical image analysis have also begun to explore Transformer-based models, there is currently no optimal strategy to effectively leverage pre-trained Transformers, primarily due to the difference in dimensionality between 2D natural images and 3D medical images. Existing solutions either split 3D images into 2D slices and predict each slice independently, thereby losing crucial depth-wise information, or modify the Transformer architecture to support 3D inputs without leveraging pre-trained weights. In this work, we use a simple yet effective weight inflation strategy to adapt pre-trained Transformers from 2D to 3D, retaining the benefit of both transfer learning and depth information. We further investigate the effectiveness of transfer from different pre-training sources and objectives. Our approach achieves state-of-the-art performances across a broad range of 3D medical image datasets, and can become a standard strategy easily utilized by all work on Transformer-based models for 3D medical images, to maximize performance.
Yuhui Zhang, Shih-Cheng Huang, Zhengping Zhou, Matthew P. Lungren, Serena Yeung
2023-02-08T19:38:13Z
http://arxiv.org/abs/2302.04303v1
Adapting Pre-trained Vision Transformers from 2D to 3D through Weight Inflation Improves Medical Image Segmentation ###### Abstract Given the prevalence of 3D medical imaging technologies such as MRI and CT that are widely used in diagnosing and treating diverse diseases, 3D segmentation is one of the fundamental tasks of medical image analysis. Recently, Transformer-based models have started to achieve state-of-the-art performances across many vision tasks, through pre-training on large-scale natural image benchmark datasets. While works on medical image analysis have also begun to explore Transformer-based models, there is currently no optimal strategy to effectively leverage pre-trained Transformers, primarily due to the difference in dimensionality between 2D natural images and 3D medical images. Existing solutions either split 3D images into 2D slices and predict each slice independently, thereby losing crucial depth-wise information, or modify the Transformer architecture to support 3D inputs without leveraging pre-trained weights. In this work, we use a simple yet effective weight inflation strategy to adapt pre-trained Transformers from 2D to 3D, retaining the benefit of both transfer learning and depth information. We further investigate the effectiveness of transfer from different pre-training sources and objectives. Our approach achieves state-of-the-art performances across a broad range of 3D medical image datasets, and can become a standard strategy easily utilized by all work on Transformer-based models for 3D medical images, to maximize performance.1 Footnote 1: Codes are available at [https://github.com/yuhui-zh15/TransSeg](https://github.com/yuhui-zh15/TransSeg). Medical Image Segmentation, Transfer Learning, CT, MRI. ## 1 Introduction The increased utilization of medical imaging technologies in recent years is causing a dramatic increase in radiologists' daily workload (Smith-Bindman et al., 2008; Hendee et al., 2010), leading to longer delays in diagnosis and higher misdiagnosis rates (Alonso-Martinez et al., 2010; Hendriksen et al., 2017). Computer vision techniques such as 3D segmentation have the potential to alleviate the burden for radiologists by automating or assisting current image interpretation workflow, ultimately improving clinical care and patient outcome (Esteva et al., 2021; Huang et al., 2020). Recently, the computer vision community has witnessed a paradigm shift from using CNNs to self-attention based Trans former (Vaswani et al., 2017) architectures. By pre-training at scale using supervised or self-supervised learning, many vision Transformer models have achieved state-of-the-art performances when fine-tuned for various vision tasks, including image classification, object detection, and semantic segmentation (Caron et al., 2021; Liu et al., 2021; Bao et al., 2021; Carion et al., 2020; Dosovitskiy et al., 2020). However, while many recent works on medical image analysis have started to explore Transformer-based models, it is still unclear what is the optimal strategy to effectively leverage Transformers for 3D medical image segmentation, primarily due to the difference in dimensionality between 2D natural images and 3D medical images. A straightforward approach adopted by most existing works is to split 3D images into 2D slices along the depth axis and independently segment each of them (Chen et al., 2021; Liu et al., 2021; Huang et al., 2021). 
However, this solution compromises the depth information, which is crucial for identifying segmentation boundaries. On the contrary, another line of research modifies the Transformer architecture to support 3D inputs, but the modification makes the pre-trained weights not directly applicable (Xie et al., 2021; Hatamizadeh et al., 2021). In this work, we investigate strategies and best practices for adapting Transformers for 3D medical images. We first analyze the importance of _transfer learning_ and _depth information_, and find that both are critical for segmentation performance, especially transfer learning. The model initialized from pre-trained weights outperforms the same randomly initialized model by 11.18% on a multi-organ segmentation dataset, and the randomly initialized 3D model outperforms the randomly initialized 2D model by 6.07%. To retain these two advantages, we use a simple yet effective _weight inflation_ strategy to adapt pre-trained Transformers from 2D to 3D, which has been a standard approach in video understanding to transfer from models pre-trained on images since it was first proposed in I3D (Carreira and Zisserman, 2017).

Figure 1: _Approach overview._ Large-scale pre-trained Transformers are used as the encoder in the segmentation model for transfer learning, in which weights are adapted using the inflation strategy to support 3D inputs. Each 3D image is split into windows, which contain a small number of neighbor slices. Each window is fed into the segmentation model and the segmentation of the center slice is predicted. All the predicted slices are aggregated to form the final 3D prediction.

After careful ablations of different inflation and transfer settings, our best strategy achieves 1.95% improvements over baselines with only a 0.75% increase in computational cost, and outperforms all the state-of-the-art methods. We further evaluate our method on 11 additional datasets to understand the generalizability of our best practice. Experiments show that our approach consistently benefits from weight inflation and achieves many state-of-the-art performances. In summary, the major contribution of our work to the medical image segmentation community is that we show how a simple yet effective weight inflation strategy, which is not currently used by state-of-the-art medical image segmentation methods, can lead to substantial improvements in performance. Moreover, we also performed systematic ablations, which further provide insights into the best strategies to adapt pre-trained Transformers from 2D to 3D through weight inflation. Our method can become a standard strategy utilized by all future works on Transformer-based models for 3D medical images to maximize performance. Our best practice is summarized in Figure 1 and below:

* Find a vision Transformer model pre-trained on natural images via a combination of self-supervised and supervised learning.
* Adapt pre-trained weights in the embedding layer using the centering inflation strategy by transferring weights to the center-most slice and initializing all other weights to zero.
* Parse 3D input into windows of a few neighbor slices, and segment only the center slice. Aggregate 3D segmentation with predictions from all windows.

## 2 Related Works

**Vision Transformers.** Originally proposed for machine translation (Vaswani et al., 2017), Transformer-based architectures have started to flourish in computer vision in recent years.
ViT (Dosovitskiy et al., 2020) splits images into patches and creates patch embeddings as inputs to the Transformer model, which lays the foundation to use Transformer architectures for visual recognition. However, ViT requires more training data to surpass convolutional neural networks. More recent works explored different self-supervised pre-training strategies to reduce the requirement of large-scale labeled datasets. Caron et al. (Caron et al., 2021) proposed DINO, a self-distillation contrastive framework. Drawing inspiration from BERT in the NLP field, Bao et al. (Bao et al., 2021) proposed BEiT, which is pre-trained to predict masked image patches given context. These approaches achieve state-of-the-art performances in image classification and semantic segmentation. Several works have explored leveraging Transformers for video inputs by utilizing weights from pre-trained image models (Carreira and Zisserman, 2017). ViViT (Arnab et al., 2021) and Video Swin Transformer (Liu et al., 2021) explored inflation and centering strategies to initialize weights of video Transformers from pre-trained image Transformers. In this work, we systematically investigate the benefit of transfer learning from these Transformer models under significant domain shifts, and compare the effectiveness of transferring from different sources and objectives.

**Transformers for 3D Medical Image Segmentation.** Following successful adaptations of Transformers in vision, some works in medical image segmentation also achieved state-of-the-art results by utilizing Transformers. Earlier work such as TransUNet (Chen et al., 2021) or CoTr (Xie et al., 2021) still relied on convolutional layers for the Transformer model. More recent methods, such as SwinUNet (Cao et al., 2021) and UNETR (Hatamizadeh et al., 2021), use a purely Transformer-based encoder. To improve the information flow of the segmentation framework, different mechanisms such as context bridge (Huang et al., 2021) or interleaved layers (Gao et al., 2021) are designed. Based on their model input dimension, these prior works can be grouped into two categories: 3D approaches that develop 3D model architectures directly predicting 3D segmentation (Xie et al., 2021; Hatamizadeh et al., 2021); and 2D approaches that split 3D images into 2D slices and stack 2D predictions to form the 3D segmentation (Chen et al., 2021; Cao et al., 2021; Huang et al., 2021). 3D segmentation benefits from having more depth-wise contextual information, which is especially important for medical images where awareness of anatomical structures is crucial for identifying segmentation boundaries. However, due to the limited availability of pre-trained 3D Transformers, all prior 3D approaches randomly initialized their models and used no transfer learning. On the contrary, 2D approaches can easily utilize Transformer weights pre-trained on natural images, but lack the spatial context awareness of 3D models. In this work, we attempt to address the shortcomings of existing approaches by combining the benefit of transfer learning of 2D approaches and depth-wise information of 3D approaches.

## 3 Method

In this section, we first introduce semantic segmentation (Sec. 3.1) and the vision Transformer (Sec. 3.2), and then present strategies for adapting Transformers pre-trained on 2D images to 3D medical image inputs (Sec. 3.3).

### Semantic Segmentation

Semantic segmentation is the task of assigning a class label for each pixel (voxel for 3D) of an input image.
Most prior works use _encoder-decoder_ frameworks for semantic segmentation. The encoder progressively extracts higher-level features that capture the input image's global information. Then the decoder utilizes these features and progressively reconstructs fine-grained pixel/voxel-level predictions. Therefore, the quality of the features extracted by the encoder directly impacts the segmentation quality. CNNs have been the dominant choices of the encoder in semantic segmentation models for years (Long et al., 2015; Ronneberger et al., 2015). Recently, several studies have reported improvements in performance for both natural and medical image segmentation by using Transformers as encoders (Liu et al., 2021; Bao et al., 2021; Zheng et al., 2021; Chen et al., 2021; Cao et al., 2021). In this work, we compare different pre-trained Transformers as encoders and study the most effective way to perform transfer learning for 3D medical images. ### Vision Transformer Transformer was first introduced in (Vaswani et al., 2017) and adapted to visual inputs in (Dosovitskiy et al., 2020). Unlike CNNs, Transformer uses the self-attention mechanism to aggregate information from the input image, which can be viewed as more expressive convolutions and can capture long-range dependencies (Cordonnier et al., 2019). Empirical results show that vision Transformers achieve state-of-the-art performances across many vision tasks when pre-trained on large-scale natural image datasets (Dosovitskiy et al., 2020; Liu et al., 2021; Caron et al., 2021; Bao et al., 2021). **Patch Partitioning and Embedding Layer.** Vision Transformers first use a patch partitioning and embedding layer_ to convert a visual input into patch embeddings to reduce input dimensionality. Specifically, for a 2D image with \(H\times W\) pixels, the image is partitioned into multiple smaller patches of \(P\times P\) pixels (\(P\in\{16,32\}\) in practice). Similarly, 3D images of size \(H\times W\times D\) can be partitioned into patches of \(P\times P\times P\) voxels. A linear layer is then applied to each patch's flattened pixel/voxel values to generate input embeddings to the Transformer encoder. By using patch partitioning, the number of inputs \(T=\frac{H}{P}\times\frac{W}{P}\) (\(T=\frac{H}{P}\times\frac{W}{P}\times\frac{D}{P}\) for 3D) is much smaller than the number of pixels/voxels, which makes the computation tractable for subsequent Transformer encoder layers given the \(O(T^{2})\) computational cost. Notably, the input patch partitioning and embedding layer is equivalent to strided convolution, where the kernel size and stride size are both equal to \(P\times P\) (\(P\times P\times P\) for 3D). This is important as we introduce weight inflation strategies to adapt Transformers from 2D to 3D in Sec 3.3. Transformer Encoder.These patch embeddings \(H^{0}\in\mathbb{R}^{T\times D}\) are then provided as a sequence input into a _Transformer encoder_ that generates contextualized representations for each patch \(H^{L}\in\mathbb{R}^{T\times D}\). The encoder is constructed with a stack of \(L\) multi-head self-attention layers, which facilitates the aggregation of information from all the other input patches. After information exchange from every layer in a sequential manner \(H^{l}=\text{Layer}_{l}(H^{l-1})\in\mathbb{R}^{T\times D}\), the output vectors of the last layer \(H^{L}\) are used as contextualized representations for the input patches. 
We refer to (Rush, 2018; Vaswani et al., 2017) for a more detailed description of the Transformer model.

**Variations of Vision Transformers.** While many variations of vision Transformers exist, they all have similar architectures as described above. In this work, we compare four variations: DINO (Caron et al., 2021), BEiT (Bao et al., 2021), SwinT (Liu et al., 2021), and VideoSwinT (Liu et al., 2021). DINO and BEiT have the same architectures as the original vision Transformer (Dosovitskiy et al., 2020) and only differ in the self-supervised pre-training objectives. SwinT proposes shifted window attention and hierarchical feature size shrinking to introduce an inductive bias of locality. VideoSwinT is an adaptation of SwinT for video inputs. As each variation has many size configurations, we use all the base models with around 85M parameters for fair comparisons.

### Adapting Transformers from 2D to 3D through Weight Inflation

Many variations of Transformers that have been pre-trained on existing large-scale datasets are publicly available for transfer learning (Caron et al., 2021; Bao et al., 2021; Liu et al., 2021, 2021). However, direct utilization of these Transformers is non-trivial, due to the difference in the dimensionality between 2D natural images and 3D medical images. Most existing solutions split 3D images into 2D slices and predict each slice independently, thereby losing crucial depth-wise information (Chen et al., 2021; Cao et al., 2021). Other solutions modify the Transformer architecture to support 3D inputs, but the modification makes pre-trained weights not directly applicable (Xie et al., 2021; Hatamizadeh et al., 2021). We combine both advantages, inspired by recent works in video understanding. Videos are collections of 2D image frames organized along the temporal axis, similar to medical images where 2D slices are organized along the depth axis. I3D (Carreira and Zisserman, 2017) proposed to transfer CNNs pre-trained on 2D images to 3D video inputs by inflating the convolutional weights along the temporal axis. As the input patch partitioning and embedding layer in the Transformer is equivalent to a convolutional layer, we adopt the weight inflation strategy to initialize the 3D Transformers. We compare two inflation strategies: _average inflation_ and _centering inflation_. For average inflation, we copy the weights of the input layer \(K\) times in the depth axis and divide them by \(K\). This setting assumes input slices are similar within a certain range of depths, and the model treats all the input slices equally at the beginning. For centering inflation, we transfer pre-trained weights for the center-most slice and initialize all other weights to zero. For this setting, the model uses only information from the center slice at the beginning and progressively learns to contextualize information from all the neighbor slices. Both strategies keep the mean and variance of the input to the Transformer encoder, allowing more effective transfer learning. Another difference is in the input channel. Transformers pre-trained on colored natural images have three channels. Similarly, we reduce the channel to one by modifying the input layer weight, where we take the sum over the input channel axis. We further use average inflation in the input channel for medical images with more than one input channel, such as MRIs. These modifications still keep the mean and variance of the input unchanged for the Transformer encoder.
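To illustrate the two inflation strategies and the channel reduction described above, here is a minimal PyTorch-style sketch; the function name, the depth patch size of 5, and the use of `Conv2d`/`Conv3d` modules as stand-ins for the patch-embedding layer are our own illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

def inflate_patch_embed(weight_2d, depth_patch, mode="center"):
    """Sketch of inflating a 2D patch-embedding weight (out, in_ch, P, P) into a
    3D one (out, in_ch, depth_patch, P, P). 'center' places the 2D weights on
    the centre slice and zeros elsewhere; 'average' spreads weight/depth_patch
    across all slices. Both keep the output mean and variance when all input
    slices are identical."""
    out_ch, in_ch, P, _ = weight_2d.shape
    weight_3d = torch.zeros(out_ch, in_ch, depth_patch, P, P)
    if mode == "center":
        weight_3d[:, :, depth_patch // 2] = weight_2d
    else:  # average inflation
        weight_3d[:] = weight_2d.unsqueeze(2) / depth_patch
    return weight_3d

# Toy usage: inflate a ViT-style 16x16 patch embedding to cover 5 slices,
# also collapsing 3 RGB input channels to 1 by summing over the channel axis.
embed_2d = nn.Conv2d(3, 768, kernel_size=16, stride=16)
w = embed_2d.weight.data.sum(dim=1, keepdim=True)          # (768, 1, 16, 16)
w3d = inflate_patch_embed(w, depth_patch=5, mode="center")
embed_3d = nn.Conv3d(1, 768, kernel_size=(5, 16, 16), stride=(5, 16, 16))
embed_3d.weight.data.copy_(w3d)
```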
After adapting the weight in the input patch partition and embedding layer, the weights of Transformer encoder layers can be directly initialized from 2D pre-trained Transformers since input shapes are aligned.

## 4 Results

### Dataset

We use 12 publicly available 3D medical image datasets in this study: Beyond the Cranial Vault (BCV) multi-organ segmentation dataset (BCV), Automated Cardiac Diagnosis Challenge (ACDC) (ACD), and 10 datasets from Medical Segmentation Decathlon (MSD) (Antonelli et al., 2021). These datasets cover major anatomical structures of the body and different modalities such as CT and MRI. We use the same BCV training and validation set as existing works (Chen et al., 2021; Cao et al., 2021) to develop our method and fairly compare it with other methods. We use the remaining 11 datasets to test the generalizability of our best practice. Details of the datasets such as labels, modality, resolution, and data split are available in Table 1.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Data** & **Modality** & **Raw Reso.** & **Input Reso.** & **Clip Range** & **Label** & **Split** \\ \hline BCV & CT & 512\(\times\)512\(\times\)126 & 512\(\times\)512\(\times\)5 & [-175, 250] & 13 & 18/12/0 \\ ACDC & CT & 220\(\times\)247\(\times\)10 & 512\(\times\)512\(\times\)5 & [-175, 250] & 3 & 70/10/20 \\ Brain & MRI & 240\(\times\)240\(\times\)155 & 240\(\times\)240\(\times\)5 & N/A & 3 & 387/72/25 \\ Heart & MRI & 320\(\times\)320\(\times\)114 & 320\(\times\)320\(\times\)5 & N/A & 1 & 16/3/1 \\ Liver & CT & 512\(\times\)512\(\times\)447 & 512\(\times\)512\(\times\)5 & [-175, 250] & 2 & 104/20/7 \\ Hippocampus & MRI & 34\(\times\)49\(\times\)36 & 240\(\times\)240\(\times\)5 & N/A & 2 & 208/39/13 \\ Prostate & MRI & 320\(\times\)320\(\times\)19 & 320\(\times\)320\(\times\)5 & N/A & 2 & 25/5/2 \\ Lung & CT & 512\(\times\)512\(\times\)282 & 512\(\times\)512\(\times\)5 & [-175, 250] & 1 & 50/9/4 \\ Pancreas & CT & 512\(\times\)512\(\times\)94 & 512\(\times\)512\(\times\)5 & [-175, 250] & 2 & 224/42/15 \\ Vessel & CT & 512\(\times\)512\(\times\)69 & 512\(\times\)512\(\times\)5 & [-175, 250] & 2 & 242/45/16 \\ Spleen & CT & 512\(\times\)512\(\times\)88 & 512\(\times\)512\(\times\)5 & [-175, 250] & 1 & 32/6/3 \\ Colon & CT & 512\(\times\)512\(\times\)110 & 512\(\times\)512\(\times\)5 & [-175, 250] & 1 & 100/19/7 \\ \hline \hline \end{tabular} \end{table} Table 1: _Details of the twelve publicly available 3D medical image segmentation datasets we used, including BCV, ACDC, and 10 MSD datasets._ These datasets cover major anatomical structures of the body and different modalities. We use the BCV dataset to develop our methods and compare them with existing methods, and use the remaining eleven datasets to test the generalizability of our best practices.

### Main Result

Most existing works such as TransUNet (Chen et al., 2021) and SwinUNet (Cao et al., 2021) that use Transformers for 3D medical image segmentation are slice-based approaches, which split a 3D image into 2D slices and predict segmentation independently on each slice. This approach enables the direct utilization of pre-trained weights for transfer learning, but none of these works study the effect of transfer learning by comparing their methods with randomly initialized encoders. We find that _transfer learning significantly boosts the segmentation performance even under significant domain shifts_.
Using the pre-trained encoder leads to 11.18% improvements in DSC as compared to the same randomly initialized encoder (Table 2(a))2.

Footnote 2: SwinUNet\({}_{\downarrow}\) indicates an input resolution of 384, while other models use an input resolution of 512. TransUNet and SwinUNet results are from their original papers, while other results are from our experiments.

Recent works such as CoTr (Xie et al., 2021) or UNETR (Hatamizadeh et al., 2021) modify the Transformer architecture to support 3D inputs to leverage depth information. However, they all use randomly initialized 3D Transformers as encoders, which sacrifices the clear benefit of transfer learning as discussed above. Due to the different amounts of training data and numbers of labels reported in their papers3, we reproduced UNETR using its officially released code for a fair comparison. From Table 2, we can see UNETR underperforms TransUNet or SwinUNet, but it achieves 6.07% improvements in DSC compared to the randomly initialized slice-based model, demonstrating _the importance of depth information_.

\begin{table} \end{table} Table 2: _Segmentation results on the BCV dataset. (a)_ Our approach achieves state-of-the-art performances by combining the advantages of transfer learning (T) and depth information (D). DSC is the Dice coefficient averaged on the 8 major organs. _(b)_ Transfer effectiveness from models pre-trained with different sources and objectives. NI, NV, and MI are natural images, natural videos, and medical images. SL and SSL are supervised learning and self-supervised learning.

To combine the advantages of _transfer learning_ and _depth information_, we use the weight inflation strategy introduced in Sec. 3.3 to initialize 3D Transformers. We observe consistent improvements with this initialization over random, but we find that the weight inflation depth should not be too large (Sec. 4.4). Our best model uses a window-based approach. Specifically, we split 3D images into small windows along the depth axis, where each window consists of a few (e.g., 5) neighbor slices. We aggregate all the window predictions to form the final prediction. _By incorporating shallow depth information, our method achieves 1.95% improvements in DSC compared to slice-based methods_. More importantly, _such improvement is at only 0.75% increased computational cost_ (213.3 GFLOPS before adaptation vs. 215.0 after). We also evaluate our method's stability by running our model and baselines on the BCV dataset three times with three random seeds (1234, 5678, 910). Our model (i.e., Ours in Table 2(a)) achieves 86.65 \(\pm\) 0.35 (87.13, 86.52, 86.29 for three runs) DSC, while the model without weight inflation (i.e., Ours w/o D in Table 2(a)) only achieves 85.14 \(\pm\) 0.20 (85.18, 84.88, 85.36 for three runs) DSC. The results indicate _a 98% level of significance_ using the paired t-test. To better evaluate the improvements of our method, we extend Table 2 and show the DSC results for each of the 8 organs in Table 3. We find that the most significant improvements from weight inflation are Gallbladder, Pancreas, and Stomach segmentation. We compute different statistics of each organ to understand the reason, including the number of voxels, the number of occurring slices, and the variation between slices (measured by the average DSC between labels of neighbor slices).
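The window-based inference described above can be sketched as follows; the boundary padding scheme, the 14-class output, and the dummy model are illustrative assumptions rather than the paper's implementation.

```python
import torch

def segment_volume(model, volume, window=5):
    """Sketch of window-based 3D inference: for every depth index, feed a window
    of `window` neighbouring slices (replicated at the volume boundaries) and
    keep only the predicted segmentation of the centre slice, then stack the
    per-slice predictions into the full 3D segmentation."""
    D = volume.shape[0]                       # volume: (D, H, W)
    half = window // 2
    padded = torch.cat([volume[:1].repeat(half, 1, 1), volume,
                        volume[-1:].repeat(half, 1, 1)], dim=0)
    slices = []
    for d in range(D):
        win = padded[d:d + window].unsqueeze(0)    # (1, window, H, W)
        pred = model(win)                          # (1, C, H, W) centre-slice logits
        slices.append(pred.argmax(dim=1)[0])       # (H, W) label map
    return torch.stack(slices, dim=0)              # (D, H, W)

# Toy usage with a hypothetical model that outputs 14 classes for the centre slice.
dummy_model = lambda x: torch.randn(x.shape[0], 14, *x.shape[-2:])
seg = segment_volume(dummy_model, torch.randn(30, 64, 64))
print(seg.shape)   # torch.Size([30, 64, 64])
```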
We find that _the larger the variation between neighbor slices, the more improvements from incorporating depth information through weight inflation_. This can be interpreted as _inter-slice prediction consistencies are significantly improved with depth information_ based on the qualitative result shown in Figure 2.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Method** & **Aor** & **Gal** & **KidL** & **KidR** & **Liv** & **Pan** & **Spl** & **Sto** \\ \hline TransUNett (2021) & 90.68 & 71.99 & 86.04 & 83.71 & 95.54 & 73.96 & 88.80 & 84.20 \\ SwinUNet\({}_{\downarrow}\) (2021) & 87.07 & 70.53 & 84.64 & 82.87 & 94.72 & 63.73 & 90.14 & 75.29 \\ \hline UNETR (2021) & 86.30 & 63.79 & 84.94 & 83.93 & 95.94 & 58.01 & 88.95 & 78.76 \\ \hline Ours w/o T\&D & 82.71 & 61.34 & 80.73 & 76.25 & 94.37 & 46.52 & 82.62 & 67.46 \\ Ours w/o T & 81.26 & 63.78 & 80.90 & 75.41 & 94.82 & 53.67 & 82.15 & 65.64 \\ Ours w/o D & 90.22 & 67.51 & 93.84 & 91.40 & 96.27 & 66.62 & 94.22 & 81.33 \\ Ours & 90.29 & 75.20 & 93.70 & 91.40 & 96.17 & 71.82 & 93.81 & 84.62 \\ \hline \# Voxels & 441936 & 118445 & 887310 & 879180 & 9762041 & 504539 & 1386739 & 2641170 \\ \# Slices & 827 & 131 & 362 & 350 & 631 & 316 & 332 & 421 \\ Slice Variation & 90.33 & 69.84 & 82.80 & 82.57 & 88.93 & 75.91 & 83.37 & 84.94 \\ \hline \hline \end{tabular} \end{table} Table 3: _Fine-grained segmentation results of different organs on the BCV dataset._ We report performances on Aorta, Gallbladder, Kidney (Left), Kidney (Right), Liver, Pancreas, Spleen, and Stomach, respectively. The most significant improvements of our method are Gallbladder, Pancreas, and Stomach segmentation, which can be interpreted as the larger the variation between neighbor slices, the more improvement from incorporating depth information through weight inflation.

### Pre-training Source and Objective

Since pre-training sources may significantly impact the transfer performance, we compare the transfer effectiveness of various vision Transformers pre-trained with different sources (natural images, medical images, natural videos) and objectives (supervised learning, self-supervised learning, combination of both) (Table 2(b)). To fairly compare models, we carefully select same-size models with similar computations. SwinT (Liu et al., 2021) is pre-trained on the natural image dataset ImageNet-22K (Deng et al., 2009) via supervised learning. DINO (Caron et al., 2021) is pre-trained on ImageNet-1K via self-supervised contrastive learning.

Figure 2: _Transfer learning reduces noise in local predictions, and depth information further improves prediction consistency across slices._ We show five consecutive input images, ground-truth labels, predictions from _Ours_, _Ours w/o D_, _Ours w/o T&D_.

BEiT-SSL (Bao et al., 2021) is pre-trained on ImageNet-22K via self-supervised masked image modeling, and BEiT is further fine-tuned on the same dataset via supervised learning. VideoSwinT (Liu et al., 2021) is pre-trained on the natural video dataset Kinetics-600 (Carreira et al., 2018). We also pre-trained a CT-DINO on the medical image dataset RSNA-CT (Colak et al., 2021) using the same model and objective as DINO. Interestingly, we find that _models pre-trained on medical images perform worse than models pre-trained on natural images if both models are pre-trained using self-supervised learning_.
This might be attributed to the high inter-class similarity of medical images due to the nature of human anatomy, which limits the ability of self-supervised learning to learn meaningful representations. Furthermore, medical image datasets are much smaller than natural image datasets. Designing effective self-supervised learning methods for medical images will be meaningful future work. Moreover, we find that _additional supervision from image labels or video labels further improved the transfer performance_, as models pre-trained via supervised learning generally achieved better performances than models pre-trained via self-supervised learning. Since our goal is to explore the best strategy to leverage existing large-scale pre-trained 2D vision Transformers given their strong adaptability to various tasks and public availability, we did not perform additional pre-training. Thoroughly exploring pre-training on different sources with different objectives and data amounts is a meaningful future direction but requires extensive computational resources and costs. ### Best Practice of Weight Inflation We conduct ablations on different inflation settings, including prediction target, weight initialization, stride, and the number of input slices (Table 4)4. We find that _initializing the model with the centering inflation strategy and only predicting the center slice's segmentation leads to the best performance_. This setting is similar to a residual connection (He et al., 2016), where the model gradually learns to utilize additional depth information, and it should not be worse than the model without depth information. Moreover, we find that _including a few neighbor slices is enough_, because neighbor slices are more similar to the center slice than non-neighbor slices are and provide more information for the model to predict accurate segmentation. Including too many slices may lead to overfitting. This finding also applies to VideoSwinT, where we observe 81.10 and 81.83 DSC for 9 and 5 slices, respectively. \begin{table} \end{table} Table 4: _Best practice of weight inflation on the BCV dataset._ Initializing the model with the centering inflation strategy and only predicting the center slice’s segmentation from a small number of neighbor slices leads to the best performance. ### Generalization of Best Practice To verify the generalizability of our best practice -- adapting pre-trained vision Transformers from 2D to 3D using the strategies in Table 4 -- we train and evaluate our method on 11 additional 3D medical image datasets of different anatomical regions and imaging modalities. From Table 5, we observe that _weight-inflated vision Transformers (Ours) achieve consistently better performances on 10 of 11 datasets compared to the original Transformers (Ours w/o D)_, which _clearly demonstrates the effectiveness of our proposed method_. Also, these results are obtained with just a single set of hyperparameters tuned on the BCV dataset, _showing the robustness of our method_. Since our model uses all the hyperparameters tuned on the BCV dataset, continuing to tune hyperparameters on each dataset should lead to further improvements. For example, since the hyperparameters of our method are tuned on a CT dataset, the optimal hyperparameters may be very different for MRI data; this is likely why MRI datasets such as Bra, Hea, Hip, and Pro show smaller gains than the CT datasets.
However, since the main purpose of our paper is to show the importance of weight inflation, not to achieve different state-of-the-arts, we leave the relation between data characteristics and optimal hyperparameters to future work. ## 5 Conclusion In this work, we investigated adapting pre-trained Transformers for 3D medical image segmentation via simple yet effective weight inflation strategies. Our approach achieved consistent improvements on 12 datasets with only 0.75% increased computational cost, which can become a standard strategy easily utilized by all work on Transformer-based models for 3D medical images, to maximize performance. ## Acknowledgments We thank all the reviewers for their constructive feedback. This work is partially supported by the Stanford Center for Artificial Intelligence in Medicine & Imaging (AIMI). \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline **Method** & **ACDC** & **Bra** & **Hea** & **Liv** & **Hip** & **Pro** & **Lun** & **Pan** & **Ves** & **Spl** & **Col** \\ \hline SwinUNet\({}^{+}\) (2021) & 90.00 & - & - & - & - & - & - & - & - & - & - \\ UNETR\({}^{+}\) (2021) & - & 71.1 & - & - & - & - & - & - & - & 96.4 & - \\ nnUNet\({}^{*}\) (2021) & 91.79 & 74.11 & 93.28 & 79.71 & 88.91 & 75.37 & 72.11 & 67.45 & 68.37 & 96.38 & 45.53 \\ \hline Ours w/o D & 92.62 & 79.30 & 91.22 & 87.24 & 86.24 & 74.29 & 62.01 & 58.60 & 67.95 & 95.12 & 46.28 \\ Ours & 92.69 & 80.13 & 90.78 & 87.67 & 86.83 & 75.20 & 70.20 & 59.94 & 70.42 & 95.56 & 51.70 \\ \hline \hline \end{tabular} \end{table} Table 5: _Generalizability of best practice._ To verify the generalization of our best practice that adapts Transformers from 2D to 3D, we compare our adapted Transformers with original Transformers on 11 additional datasets and observe improvements over 10 datasets. DSC excluding background on the test set is reported. We also include other baseline performances for reference, but numbers are not fully comparable due to no standardized data split available for these datasets (+ indicates same data split ratio but not same split; * indicates different data split ratios).
2308.11033
Constructing cost-effective infrastructure networks
The need for reliable and low-cost infrastructure is crucial in today's world. However, achieving both at the same time is often challenging. Traditionally, infrastructure networks are designed with a radial topology lacking redundancy, which makes them vulnerable to disruptions. As a result, network topologies have evolved towards a ring topology with only one redundant edge and, from there, to more complex mesh networks. However, we prove that large rings are unreliable. Our research shows that a sparse mesh network with a small number of redundant edges that follow some design rules can significantly improve reliability while remaining cost-effective. Moreover, we have identified key areas where adding redundant edges can impact network reliability the most by using the SAIDI index, which measures the expected number of consumers disconnected from the source node. These findings offer network planners a valuable tool for quickly identifying and addressing reliability issues without the need for complex simulations. Properly planned sparse mesh networks can thus provide a reliable and a cost-effective solution to modern infrastructure challenges.
Rotem Brand, Reuven Cohen, Baruch Barzel, Simi Haber
2023-07-30T20:34:03Z
http://arxiv.org/abs/2308.11033v1
# Constructing cost-effective infrastructure networks ###### Abstract The need for reliable and low-cost infrastructure is crucial in today's world. However, achieving both at the same time is often challenging. Traditionally, infrastructure networks are designed with a radial topology lacking redundancy, which makes them vulnerable to disruptions. As a result, network topologies have evolved towards a ring topology with only one redundant edge and, from there, to more complex mesh networks. However, we prove that large rings are unreliable. Our research shows that a sparse mesh network with a small number of redundant edges that follow some design rules can significantly improve reliability while remaining cost-effective. Moreover, we have identified key areas where adding redundant edges can impact network reliability the most by using the SAIDI index, which measures the expected number of consumers disconnected from the source node. These findings offer network planners a valuable tool for quickly identifying and addressing reliability issues without the need for complex simulations. Properly planned sparse mesh networks can thus provide a reliable and a cost-effective solution to modern infrastructure challenges. **Keywords:** Network science, Electrical grid, Reliable networks *Corresponding author(s). E-mail(s): [email protected]; [email protected]; [email protected]; [email protected]; ## 1 Model Ensuring the reliability of infrastructure networks is of utmost importance in today's world. The conventional approach to designing such networks involves extensive simulations and trial-and-error methods. Our primary objective is to uncover design principles that facilitate the construction of reliable and cost-effective networks. Our inspiration comes from electrical networks, which exhibit three main topologies[1; 2]: radial (tree), ring, and mesh (Figure 1.a). While ring networks significantly improve on the traditional radial topology due to their 2-connected nature, offering two disjoint paths from each node to the source, our research demonstrates that large rings are prone to unreliability. This discovery underscores the need to develop sparse and cost-effective mesh networks. Moreover, we find that incorporating a small number of redundant edges and adhering to specific design rules divides the network into smaller rings and significantly enhances its reliability. These findings are significant as they illustrate how a well-designed network can achieve high reliability at a low cost, and transforming large rings into sparse mesh networks proves beneficial. Our study focuses solely on the combinatorial aspect of network reliability, using the widely studied independent edge failure model[3; 4], which assumes that nodes are reliable and that each edge fails with a probability independent of the other edges. Also, we assume that the failure probabilities are small, as is typical of infrastructure components. We use the SAIDI index[5] (System Average Interruption Duration Index) to measure network reliability, calculating the expected number of consumers disconnected from the source node at any given time. We propose an enhanced version of this index, assigning weights to each node and determining the expected weight disconnected from the source node. While the combinatorial perspective has limitations, it provides analytical insights, unlike more advanced models that rely on numerical solutions[6; 7].
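Under this failure model, the SAIDI index is straightforward to estimate by simulation, which provides a useful baseline against which the analytical design rules developed below can be checked. The sketch below is illustrative only: it uses networkx, and the example ring graph, failure probability, and trial count are arbitrary choices rather than values from the paper.

```python
# Illustrative Monte Carlo estimate of the SAIDI index: the expected fraction
# of consumers disconnected from the source under independent edge failures.
import random
import networkx as nx

def saidi_monte_carlo(G, source, p, trials=20000, seed=0):
    rng = random.Random(seed)
    n_consumers = G.number_of_nodes() - 1
    total = 0.0
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        # Each edge survives independently with probability q = 1 - p.
        H.add_edges_from(e for e in G.edges() if rng.random() >= p)
        disconnected = G.number_of_nodes() - len(nx.node_connected_component(H, source))
        total += disconnected
    return total / (trials * n_consumers)   # normalized SAIDI

ring = nx.cycle_graph(31)                    # one source (node 0) and 30 consumers
print(saidi_monte_carlo(ring, source=0, p=0.01))
```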
Understanding a network's combinatorial properties is essential to developing reliable and efficient networks. A large part of the research in the field focuses on finding uniformly most reliable graphs[8; 9]. Assuming that all the failure probabilities are equal to a constant \(p\), the all-terminal reliability polynomial is the probability that the network is connected. Given the class of all graphs with a fixed number of nodes and edges, a uniformly most reliable graph is more reliable than all the other graphs in the same class for all \(p\). Although such networks are helpful, we prove that the uniformly most reliable graph does not exist for the SAIDI index(supplementary C.2). Thus, our study focuses on finding reliable graphs relative to the more realistic SAIDI index for \(p\) near zero. We also quantify the difference in reliability between graphs rather than just their rank and show that if the network complies with some basic design rules, there is no need for further improvements. A fundamental result of previous studies is that the most reliable graph for \(p\) near zero is 2-connected[10]. Also, the network should minimize the second coefficient of the polynomial[11]. ## 2 Trees and large rings are unreliable Consider a graph \(G=(V,E)\) with \(n+1\) nodes, \(m\) edges, and a designated source node \(s\in V\). \(p\) is the failure probability of all the edges in the network. We note that in the case of multiple sources, we can contract them into one source, as shown in figure 1.d. The SAIDI index, used to measure the reliability of the network, is defined as \[\bar{F}_{G}=\frac{1}{n}F_{G}=\frac{1}{n}\sum_{v\in V}Pr_{G}(s\nleftrightarrow v) \tag{1}\] Assume that \(G\) is a radial(tree) network. In this scenario, the unreliability calculations are relatively simple. Each node that is \(d\) distance away from the source is disconnected with a probability of \(1-q^{d}\), where \(q=1-p\) is the working probability of an edge. Therefore, the SAIDI index of the tree is \[\bar{F}_{G} = 1-\mathbb{E}[q^{d}]=\mathbb{E}[d]\cdot p+O(p^{2}) \tag{2}\] This result implies that the most reliable trees are short and branched. As shown in figure 1.b, the star graph is the most reliable tree for all \(p\) values, while the path graph is the least reliable. Figure 1: **a** shows different network topologies with different redundancy(\(r\)). **b** shows that large ring networks are unreliable and that radial networks are even more unreliable. The SAIDI index is plotted against the edge failure probabilities for different network structures (star, binary tree, path, and ring) and different numbers of nodes (\(100,50,30\)). **c** shows the optimal network near zero (edge failure probability) with 100 nodes and various redundant edges (\(r\) from 1 to 6). **d** Sparse graphs are easier to analyze through their structure graph. First, contract the sources into one node, such that all the neighbors of the old sources are the neighbors of the new source. Then, turn each chain of degree-two nodes into a single edge in the structure graph. Finally, remove bridges. The nodes connecting to the bridges in these disconnected components can be considered the sources of these components. We also calculate the explicit SAIDI index of a ring network. Each node works if at least one of its two disjoint paths to the source is working.
Therefore, the SAIDI index for a ring network with one source and \(n\) consumers is: \[\bar{F}_{ring}(n) = \frac{1}{n}\sum_{k=1}^{n}\left(1-q^{k}\right)\left(1-q^{n+1-k}\right)\] \[= 1+q^{n+1}-\frac{2}{n}\cdot\frac{q\left(1-q^{n}\right)}{p}\] \[= \frac{1}{6}\left(n^{2}+3n+2\right)p^{2}+O\left(p^{3}\right)\] This equation implies that despite the order of a ring network being two, large rings are unreliable because their \(p^{2}\) coefficient is \(O\left(n^{2}\right)\)(fig 1.b). This phenomenon is a version of the birthday paradox: even though the probability of any specific edge pair failing is low, the number of such pairs is \(O\left(n^{2}\right)\). Also, because each pair of edges, on average, disconnects \(1/3\) of the nodes, the overall unreliability of the ring network is relatively high. On the other hand, by reducing the length of the ring by dividing it into \(k\) equally sized rings or by adding \(k\) equally distributed sources to the ring, the SAIDI index is approximately \(k^{2}\) times better. \[\frac{\bar{F}_{rings}(n;k)}{\bar{F}_{ring}(n)}\approx\frac{\frac{n^{2}}{k}+3 \frac{n}{k}+2}{n^{2}+3n+2}\stackrel{{ n\rightarrow\infty}}{{ \longrightarrow}}\frac{1}{k^{2}} \tag{4}\] This equation demonstrates that a small change in the graph can lead to a significant improvement in reliability. The cases of unequal failure probabilities are discussed in the supplementary materials(supplementary A.5) ## 3 Analyzing sparse mesh networks We can simplify the analysis of sparse mesh networks using various mathematical methods, including chain decomposition, decomposition of bridges, cut-set formula, and tree decomposition(supplementary D). The chain decomposition method[12] represents a path of \(c\) nodes in a mesh network as a single edge with a failure probability of \(1-q^{c+1}\), where \(q\) is the probability of success(fig 1.d). This new graph is called the _structure graph_(or Distillation), its nodes are the hubs of the original graph, and its edges are called chains. The structure graph of the network \(G\) is \(S(G)\). If the number of redundant edges is small, the structure graph is also small and simple to analyze by, for example, using the ring-path formula(supplementary A.1). A bridge is an edge in the graph that, if removed, splits the network into two components. The bridge decomposition method involves analyzing each of those components independently(figure 1.d). Finally, we use the cut-set formula to analyze the impact of each cut-set on the network reliability. A cut-set is a set of edges that separates the graph into multiple components, and a minimal cut-set is a cut-set that does not contain any other cut-sets. The set of all the minimal cut-sets of graph \(G\) is \(mcs_{G}\). We propose the risk index of a minimal cut set (RIC) to assess the effect of each minimal cut set on the SAIDI index. RIC of a minimal cut set \(X\) is defined as the probability that \(X\) fails and is connected to the source multiplied by the size of the disconnected component of \(X\), represented as \(D_{G}(X)\). Mathematically, RIC is expressed as: \[R_{G}(X) = Pr_{G}\left(s\leftrightarrow X\wedge X\,\mbox{fail}\right) \cdot D_{G}(X)\] \[= p^{|X|}\cdot D_{G}(X)+O\left(p^{|X|+1}\right)\] Where \(Pr_{G}(s\leftrightarrow X)\) is the probability that all the edges in \(X\) are connected to the source. In essence, RIC represents the expected number of disconnected nodes resulting from the cut set failure. 
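The risk index is easy to verify by brute force on small instances: for every failure pattern, the disconnected nodes can be charged to the unique source-reachable cut set that caused the outage, and summing these charges over all cut sets recovers the SAIDI index, as the next formula states. The sketch below is purely illustrative (it enumerates all \(2^{m}\) failure patterns, so it only makes sense for toy graphs); the function names and the example graph are our own.

```python
# Brute-force illustration (exponential in the number of edges; toy graphs only):
# charge the disconnected-node count of every failure pattern to the boundary
# cut set of the source component, then sum the per-cut-set risks.
from itertools import combinations
import networkx as nx

def cut_set_risks(G, source, p):
    edges = list(G.edges())
    m, n_total, q = len(edges), G.number_of_nodes(), 1 - p
    risks = {}
    for k in range(m + 1):
        for failed in combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(failed)
            comp = nx.node_connected_component(H, source)
            if len(comp) == n_total:
                continue                      # nothing disconnected
            # Edges between the source component and the rest; all of them failed,
            # and they form the unique source-reachable cut set of this pattern.
            X = frozenset(e for e in edges if (e[0] in comp) != (e[1] in comp))
            risks[X] = risks.get(X, 0.0) + p**k * q**(m - k) * (n_total - len(comp))
    return risks

ring = nx.cycle_graph(6)                      # small ring, source at node 0
risks = cut_set_risks(ring, source=0, p=0.05)
print(sum(risks.values()))                    # the (un-normalized) SAIDI index F_G
```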
The SAIDI index of a graph is calculated using the risk index: \[F_{G}=\sum_{X\in mcs_{G}}R_{G}(X) \tag{6}\] Instead of analyzing all possible cut-sets in \(G\), we can examine only those in the structure graph. Each minimal cut set \(X^{\prime}\subseteq E(S(G))\) disconnects \(D_{G}(X^{\prime})\) nodes outside the chains of \(X^{\prime}\). Also, the expected number of disconnected nodes from each chain \(e^{\prime}\in X^{\prime}\) is approximately \(\frac{1}{2}\cdot c(e^{\prime})\), where \(c(e^{\prime})\) is the length of the chain. Similarly, the failure probability of each chain is approximately \((c(e^{\prime})+1)p\). As a result, an approximation of the cut-set risk in the structure graph is: \[R_{G}(X^{\prime})=p^{|X^{\prime}|}\prod_{e^{\prime}\in X^{\prime}}\left(c(e^{ \prime})+1\right)\left(D_{G}(X^{\prime})+\frac{1}{2}\sum_{e^{\prime}\in X^{ \prime}}c(e^{\prime})\right)+O\left(p^{|X^{\prime}|+1}\right) \tag{7}\] Another type of risk is the _chain inter-risk_. If the two nodes in the boundary of a chain are connected to the source, the chain becomes a ring. For each chain \(e^{\prime}\), the chain inter-risk is \[R_{G}(e^{\prime})=P_{2}(e^{\prime})\cdot F_{ring}(c(e^{\prime})) \tag{8}\] Where \(P_{2}(e^{\prime})\) is the probability that the two nodes on the boundary of the chain are connected to the source in the graph without the chain \(e^{\prime}\). An approximation to the inter-chain risk is \[R_{G}(e^{\prime}) = \frac{1}{6}p^{2}\left(c(e^{\prime})^{3}+3c(e^{\prime})^{2}+2c(e^{ \prime})\right)+O\left(p^{3}\right) \tag{9}\] The inter-risk is the expected number of disconnected nodes in the chain resulting from a cut set of edges inside the chain. Combining the results above gives us a formula for the SAIDI index in terms of the structure graph: \[F_{G}\,=\,\sum_{X^{\prime}\in mcs_{S(G)}}R_{G}(X^{\prime})+\sum_{e^{\prime}\in E (S(G))}R_{G}(e^{\prime}) \tag{10}\] The complexity of the SAIDI calculation is due to the factors \(\Pr_{G}\left(s\leftrightarrow X\right),P_{2}(e^{\prime})\) in the risk formulas(5, 8). Fortunately, if \(p\) and the chain lengths are small, we can settle for a 3rd-order approximation, which is much easier to calculate. Additional information on computing 3rd-order approximations using the risk index and the approximation quality is discussed in the supplementary material(supplementary A.5). We conclude that the SAIDI index is influenced by two types of cut sets: structural and inter-chain cut sets. Inter-chain cut sets are governed by the chain lengths. The risk of a cut set in the structure graph is determined by equation 7, which considers the cut set's order, the number of disconnected nodes, and the product of its chains' lengths. The sum of the SAIDI indices of rings whose lengths equal the graph's chain lengths is a lower bound on the network unreliability, even if the structure graph is highly connected. ## 4 Construct the optimal graph We have determined that the main risk factors of a graph are long chains, low-order structure graph cut sets, and the number of disconnected nodes resulting from these cut sets. The significance of each factor varies depending on the value of \(p\) and the length of the chains. For small values of \(p\) and short chains, the order and the number of minimal cut sets of the structure graph are the most critical factors. However, as \(p\) and chain lengths increase, low-order cut sets with long chains that disconnect many nodes become more significant.
For a large \(p\), the most crucial factor is the proximity of the nodes to the source(supplementary C.2). Based on these conclusions, we can characterize the optimal graph near zero as demonstrated in figure 2. The optimal network is the most reliable given a fixed number of nodes, redundant edges, and equal failure probability \(p\). Firstly, the optimal network is bridgeless, to minimize the impact of low-order structure graph cut sets. Secondly, the optimal network should have equal-length chains to minimize the inter-chain risk, which is quadratic in the chain length. Shortening the chains is challenging given a fixed number of redundant edges, but it is achievable by avoiding hubs with degrees greater than 3. In fact, the optimal structure graph is 3-regular, as a redundant edge between nodes of degree 2 reduces the length of both chains, while a chain from a hub to a chain or from hub to hub reduces the length of only one or zero chains, respectively. A graph with a 3-regular structure graph, equal-length chains, and \(k\) redundant edges contains \(3(k-1)\) chains and therefore reduces the inter-chain risk by a factor of 9 compared to the naive \(k\)-equal rings topology(supplementary C.3). To further minimize the risk of third-order cut sets, the optimal network's structure graph should also be 3-connected. When the structure graph is 3-connected, the only second-order risks present are the inter-chain risks. However, when the chains are long, or \(p\) is large, there is a non-negligible probability of third-order cut sets in the structure graph(supplementary A.5). In this case, the optimal network should be super 3-connected, meaning that the only third-order cut sets present are the trivial cut sets consisting of the edges adjacent to a single node. Cubic super 3-connected networks exist for any size[13]. Note that if third-order cut sets are not negligible due to long chains, the best solution is, if possible, reducing the chain lengths, as this also reduces the chain inter-risk. The rules provided are enough to build a reliable graph because of the observation that a third-order approximation is adequate for small \(p\), as shown in the supplementary subsection A.5. The results are fundamental because they hold for other reliability indices, such as the all-terminal reliability[11] and the pair-wise reliability(supplementary C.4). Despite the simplicity of the model and the many assumptions we made, those design rules serve as the fundamental analytical rules of reliable network planning. These rules are easy to interpret and can be identified at a glance without the need for complicated numerical simulations. However, to make real-world decisions, other essential variables must be considered, such as the cost of each edge, the weight of each node, and the diversity of the edge failure probabilities. For example, in most cases, the optimal network does not have equal chain lengths due to geometrical considerations(figure 4). To give more accurate results, the risk approximation formulas presented earlier are helpful tools, as discussed in the next section. Figure 2: The optimal network has a 3-regular and super 3-connected structure with equal-length chains. The table above compares network topologies with 120 nodes and 124 edges, with color indicating the SAIDI index (\(p=0.001\)). The columns show different structure graphs, and the rows show different chain lengths.
Although covering all network topologies is impossible, we can see that unequal chains reduce network reliability and that, where they exist, they should not lie on the edges of low-order structure graphs. Also, if the structure graph is not 3-regular, the number of chains is lower, causing longer and less reliable chains. ## 5 Improve existing network Figure 3: The figure shows the effect of adding a new edge on the inter and structural risks. The first row demonstrates a chain-to-chain edge (**a**) and an inter-chain edge (**b**). **c** presents the inter-risk difference of the chain-to-chain edge from node \(i\) to \(j\) (as presented in **a**). **d** presents the inter-risk of an inter-chain edge from node \(i\) to \(i+k\), clockwise (as presented in **b**). We observed that chain-to-chain edges have a greater inter-risk difference. The optimal chain-to-chain edges should be positioned close to the center of each chain, and the best inter-chain edge should have a length equal to half a chain and start as close as possible to the source. If \(i=0\) in both edge types, the edge is a hub-to-chain edge. Minimizing the structural risk of a minimal cut set is possible by adding an edge between the two components separated by the cut set (**e**) or by adding an edge that touches at least one of the cut set chains (**f**). **g** compares the SAIDI index after adding each kind of edge and demonstrates that adding an edge between the cut set components is more effective. Finally, **h** shows the SAIDI index after adding the edge \((i,j)\) as depicted in figure f. It follows that the optimal edge has one end in one of the components and the other end in the other component. We can use the risk difference to measure the effectiveness of adding a new edge to an existing network. The risk difference is the difference between the risk of a cut set \(X\) before and after adding a new edge \(e\). \[\Delta R_{G}(X;e)=R_{G}(X)-R_{G\cup\{e\}}(X) \tag{11}\] By summing the risk differences for each candidate edge and weighing them against its predefined cost, we can decide which new edge is the most beneficial. Exact formulas for the risk difference of any edge in the general model are given in supplementary B. The effect of adding new edges to an existing network is limited because it creates chains of length 1. Therefore, good preplanning is preferred, as demonstrated in figures 4 and 5. It is possible to add four different types of edges to a network: chain-to-chain, inter-chain, hub-to-chain, and hub-to-hub connections. Adding a chain-to-chain edge is the most effective in reducing inter-risk, as it transforms a pair of rings into four rings. A hub-to-chain edge is a specific case of a chain-to-chain edge that is less effective, as it only affects one chain. An inter-chain edge is also less effective because it turns a ring into three rings but produces another second-order structural cut set. Lastly, a hub-to-hub edge does not affect inter-risk at all. Figure 3(a-d) compares the effect of each edge type on the inter-risk. It follows that the optimal chain-to-chain edge touches the middle of each chain, and the optimal hub-to-chain and inter-chain edges run from the end to the middle of the chain, dividing the chain into two equal rings. The optimal chain-to-chain edge is approximately 2 times better than the other edge types in reducing inter-risk(supplementary B.1).
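This edge-selection rule is straightforward to prototype: score every candidate edge by how much it lowers the SAIDI index relative to its cost, and pick the best one. The sketch below is illustrative only; its exact SAIDI routine enumerates all failure patterns and therefore only works on toy graphs, and the candidate set and unit cost function are placeholders rather than anything from the paper.

```python
# Illustrative greedy selection of a single redundant edge: rank candidates by
# SAIDI reduction per unit cost. Exact enumeration is exponential in the number
# of edges, so this is only meant for small examples.
from itertools import product
import networkx as nx

def saidi_exact(G, source, p):
    edges, m, n_total, f = list(G.edges()), G.number_of_edges(), G.number_of_nodes(), 0.0
    for pattern in product([0, 1], repeat=m):        # 1 marks a failed edge
        prob = 1.0
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        for e, failed in zip(edges, pattern):
            prob *= p if failed else (1 - p)
            if not failed:
                H.add_edge(*e)
        f += prob * (n_total - len(nx.node_connected_component(H, source)))
    return f

def best_new_edge(G, source, p, candidates, cost=lambda e: 1.0):
    base = saidi_exact(G, source, p)
    scored = []
    for e in candidates:
        if G.has_edge(*e):
            continue
        H = G.copy()
        H.add_edge(*e)
        scored.append(((base - saidi_exact(H, source, p)) / cost(e), e))
    return max(scored)                               # (benefit per unit cost, edge)

ring = nx.cycle_graph(8)                             # source = node 0
candidates = [(i, j) for i in ring for j in ring if i < j]
print(best_new_edge(ring, 0, 0.02, candidates))
```

For a plain ring with the source at node 0, the selected chord should run from near the source to the opposite side of the ring, splitting it into two nearly equal rings, in line with the guidance above.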
There are two options to reduce the structural risk of a minimal cut set (figure 3(e-h)): adding an edge between the two components separated by the minimal cut set or adding an edge that divides one or two of the minimal cut set chains. The first option is the most effective in reducing structural risk because it raises the order of the cut set. However, the second option only lowers the corresponding coefficient of the polynomial, not its order. The closer one side of the new edge is to the source component and the other side to the disconnected component, the bigger the risk difference, as shown in figure 3. For this reason, chain-to-chain edges are also more effective than inter-chain edges in reducing structural risk. ## 6 Real-world analysis We demonstrate the main ideas of our research in two examples: the Baran & Wu system and grid networks. The Baran & Wu system is a synthetic distribution network[14]. The system has one source, 32 loads, and 37 edges (figure 4.b). Despite its high redundancy and cost, the network is unreliable due to a bridge that disconnects the entire network. Figure 4 shows several network versions over the same set of nodes. In each version, we display the ratio of the network's length and SAIDI index compared to the basic ring network. Furthermore, we present the top 5 risks for each example and use them to improve the network. Using the risk analysis, we managed to construct more cost-effective versions of this network while limiting the extra length to 10 relative to the ring version and improving the network reliability by up to a factor of 8.83 relative to the ring network. However, due to the sparsity of the network nodes, it is hard to further improve the network without using high-cost edges. The second example is a \(7\times n\) grid with sources in the upper-left and lower-right corners (figure 5). This example is important because the grid is a common topology among cities. A basic approach is filling the top and bottom rows, creating chains with \(n\) nodes. A naive improvement for this topology is a grid configuration with a middle row that divides each ring into two equal rings. However, this design does not agree with the design rule of creating hubs of degree 3. To address this problem, we separated the middle row into a \(1/3,2/3\) configuration that divided the middle chain into three almost equal chains and the outer chains into two nearly equal chains. This new configuration improves the SAIDI ratio by a factor of 6, which is 1.54 times better than the naive middle-row configuration, even though the number of nodes is very high. Also, by defining the number of redundant edges between every two consecutive chains, we can use a dynamic programming algorithm to find the optimal edge configuration(supplementary E.1). Figure 5 presents the optimal and the nonoptimal configurations. The downside of those configurations is that the east-west chains are of length 1. Finally, we analyze three real distribution networks from the Atlantic City website1. Distribution networks tend to traverse major roads[15]. Therefore, we illustrate that by adding a relatively small number of redundant edges based on roads, we can transform the ring-like topologies into sparse mesh topologies, which improved the SAIDI ratio by factors of 3.14 to 27.12 for \(p=0.0001\). The data are taken from the electrical vehicle load capacity map in March 2023. Although the map is not necessarily accurate, the data represent a real-world distribution network.
More technical details of the analysis are given in supplementary E.2. Footnote 1: [https://www.arcgis.com/apps/dashboards/30d93065bbcf41d08e39638f60e5ad77](https://www.arcgis.com/apps/dashboards/30d93065bbcf41d08e39638f60e5ad77) Figure 4: Improving the Baran & Wu system. The first row represents the original test case with high density but low reliability. The second row introduces a ring approach, while subsequent rows show further enhancements. Comparisons of the mesh networks to the ring configuration are made based on edge lengths, SAIDI (column 2), and the top 5 risks (last column). The last row demonstrates that designing a network from scratch proves more effective than adding edges to an existing network. Figure 5: Adding redundant edges to existing networks. The upper figure presents a common \(100\times 7\) grid topology with vertical and horizontal edges in the upper and bottom rows. In each row of the figure, we add redundant edges as presented in the second column (nonoptimal) and the same number of redundant edges with a more equal-chains topology in the third column (optimal). The right column shows the SAIDI ratio of the basic network to the nonoptimal network (in orange) and the optimal network (in green). By creating hubs with three neighbors, we get more chains and improve the grid’s reliability. Also, we observed that a small number of new edges significantly improved the reliability. The lower figure shows real ring networks (in the left column) and two versions of modifications to the network. The right column compares the SAIDI ratio of those versions to the original network. The floating numbers represent the number of nodes in each chain. By following the design rules, we significantly improved the networks with a relatively small number of redundant edges. ## Appendix A Exact analysis of the general model ### SAIDI computation This section provides a formula for the SAIDI index of various topologies in the general weight-probabilities model. The SAIDI index of a network \(G\) is calculated through the risks of its minimal cut sets (proposition 1). \[F_{G}=\sum_{X\in mcs_{G}}R_{G}(X)=\sum_{X\in mcs_{G}}Pr_{G}\left(s\leftrightarrow X \wedge X\operatorname{fail}\right)\cdot D_{G}(X)\] (A1) To calculate \(Pr_{G}(s\leftrightarrow X\wedge X\operatorname{fail})\), we focus on \(cc(X)\), the connected component of \(s\) after the removal of \(X\), and use the inclusion-exclusion principle \[Pr_{G}(s\leftrightarrow X\wedge X\operatorname{fail}) = Pr_{G}(X)\cdot\sum_{\chi\subseteq mcs_{cc(X)}(X)}(-1)^{|\chi|-1}\cdot Pr _{G}\left(\bigcup_{A\in\chi}A\right)\] (A2) Where \(mcs_{cc(X)}(X)\) is the set of all minimal cut sets of \(cc(X)\) that disconnect \(X\) from the source. The complexity of this formula is \(O\left(2^{|mcs_{cc(X)}(X)|}\right)\). Fortunately, if the probabilities \(p_{e}\) are sufficiently small, we can ignore the probability of high-order cut sets. Subsection A.5 contains the specific details of this approximation. As mentioned in the main article, we can simplify the SAIDI computations in a sparse network by enumerating only the minimal cut sets of the structure graph. The ring-path formula is another method to calculate the SAIDI index of a sparse network based on its structure graph. We can determine the expected number of disconnected nodes from a specific chain \((u,v)\) in the structure graph by considering the different scenarios that can occur if we remove the chain, as presented in figure A1. For example, assume that the chain contains \(c\) nodes.
If both nodes \(u\) and \(v\) are disconnected, the entire chain is disconnected, resulting in \(f_{0}=c\) disconnected nodes. If only one of them is connected, the chain becomes a path with \(c\) nodes, resulting in \(f_{1}=F_{path}(c)\) disconnected nodes on average. Finally, if both nodes are connected, the chain becomes a ring with \(c\) nodes, resulting in \(f_{2}=F_{ring}(c)\) disconnected nodes on average. We can use the probabilities of each of these cases, \(P_{0}(e^{\prime}),P_{1}(e^{\prime}),P_{2}(e^{\prime})\), to calculate the ring-path formula: \[F_{G} = \sum_{e^{\prime}\in E(S(G))}\sum_{i=0}^{2}f_{i}(e^{\prime})\cdot P _{i}(e^{\prime})+F_{S(G)}\] \[= \sum_{e^{\prime}\in E(S(G))}c(e^{\prime})\cdot P_{0}(e^{\prime})+ F_{path}(e^{\prime})\cdot P_{1}(e^{\prime})+F_{ring}(c(e^{\prime}))\cdot P_{2}(e^{ \prime})+F_{S(G)}\] (A3) where \(S(G)\) is the structure graph of \(G\). Additionally, in the general weight-probabilities model, the chains are not symmetric. Hence, two types of \(P_{1}\) events exist. Each connected node in the boundary of the chain creates a path with a different source. In this scenario, we can use the more general _partitions formula_. For a set of nodes \(X\subseteq V(G)\), denote \(\pi(X)\) as the set of all the partitions of \(X\cup\{s\}\). Assume that the graph is partitioned into two subgraphs \(A_{1}\cup A_{2}=G\), and denote \(X=A_{1}\cap A_{2}\). We can split the probability space of \(G/A_{i}\) into equivalence classes defined by the connected components of \(X\cup\{s\}\). Given a specific partition in \(G/A_{i}\), we can contract all the nodes in \(A_{i}\) that are in the same partition component to one node. Then, by the law of total probability: \[F_{G}=\sum_{i=1}^{2}\sum_{\chi\in\pi(A_{1}\cap A_{2})}Pr_{A_{i}^{c}}(\chi)\cdot F _{A_{i}|\chi}\] (A4) Where \(A_{i}\mid\chi\) is the graph \(A_{i}\) after contracting each component in \(\chi\) to one node (figure A2). As a private case, the ring-path formula in the general model is \[F_{G}=\sum_{(u,v)\in E(S(G))}\sum_{\chi\in\pi(\{u,v\})}Pr_{G/(u,v)}(\chi)\cdot F _{C(u,v)|\chi}+F_{S(G)}\] (A5) Where \(C(u,v)\) is the chain from \(u\) to \(v\). This article showcases several applications of the partition formula. One of its applications is a fast algorithm that can compute the network's reliability using tree decomposition(subsection D.1). When combined with the ring-path formula, this algorithm can also calculate the SAIDI index of a network even faster. Another application of the partition formula is the modularity of network planning. For instance, if we have a neighborhood \(A\subseteq G\), we can quickly identify the impact of changes made inside the neighborhood, such as adding or removing edges, since we only need to evaluate the effect of those changes on the indices \(F_{G|\chi}\) for \(\chi\in\pi(A\cap A^{c})\). For example, in subsection E.1, we design reliable grids using this formula. A private case of the partitions formula is _bridge decomposition[12]_. A bridge is an edge \(e=(u,v)\) that, if removed, splits a set of nodes \(X\) from the source.
For example, suppose that \(u\) is a member of \(X\) and mark the subgraph of \(X\) in \(G\) with \(u\) as the source node as \(G_{X}\), then for any other node \(w\) in \(X\), the probability that \(w\) is connected to the source is \[Pr_{G}(s\leftrightarrow w)=Pr_{G}(s\leftrightarrow v)\cdot p_{e}\cdot Pr_{G_{X }}(u\leftrightarrow w)\] (A6) This means that by adding the weight \(1-p_{e}\cdot F_{G_{X}}\) to the node \(u\), we can remove \(X\) from \(G\), thereby reducing the size of the graph and ensuring that there are no bridges in it. Finally, we use the risk sum formula to compute the SAIDI index of the network. **Proposition 1** (The risk formula).: _For a network \(G\), \(F_{G}\) is the sum of the risks associated with all minimal cut sets:_ \[F_{G}=\sum_{X\in mcs_{G}}R_{G}(X)\] (A7) Proof.: Let \(v\) be a node in the network. We denote the set of all minimal cut sets that disconnect \(v\) from the source \(s\) as \(mcs_{G}(s,v)\). If \(v\) is disconnected from the source, there is a unique minimal cut set in \(mcs_{G}(s,v)\) that fails, and all its edges are reachable from the source. This minimal cut set is the set of edges between the connected component of \(v\) and the source component. The probability space can be split into disjoint sets by the unique reachable minimal cut set in \(mcs_{G}(s,v)\). The probability that \(X\) is the reachable failing cut set is \(Pr_{G}(s\leftrightarrow X)\cdot Pr_{G}(X)\). Summing over all the nodes of the network gives us the desired formula. Therefore, we can express the total risk of the network as the sum of the risks associated with all minimal cut sets in \(G\). ### Radial networks Here we present the analytical analysis of the radial networks' reliability in the general model. The SAIDI analysis of the general tree is quite simple. Let the set of edges in the unique path between \(s,v\) be \(path(s,v)\). Note that \[Pr_{T}(s\nleftrightarrow v)=1-\prod_{e\in path(s,v)}q_{e}\] (A8) and hence, the SAIDI index of a tree \(T\) is \[F_{T}=\sum_{v\in V(T)}\left(1-\prod_{e\in path(s,v)}q_{e}\right)\cdot w_{v}\] (A9) Where \(\omega_{v}\) is the weight of the node \(v\). To calculate the risk of each edge, denote the total weight of disconnected nodes resulting from the failure of an edge \(e\) as \(D_{T}(e)\). The risk of an edge \(e\) is \[\begin{cases}R_{T}(e)=\left(\prod_{e^{\prime}\in path(s,e)}q_{e^{\prime}}\right) \cdot p_{e}\cdot D_{T}(e)\\ F_{T}=\sum_{e\in E(T)}R_{T}(e)\end{cases}\] (A10) By set \(q_{e^{\prime}}=1-p_{e^{\prime}}\) in A10, it follows that the polynom coefficients are \[F_{T}\,=\sum_{e\in E(T)}\sum_{J\subseteq path(s,e)}(-1)^{|J|}D_{T}(e)\left(p_ {e}\prod_{e^{\prime}\in J}p_{e^{\prime}}\right)\] (A11) And that the first-order approximation is \[F_{T}\,\approx\,\sum_{e\in E(T)}p_{e}\cdot D_{T}(e)\] (A12) As a private case, the SAIDI index of a tree in the equal probabilities-weights model is \[F_{T}\,=\,\sum_{k=1}^{n}(-1)^{k-1}\left(\sum_{a=k}^{n}d(a)\cdot\binom{a}{k} \right)p^{k}\] (A13) Where \(d(a)\) is the number of the nodes at a distance \(a\) from the source. 
By set \(d(a)=1\) and using the hockey Stick Identity[16] we get The SAIDI index of a path \[F_{path}(n)=\sum_{k=1}^{n}(-1)^{k-1}\binom{n+1}{k+1}p^{k}\] (A14) ### Ring networks To calculate the SAIDI index of a general ring, we observed that each ring node connects in two disjoint paths to the source(figure B6) \[F_{ring}=\sum_{v=1}^{n}\left(1-\prod_{i=1}^{v}q_{i}\right)\left(1-\prod_{i=v+ 1}^{n+1}q_{i}\right)\cdot w_{v}\] (A15) And in the risks method, each minimal cut set is a pair of edges \(\{i,j\}\) \[R_{ring}(\{i,j\})\,=\,\prod_{k=1}^{i}q_{k}\prod_{k=j+1}^{n+1}q_{k}\cdot p_{i}p _{j}\cdot\sum_{k=i}^{j}w_{k}\] (A16) \[= \left(\prod_{k=1}^{n+1}q_{k}\right)\left(p_{i}p_{j}\right)\left( \frac{\sum_{k=i}^{j}w_{k}}{\prod_{k=i}^{j}p_{k}}\right)\] We can see that a ring cut set risk depends on the failure probability of the cut set\((p_{i}p_{j})\), by \(D_{ring}(\{i,j\})=\sum_{k=i}^{j}w_{k}\), and by the probabilities product of the disconnected path\((\prod_{k=i}^{j}p_{k})\). Another insight is that in the equal weight-probabilities model \[R_{ring}(\{i,j\})<R_{ring}(\{i^{\prime},j^{\prime}\})\] (A17) For \(i^{\prime}<i,j^{\prime}>j\) and \(i<j\). However, unlike the equal probabilities model, where the maximal risk is of the two edges that are connected to the source, in the general model, any cut set can have the maximal risk for the appropriate parameters. The SAIDI index of the ring with \(n\) consumers is computed in \(O(n)\) using a dynamic programming approach. In the equal weights-probabilities model \[F_{ring}(n)=\sum_{k=2}^{n+1}(-1)^{k}(k-1)\binom{n+2}{k+1}p^{k}\] (A18) ### Structural risk The exact calculation of the risk difference after adding a new edge is hard to calculate. However, there are some methods to approximate this risk difference relatively quickly. We present the exact risk difference of each kind and, in the approximation section, provide the proper approximations. The structural risk of a structural graph cut set defines as \[R_{G}(X^{\prime}) = Pr_{S(G)/X^{\prime}}(s\leftrightarrow X^{\prime})\left(Pr_{S(G) }(X^{\prime})\cdot D_{G}(X^{\prime})+\sum_{C\in X}Pr_{S(G)}(X^{\prime}/C) \cdot F_{path}(C)\right)\] (A19) \[= Pr_{S(G)/X^{\prime}}\left(s\leftrightarrow X^{\prime}\right) \cdot Pr_{S(G)}\left(X^{\prime}\right)\left(D_{G}(X^{\prime})+\sum_{C\in X} \frac{F_{path}(C)}{Pr_{G}(C)}\right)\] The probability \(Pr_{S(G)}(X^{\prime})=\prod_{C\in X^{\prime}}\left(1-\prod_{e\in C}q_{e}\right)\). \(F_{path}(C)\) calculate using equation A9 or A11. Subsection D.1 explains how to calculate the probability \(Pr_{S(G)}(s\leftrightarrow X^{\prime})\). Although, calculating the precise value of \(Pr_{S(G)}(s\leftrightarrow X^{\prime})\) is a hard task; if we examine only cut sets from orders less than \(k\), then we calculate the reaching probability up to order \(k-|X|\). We can approximate the structural risk using the approximations introduced in section A.2. For example, in the equal probabilities-weights model, the most basic approximation is \[R_{G}(X^{\prime})=p^{|X^{\prime}|}\prod_{e^{\prime}\in X^{\prime}}\left(c(e^{ \prime})+1\right)\left(D_{G}(X^{\prime})+\frac{1}{2}\sum_{e^{\prime}\in X^{ \prime}}c(e^{\prime})\right)+o\left(p^{|X^{\prime}|+1}\right)\] (A20) ### Approximations We aim to determine the minimum order required to achieve a satisfactory approximation. The accurate computation of SAIDI is not feasible for graphs with many edges[17]. 
Our primary focus is identifying weak points in the network and determining which new edges to add, which involves analyzing all the lower-order cut sets. However, the calculations are inefficient due to a large number of minimal cut sets and their intersections in the inclusion-exclusion principle. However, if \(p\) is small, the number of concurrent failures is relatively low, allowing us to examine only lower-order cut sets. Additionally, not all simultaneous failures lead to network breakdown, so that lower-order cut sets can approximate reliable graphs more easily. The critical question is, what minimum order is required to achieve a satisfactory approximation? To answer this question, we first examine the approximations of fundamental components such as ring, chain, and structural risk and then analyze the approximation of several graphs. Figure A3 presents the approximation quality of fundamental network components. Although the precise calculation of these components is straightforward, their exact evaluation is unnecessary. For each order of approximation, the figure displays the minimum p with an error deviation of at least \(\epsilon=0.0001\) from the actual value (\(p_{0}\)) as a function of chain length\((c)\). If the approximation order is smaller than the minimum non-zero coefficient, the approximation is zero, indicating that the risk of the component can be ignored. The figure indicates that the ring and chain failure components are relatively accurate for orders 2 or 3. We use equal chain configurations to estimate the approximation quality of structural risks. This choice is based on the observation that equal chains represent the most unreliable configuration regarding structural risk, as the failure probability is the product of each chain's failure probability (it is worth noting that equal chains aim to minimize inter-risks). Consequently, estimating the error in the equal chain's configuration provides an upper bound for the error in structural risk when dealing with unequal chains but an equal number of nodes. However, the approximation error increases when a significant number of nodes become disconnected (denoted as \(D_{G}(X)\)). This implies that even 5th-order structural cut sets cannot be disregarded if the number of disconnected nodes is sufficiently large. Figure A3 illustrates a 2nd-order and 3rd-order structural cut set using equal chains of length \(c\) that disconnect \(d=100\) nodes. Figure A4 demonstrates that a second or third order of approximation is adequate for small \(p\). The figure examines three different topologies, one with a second-order structural cut set, one with a significant four-order cut set, and one with a super 3-connected structure graph. The graphs have equal-length chains, but a more heterogeneous configuration results in an even more accurate approximation. The vertical lines represent \(p_{0}\), the minimum \(p\) with an approximation error of at least \(\epsilon=0.0001\). In summary, a second or third-order approximation is sufficient for small \(p\), even for networks with many nodes. Although very unreliable network approximations may be inaccurate, they are rare in practice because of their ineffectiveness. **Fig. A3** The figure displays the minimum order of approximation required to achieve an error deviation of at least \(0.0001\) from the actual value (\(p_{0}\)) for each fundamental network component. The x-axis represents the length of the chains, while the y-axis represents \(p_{0}\). 
The structural cut sets in the figure result in \(100\) disconnected nodes. **Fig. A4** The figure illustrates the quality of low-order approximation for several structured graphs with different equal-length chains. The vertical lines in the figure represent \(p_{0}\), which is the minimum value of \(p\) required for a satisfied approximation with an error deviation of at least \(0.0001\). ### Inter-risk Calculating a third-order approximation of the inter-risk for a chain in the structure graph is straightforward. The challenge lies in calculating \(P_{2}(e^{\prime})\). Fortunately, using only the first order of \(P_{2}(e^{\prime})\) provides a good approximation because the SAIDI polynomial of a ring is \(O(p^{2})\), and an order three approximation is accurate enough for SAIDI calculations. To compute the inter-risk of an edge \(e^{\prime}\) in the structure graph G, use the following formula: \[R_{G}(e^{\prime})=\left(1-\sum_{e\in B_{G}(e^{\prime})}p_{e}\right)\cdot F_{ ring}(e^{\prime})\] (A21) Here, \(B_{G}(e^{\prime})\) is the set of all bridges in the graph that disconnect \(e^{\prime}\) from the source. In the equal probabilities-weights model: \[R_{G}(e^{\prime})=\frac{1}{6}\left(1-\left(\sum_{C\in B_{S(G)}(e^{\prime})}(c( e^{\prime})+1)\right)p\right)\cdot\left(c(e^{\prime})^{3}+3c(e^{\prime})^{2}+2c(e ^{\prime})\right)\] (A22) ## Appendix B Risk difference analysis In this section, we provide the formulas for calculating the risk difference that results from adding a new edge. Throughout this section, we assume that we have identified all the minimal cut sets of the network up to the desired order. More details about the algorithm for finding these cut sets are in subsection D.2. To find the risk difference of a new edge \(e\) we only need to calculate \(R_{G\setminus e}\). \(G\backslash e\) is the network \(G\) after contracting the nodes of \(e\). From the definition of risk difference \(\Delta R_{G}(X;e)=R_{G}(X)-R_{G\cup\{e\}}(X)=R_{G}(X)-p_{e}R_{G}(X)-q_{e}R_{G \setminus e}(X)\) and hence \[\Delta R_{G}(X;e)=q_{e}\left(R_{G}(X)-R_{G\setminus e}(X)\right)\] (B23) The purpose of this risk difference analysis is twofold. Firstly, we aim to understand the impact of a new edge on the graph. For example, a chain-to-chain edge is typically more efficient than an inter-chain edge. Secondly, we aim to efficiently approximate the impact of the new edge on the risk difference. There are four types of new edges: chain-to-chain, inter-chain, hub-to-chain, and hub-to-hub. We begin by analyzing the inter-risk difference of all new edge types and then analyzing the structural risk difference. ### Inter-risk difference We only need to analyze chain-to-chain and inter-chain edges to understand the inter-risk difference, as hub is the zero node of a chain. The inter-risk formula is \(R_{G}(C)=P_{2}(C)\cdot F_{ring}(C)\). The hub-to-hub edge can affect only \(P_{2}(C)\). #### Chain-to-chain Assume that there are two chains \(C_{1},C_{2}\), and that we add an edge \(e\) between the \(i_{1}\) node on the chain \(C_{1}\), and the \(i_{2}\) node on the chain \(C_{2}\)(figure 10). The new edge is dividing each of the chains \(C_{k}\) into two smaller chains \(C_{k,1},C_{k,2}\). After contracting the edge \(e\), the chains \(C_{1}\) and \(C_{2}\) are turning into star network subgraphs with the node \(v_{e}\) in the middle(figure 10). Note that the contraction of the edge does not change the value of \(P_{2}(C_{k})\). 
\[R_{G\setminus e}(C_{k}) = P_{2}(C_{k})[Pr_{G/C_{k}}\left(s\leftrightarrow v_{e}\mid P_{2} (C_{k})=1\right)(F_{ring}(C_{k,1})+F_{ring}(C_{k,2})) \tag{10}\] \[+ \left(1-Pr_{G/C_{k}}\left(s\leftrightarrow v_{e}\mid P_{2}(C_{k} )=1\right)\right)(F_{ring}(C_{k}))]\] Which is the risk given that \(v_{e}\) is connected to the source in \(G\backslash C_{k}\) plus the risk given the complementary event. From here, \[R_{G\backslash e}(C_{k}) = P_{2}(C_{k})Pr_{G/C_{k}}\left(s\leftrightarrow v_{e}\mid P_{2} (C_{k})=1\right)(F_{ring}(C_{k,1})+F_{ring}(C_{k,2})-F_{ring}(C_{k})) \tag{11}\] \[+ P_{2}(C_{k})F_{ring}(C_{k})\] Therefore, \[\Delta R_{G}(C_{k};e) = q_{e}\cdot Pr_{G/C_{k}}\left(s\leftrightarrow v_{e}\wedge P_{2}(C_{k })=1\right)\cdot\left(F_{ring}(C_{k})-F_{ring}(C_{k,1})-F_{ring}(C_{k,2})\right)\] (B26) \[\stackrel{{\text{order}\,2}}{{\approx}} F_{ring}(C_{k})-F_{ring}(C_{k,1})-F_{ring}(C_{k,2})\] We can infer some insights about the optimal new edge from the equation above. The position of the edge in \(C_{2}\) should maximize the probability \(Pr_{G/C_{1}}\left(s\leftrightarrow v_{e}\wedge P_{2}(C_{1})=1\right)\) to minimize the inter-risk of \(C_{1}\), which imply that the edge should be close to the boundary of \(C_{2}\). On the other hand, by adding the new edge, we remove minimal cut sets that are edges pairs from the opposite sides of the new edge, which imply that the optimal new edge is closer to the middle of \(C_{1}\) in relative equal weights and probabilities (Although this can change in a heterogeneous chain). Equation A16 explains the causes of high-risk minimal cut sets in a chain and can help analyze the optimal new edge's location. For example, the optimal new edge in equal-length and homogeneous chains is from the middle of each chain. ### Inter-chain Assume that there is a chain \(C\), and that we add an edge \(e\) between the \(i\) and the \(j\) node of the chain(figure B5). After the contracting of the edge \(e\), the chain transforms into a smaller chain \(C_{1}\cup C_{2}\) with a self-loop in the \(i\) node denoted as \(C_{3}\). The risk of the chain after the contraction of \(e\) i \[R_{G\setminus e}(C)=P_{2}(C)\cdot\left[F_{ring}(C_{1}\cup C_{2})+p_{C1}p_{C2} \left(w(C_{3})-F_{ring}(C_{3})\right)+F_{ring}(C_{3})\right]\] (B27) Where \(p_{C_{k}}\) is the failure probability of the chain \(C_{k}\), and \(w(C_{3})\) is the total weight of \(C_{3}\). \[\Delta R_{G}(C;e) = q_{e}\cdot P_{2}(C)\cdot\left[F_{ring}(C)-F_{ring}(C_{1}\cup C_{2 })-F_{ring}(C_{3})-p_{C1}p_{C2}\left(w(C_{3})-F_{ring}(C_{3})\right)\right]\] (B28) \[\stackrel{{\text{order}\,2}}{{\approx}} F_{ring}(C)-F_{ring}(C_{1}\cup C_{2})-F_{ring}(C_{3})-p_{C1}p_{C2}w(C_{3})\] In the equal probabilities-weights model, the optimal chain-to-chain edge is approximately two times better than the other edge types in reducing inter-risk. Suppose there are two chains with \(n\) nodes. The optimal chain-to-chain edge divides two chains into two chains, but the optimal inter-chain edge divides only one chain into two chains. ### Structural risk The hub-to-chain is a private case of chain-to-chain or inter-chain, even in the structural risk scenario. 
To simplify the calculations we introduce the notation \(\rho(X,D)\) for a set of chains \(X\) and a number \(D\) \[\rho(X,D)=Pr_{S(G)}(X)\cdot D+\sum_{C\in X}Pr_{S(G)}(X/C)\cdot F_{path}(C)\] (B29) And the structural risk formula transform to \(R_{G}(X)=Pr_{G/X}(s\leftrightarrow X)\cdot\rho(X,D_{G}(X))\) ### Chain-to-chain Suppose there is a new edge between two of the chains \(C_{1}\) and \(C_{2}\) of the structural cut set similar to figure B5. Denote \(X^{\prime}=X/(C_{1}\cup C_{2})\). After contracting the new edge, the structural cut set transforms into two structural cut sets: \(X_{1}=X^{\prime}\cup\{C_{1,1}\cup C_{2,1}\}\) and \(X_{2}=X^{\prime}\cup\{C_{1,2},C_{2,2}\}\). The first cut set disconnects a weight of \(D_{G}(X_{1})=D_{G}(X)+w(C_{1,2}\cup C_{2,2})\), and the second cut set disconnects a weight of \(D_{G}(X)\). \[R_{G\setminus e}(X)=Pr_{G/X}(s\leftrightarrow X)\cdot Pr_{G}(X^{\prime})\cdot \left(\rho(X_{1},D_{G}(X_{1}))+\rho(X_{2},D_{G}(X))\cdot(1-p_{C_{1,1}}p_{C_{1,2 }})\right)\] (B30) Where \(p_{C_{1,j}}\) is the failure probability of the chain \(C_{1,j}\). ### Inter-chain An inter-chain edge shortens one of the chains of the cut set. Assume that there is a chain \(C\) and that we add an edge \(e\) between the \(i\) and the \(j\) node of the chain(figure B5). After the contraction of the edge \(e\), the cut set split into two structural cut sets: \(X/C\cup C_{1}\) and \(X/C\cup C_{2}\). The first cut set disconnects a weight of \(D_{G}(X)+w(C_{3}\cup C_{2})\), and the second cut set disconnects a weight of \(D_{G}(X)\). It follows that the new risk is \[R_{G\setminus e}(X) = Pr_{G/X}(s\leftrightarrow X)\cdot\left(\rho\left(X/C\cup\{C_{1 }\},D_{G}(X)+w(C_{3}\cup C_{2})\right)\right.\] \[+ q_{C_{1}}\cdot\rho\left(X/C\cup\{C_{2}\},D_{G}(X)\right))\] ### Hub-to-hub A hub-to-hub edge between the source component and the disconnected component increases the order of the structural risk by one. After the edge contraction, \(X\) is no longer a minimal cut set, therefore \(R_{G\setminus e}(X)=0\) and \[\Delta R_{G}(X;e)=q_{e}\cdot R(X)\] (B32) ### The optimal new edge It is not easy to characterize the optimal new edge in the general model. We characterize the optimal edge for reducing inter and structure risks in the main article for the equal model. However, in the general model, for a given chain and a new edge that touches the chain, we can change the probabilities and the weights of the chain so that the new edge is optimal. For example, an inter-chain new edge reduces minimal cut sets with one edge in \(C_{3}\) and the other in \(C_{1}\cup C_{2}\). By assigning a high failure probability to the edges in \(C_{3}\) and low weight to the nodes in \(C_{3}\), the optimal new edge is on the boundary of \(C_{3}\)(figure B6). Despite its complexity, the general model analysis provides valuable insights for reducing structural risk. The first insight is that structural cut sets worth investing in are not only those with high failure probability and that disconnect a large number of nodes. They are also connected to the source with high probability, and their disconnected component is well-connected. The first statement follows from the structural risk definition, and the last statement is since improving the cut set increases the risk of the disconnected component cut sets. The second insight is that the optimal new edge for reducing the structural risk of a structural cut set is close to its source component from one side and to its disconnected component from the other side. 
## Appendix C Most reliable networks near zero We aim to find the most reliable graph near zero for a fixed number of nodes and edges. First, we must define a reliable graph for small, large, or all \(p\). The set \(G_{n,m}\) includes all networks with \(n\) nodes and \(m\) edges. **Definition 1**.: _Most reliable graph_ _Given \(n,m\in\mathbb{N}\), define:_ 1. _An uniformly most reliable graph_ is a graph \(G\in G_{n.m}\) s.t \(\forall p\in[0,1]:F_{G}(p)=\min_{H\in G_{n,m}}F_{H}(p)\) 2. _A most reliable graph near zero_ is a graph \(G\in G_{n.m}\) s.t \(\exists p^{\prime}\in[0,1]:\forall p\in[0,p^{\prime}]\,F_{G}(p)=\min_{H\in G_{n,m}}F_{H}(p)\) 3. _A most reliable graph near one_ is a graph \(G\in G_{n.m}\) s.t \(\exists p^{\prime}\in[0,1]:\forall p\in[p^{\prime},1]\,F_{G}(p)=\min_{H\in G_{n,m }}F_{H}(p)\) Identifying the uniformly most reliable graph is helpful because, in practice, we do not always have accurate information on the failure probability of network edges. This problem has been extensively researched by mathematicians[8]. The main difference is that their focus has been on the connectivity index called the all-terminal unreliability \(Pr(G\) disconnected). While there are some known most reliable graphs relative to the all terminal unreliability, and some cases where it proved to have no such most reliable network[18], we prove in subsection C.2 that the only uniformly most reliable network relative to the SAIDI index is the star graph. This result encourages us to find the most reliable graph near zero(subsection C.3). ### Methods An alternative approach to represent the SAIDI polynomial with a more combinatorial interpretation exists. Instead of using the conventional polynomial expression \(F_{G}=\sum_{k=0}^{m}a_{k}(G)p^{k}\), we can employ the binomial form \[F_{G}=\sum_{k=1}^{m}b_{k}(G)p^{k}q^{m-k}\] (C33) In this form, the value \(b_{k}\) corresponds to the sum of the number of disconnected nodes resulting from each of the graph's \(k\) edges cut sets, which is deduced from the probability of having \(k\) edge failures and \((m-k)\) operational edges, which is given by \(p^{k}q^{m-k}\). The following equations can be used to convert between the two representations [12]: \[a_{1} = b_{1};\] \[a_{k} = \sum_{j=1}^{k}(-1)^{j+k}\binom{k}{j}b_{j}\] (C34) \[b_{k} = a_{k}+\sum_{j=1}^{k-1}(-1)^{j+k-1}\binom{k}{j}b_{j}\] The binomial SAIDI representation creates an effective method for finding the most reliable graphs. **Proposition 2**.: _Coefficients compression lemma[10] Given two polynoms \(f=\sum_{k=1}^{m}a_{k}p^{k}(1-p)^{m-k},g=\sum_{k=1}^{m}b_{k}p^{k}(1-p)^{m-k}\)_ 1. _If_ \(\exists i\) _s.t_ \(\forall j\in[1,i):a_{j}=b_{j}\)_, and_ \(a_{i}\leq b_{i}\)_, then there exist_ \(p^{\prime}\) _s.t_ \(\forall p\in[0,p^{\prime}]:f(p)\leq g(p)\)__ 2. _If_ \(\exists i\) _s.t_ \(\forall j\in[1,i):a_{m-j}=b_{m-j}\)_, and_ \(a_{m-i}\leq b_{m-i}\)_, then there exist_ \(p^{\prime}\) _s.t_ \(\forall p\in[p^{\prime},1]:f(p)\leq g(p)\)__ 3. _If_ \(\forall i\)__\(a_{i}\leq b_{i}\)_, then_ \(\forall p\in[0,1]:f(p)\leq g(p)\)__ We can follow a sequential approach to identify the most reliable graph for small or large values of \(p\). We begin by identifying all the networks in \(G_{n,m}\) that minimizes the first coefficient from each side. We then choose the network with the minimum value of the second coefficient from this subset, and so on. This process establishes a total order in \(G_{n,m}\), the lexicographic order induced by the polynomial coefficient vector. 
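For small graphs, the coefficients \(b_{k}\) can be computed directly from their combinatorial definition, which also makes the lexicographic comparison above easy to carry out. The sketch below is a brute-force illustration only (exponential in \(m\)); it assumes the quantity of interest is the expected number of nodes disconnected from the source, in line with the binomial form of equation C33, and uses the networkx library.

```python
from itertools import combinations
import networkx as nx

def binomial_coefficients(G, source):
    """b_k = total number of nodes disconnected from `source`, summed over all
    subsets of exactly k failed edges (brute force; small graphs only)."""
    edges = list(G.edges())
    m = len(edges)
    b = [0] * (m + 1)
    for k in range(m + 1):
        for failed in combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(failed)
            connected = nx.node_connected_component(H, source)
            b[k] += G.number_of_nodes() - len(connected)
    return b

def saidi_from_coefficients(b, p):
    """Evaluate the binomial form of equation C33 at failure probability p."""
    m = len(b) - 1
    q = 1 - p
    return sum(b[k] * p ** k * q ** (m - k) for k in range(m + 1))

# Example: a 4-node ring with the source at node 0.
ring = nx.cycle_graph(4)
coeffs = binomial_coefficients(ring, source=0)
print(coeffs, saidi_from_coefficients(coeffs, p=0.01))
```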
Near zero, the lexicographic order is from the left to right of the coefficient vector; near one, the order is in the opposite direction. By sorting the polynomial coefficient vector from the \(0\) to the \(m\) coefficients, we can say that \(G_{1}<G_{2}\) if and only if there exists a value \(p^{\prime}\) such that for all \(p\) in the range \([0,p^{\prime}],F_{G1}\leq F_{G2}\). Similarly, by sorting the coefficient vector from the \(m\) to the \(0\) coefficients, we can say that \(G_{1}<G_{2}\) if and only if there exists a value \(p^{\prime}\) such that for all \(p\) in the range \([p^{\prime},1],F_{G1}\leq F_{G2}\). We define a chain of network sets for a fixed \(n,m\in\mathbb{N}\) \[A_{0} = \{G\in G_{n,m}\mid\forall H\in G_{n,m}:b_{0}(G)\leq b_{0}(H)\}\] \[A_{k} = \{G\in A_{k-1}\mid\forall H\in A_{k-1}:b_{k}(G)\leq b_{k}(H)\}\] (C35) \[B_{0} = \{G\in G_{n,m}\mid\forall H\in G_{n,m}:b_{m}(G)\leq b_{m}(H)\}\] \[B_{k} = \{G\in B_{k-1}\mid\forall H\in B_{k-1}:b_{m-k}(G)\leq b_{m-k}(H)\}\] Note that \(A_{k}\subseteq A_{k-1}\) and that \(B_{k}\subseteq B_{k-1}\). Also, the networks in \(A_{k}\) or \(B_{k}\) are smaller than the graph in \(A_{k-1}\), \(B_{k-1}\), respectively, relative to the appropriate order. The most reliable network near zero is in \(A_{m}\), and the most reliable network near one is in \(B_{m}\). ### The only uniformly most reliable network is a star graph Reliable networks near zero are from maximal connectivity. The connectivity of a network is the size of its smallest cut set. A network that is \(k\)-connected is in \(A_{k-1}\) because all of its \((k-1)\) first coefficients are \(0\). The network's connectivity is at least its minimal degree because the set of all adjacent edges of a node is a cut set. From the hand-shaking lemma, a network with a minimal degree of \(k\) has at least \(kn/2\) edges. A super \(k\)-connected graph is a \(k\)-connected graph such that the only cut sets from size \(k\) are the edges adjacent to a node. A super \(k\)-connected graph is in \(A_{k}\) because it minimizes the \(k\)-coefficient. Finally, Bauer [19] found that a super \(k\)-connected \(k\)-regular graph for \(k\geq 3\) always exists. We conclude that the most reliable network near zero has connectivity of at least \(\lfloor\frac{2m}{n}\rfloor\). From the other side, we prove in the rest of the subsection that the most reliable networks near one have smaller connectivity which leads to the conclusion that there is no uniformly most reliable graph for non-tree graphs. The most reliable networks near one are star multigraphs with almost equal edge multiplicity. Using the coefficient compression method, we must first minimize the coefficient \(b_{m-1}\). This coefficient counted the number of connected nodes resulting from only one working edge. In that case, the only optionally connected nodes are the neighbors of the source, which imply that the optimal network is a star multigraph with the source in the middle. The next step is to minimize \(b_{m-2}\) over \(B_{1}\). Mark the edge multiplicity of the edge \((s,t)\) for \(t\in V(G)\) as \(r_{t}\). \(b_{m-2}\) represent two working edges. If the two edges are from the same nodes, \((n-1)\) nodes fail, and else, \((n-2)\) nodes fail. \[b_{m-2}=\left([n-1]\sum_{i=1}^{m}\binom{r_{i}}{2}+[n-2]\left(\binom{m}{2}-\sum _{i=1}^{m}\binom{r_{i}}{2}\right)\right)\] (C36) So in order to minimize \(b_{m-2}\) we must minimize \(\sum_{i=1}^{m}\binom{r_{i}}{2}=\frac{1}{2}(\sum_{i=1}^{m}r_{i}^{2}-m)\) i.e minimize \(\sum_{i=1}^{m}r_{i}^{2}\). 
Proposition 3 states that this sum is minimized if the numbers \(\{r_{i}\}\) differ by at most \(1\). Finally, we conclude that the most reliable network near one is a star multigraph such that the edge multiplicity of any two edges differed at most by \(1\). It follows that the only uniformly most reliable network is a star graph in the case of \(m=n-1\) or \(n=2\). The minimal degree of the most reliable network near one is \(\lfloor\frac{m}{n-1}\rfloor\). On the other hand, the connectivity of the most reliable network near zero is \(\lfloor\frac{2m}{n}\rfloor\). Therefore, if \(m\geq n\) and \(n\neq 2\), the uniformly most reliable networks do not exist. Also, in the case of \(m=n-1\), the graph is a tree with a SAIDI index of \(\bar{F}_{G}=1-\mathbb{E}[q^{d}]\) for \(d\) the number of nodes from each distance to the source. The multi-star graph minimizes this distance and is the uniformly most reliable network. Figure C7 shows an example of such a graph and compares it SAIDI with a super \(3\)-connected graph. We conclude that reliable networks with unreliable components should minimize their distances from the source. **Proposition 3**.: _minimize the sum of powers with a constant sum._ _Let \((x_{1}...,x_{k})\in\mathbb{N}^{k}\) s.t \(\sum_{i=1}^{k}x_{i}=C\) for \(s\geq 2\). The function \(f_{s}=\sum_{i=1}^{k}x_{i}^{s}\) is minimized if and only if \(\forall i,j:|x_{i}-x_{j}|\leq 1\)_ Proof.: If \(k=2\) then \(f_{s}=x_{1}^{s}+x_{2}^{s}\). \(f_{s}\) has only one minimum point \(x_{1}^{\prime}=x_{2}^{\prime}=\frac{x_{1}+x_{2}}{2}\). Therefore, the minimum in \(\mathbb{N}\) is \(x^{\prime}\in\{\lfloor\frac{x_{1}+x_{2}}{2}\rfloor,\lceil\frac{x_{1}+x_{2}}{ 2}\rceil\}\). For \(k>2\), assume by contradiction and without loss of generality that there is a minimum point \(X=(x_{1},...x_{k})\in\mathbb{N}^{k}\) of \(f_{s}\) s.t \(x_{1}=x_{2}+a\) for \(|a|>2\) then for \(Y=(\lfloor\frac{x_{1}+x_{2}}{2}\rfloor,\lceil\frac{x_{1}+x_{2}}{2}\rceil,x_{3 },...,x_{k})\) \[f_{s}(Y) = \lfloor\frac{x_{1}+x_{2}}{2}\rfloor^{s}+\lceil\frac{x_{1}+x_{2}}{ 2}\rceil^{s}+\sum_{i=3}^{k}x_{i}^{k}\] \[< \sum_{i=1}^{k}x_{i}^{k}=f_{s}(X)\] From the other hand, all the points \(X\in\mathbb{N}^{k}\)s.t \(\forall i,j:|x_{i}-x_{j}|\leq 1\), have the same value of \(f_{s}(X)\) because, suppose that \(\beta=\mod(C,k)\), then \(x_{1}=\ldots=x_{\beta}=\lceil\frac{C}{k}\rceil\) and \(x_{\beta-1}=\ldots=x_{k}=\lfloor\frac{C}{k}\rfloor\). ### Reliable network graphs are subdivisions of a cubic graphs A _sparse network_ is defined as one that maintains \(m<1.5n\). For small values of \(p\), the most reliable sparse network has a connectivity of two. Chains of edges with a degree of two can be contracted into a single chain, resulting in a structure graph where nodes are called hubs and edges are chains. The failure of each chain is \(1-q^{c+1}\) for \(c\) the number of nodes in the chain. By the ring-path formula(equation A3), we can calculate the SAIDI index of the network using its structure graph. [11] shows that in the most reliable graph near zero relative to the all terminal polynomial, the structure graph is a 3-connected and 3-regular graph, and the chain's lengths differ by at most one relative to the all terminal reliability. We prove that the result for the SAIDI polynomial is the same. First, we minimize the second coefficient of \(P_{2}f_{2}\) in equation A3. 
The second coefficient is(equation A18) \(\sum_{e^{\prime}\in S(G)}c(e^{\prime})^{3}+3c(e^{\prime})^{2}+2c(e^{\prime})\) where \(c(e^{\prime})\) is the chain length. From proposition 3, because the sum of all chain's lengths is constant, the minimum of the sum obtain if the chain lengths differ at most by 1. In this scenario, the length of each chain is approximately \(\frac{n-n^{\prime}}{m^{\prime}}\) where \(n^{\prime}\) and \(m^{\prime}\) are the number of nodes and edges in the structure graph, respectively. The structure graph of the most reliable network near zero is a 3-connected cubic graph with almost equal chains. To prove it, we first notice that in the structure graph, \(m^{\prime}-n^{\prime}=m-n\) because by transforming a node with two edges to one edge, we preserve the difference \(m-n\). To further minimize the second coefficient of \(P_{2}f_{2}\), we need to minimize the chain's length \(\frac{n-n^{\prime}}{m^{\prime}}\). From the hand-shaking lemma and because the minimal degree of the structure graph is three, \(3n^{\prime}\leq\sum_{v\in V(S(G))}deg(v)=2m^{\prime}\), which creates the upper bounds \(m^{\prime}\leq 3(m-n)\) and \(n^{\prime}\leq 2(m-n)\). The upper bound of the inequalities is received if the average degree in the structure graph is exactly 3; hence, \(S(G)\) is a cubic graph. If the structure graph is also a 3-connected graph, the second coefficients of \(P_{0}f_{0},P_{1}f_{1}\) and \(F_{S(G)}\) are 0. We conclude that all the networks in \(A_{2}\) have a cubic 3-connected structure graph with a chain length that differ by at most 1. We can minimize the network's third coefficient by strategically placing the long chains. Assuming the structure graph is super 3-connected, we aim to reduce the number of long chains connected to the source node and minimize the number of long chains that share a connection with the same hub. We can prove it using the risk formula 6. Let \(e_{i}^{\prime}\) denote the length of the chain, given by \(\alpha+r_{i}\), where \(\alpha\) is a constant, and \(r_{i}\in\{0,1\}\). In a super 3-connected graph, every three-order minimal cut set comprises one edge from each chain that connects to a particular hub. The risk associated with the cut sets originating from hub \(v\) is given by: \[\prod_{e_{i}^{\prime}\sim v}\left(\alpha+r_{i}+1\right)\left(D_{G}(E(v))+\frac {1}{2}\sum_{e^{\prime}\in X^{\prime}}(\alpha+r_{i})\right)\] (C37) Here, \(D_{G}(E(v))\) is 1 if \(v\) is not the source, and \(n-1\) otherwise. Consequently, we must avoid long chains that connect to the source. Furthermore, to minimize the factor \(\prod_{e_{i}^{\prime}\sim v}(\alpha+r_{i}+1)\), we aim to reduce the number of long chains that connect to the same hub. The 3 connectivity of the structure graph is the most crucial structure graph factor in practice. Figure C8 compares different structure graphs with equal chains: a super 3-connected graph, the worst 3-connected graph with a given size, and a \(k\)-rings graph. All of those graphs have almost the same number of total nodes. Although the differences between the \(k\)-rings and the other graphs are significant, the difference between the optimal and worst 3-connected graphs is minor, near zero. The figure also presents a family of super 3-connected graphs, which are two connected rings. This simple form is used in subsection E.1 to create a reliable grid graph. 
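As a quick numerical check of the equal-chain-length rule, the sketch below distributes a fixed number of internal nodes over a fixed number of chains and scores each distribution with the per-chain term \(c^{3}+3c^{2}+2c\) from equation A18; the brute-force optimum is a distribution whose lengths differ by at most one, in agreement with Proposition 3. The example sizes are arbitrary.

```python
from itertools import combinations_with_replacement

def second_coeff(chain_lengths):
    # Per-chain contribution c^3 + 3c^2 + 2c from equation A18, summed over chains.
    return sum(c ** 3 + 3 * c ** 2 + 2 * c for c in chain_lengths)

def best_distribution(total_internal_nodes, num_chains):
    """Brute-force search for the chain-length vector that minimizes the second
    coefficient, given a fixed total number of internal (degree-2) nodes."""
    best = None
    for lengths in combinations_with_replacement(range(total_internal_nodes + 1),
                                                 num_chains):
        if sum(lengths) != total_internal_nodes:
            continue
        score = second_coeff(lengths)
        if best is None or score < best[0]:
            best = (score, lengths)
    return best

# 20 internal nodes over 6 chains: the optimum has lengths differing by at most 1,
# e.g. (3, 3, 3, 3, 4, 4).
print(best_distribution(20, 6))
```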
The worst 3-connected graph is chosen by iterating all the cubic graphs with a given size using a database of all the cubic graphs with a given size [20][21] Note that by adding \(r+1\) redundant edges near zero, we approximate the SAIDI of the optimal graph as a ring with \((n-2r)/3r\) nodes \[\bar{F}_{G}\overset{\text{order }2}{=}\bar{F}_{ring}(\lfloor\frac{n-2r}{3r} \rfloor)\overset{n\gg 1}{\approx}\frac{\left(1-2\frac{r}{n}\right)^{2}}{9r^{2}} \cdot\left[F_{ring}(n)\right]\] (C38) On the other hand, a naive solution uses a \((r+1)\)-rings approach, meaning setting a network with \((r+1)\) almost equal chains. The rings size in the \((r+1)\) rings solution is approximately \(\frac{n}{r+1}\). We conclude that for a large value of \(n\), the cubic structure graph approach is more than 9 times better than the multi-rings approach. **Remark 1**.: _The optimal cubic structure graph solution is better than the naive \((r+1)\)-rings solution by a factor of approximately_ \[9\cdot\frac{r^{2}}{(r+1)^{2}(1-2\frac{r}{n})^{2}}\] (C39) _for a large value of \(n\)(figure C8)._ ### Other reliability indexes The construction rules of the most reliable graph near zero are maintained for other reliability indexes, such as the all-terminal reliability and pairwise reliability. The all-terminal reliability is the probability that the network is not connected \(U_{G}=1-Pr(G\mbox{ connected})\). The pairwise reliability is the total number of disconnected node pairs \(C_{G}=\sum_{v,u\in V(G)}Pr_{G}(v\nleftrightarrow u)\). The pairwise reliability is also the sum of the SAIDI index among all the optional sources in the graph. [11] proves that the most reliable sparse graph relative to the all terminal reliability index is a 3-regular and 3-connected structure graph with chain lengths that differ by at most one. This equal-length chain property is also correct for the pairwise reliability index. If the structure graph is three-connected, the second coefficient of the polynom \(Pr_{G}(v\nleftrightarrow u)\) for a pair of nodes \(u\) and \(v\) is the sum of all two-order cut sets that disconnect the two nodes. Those two-order cut sets are in the same chain. A pair of edges in the same chain with a distance \(s\) disconnect \(s(n-s)\) pairs of nodes if \(n\) is the number of nodes in the graph. Each chain with \(c+1\) edges has \(c+1-s\) edges pairs with a distance \(s\). it follows that the second coefficient of the pairwise polynomial is \[b_{2}=\sum_{e_{i}^{\prime}\in E(S(G))}\sum_{s=1}^{c_{i}}(c_{i}+1-s)\cdot s(n-s) =\frac{1}{12}\sum_{e_{i}^{\prime}\in E(S(G))}c_{i}(c_{i}+1)(c_{i}+2)(2n-(c_{i} +1))\] (C40) This sum is minimized if the chain's length is differed by at most one. We can prove it by observe that the function \(h(x)=x(x+1)(x+2)(2n-(x+1))\) is convex. We define two numbers \(a<b\) and two numbers \(\alpha=ta+(1-t)b\) and \(\beta=(1-t)a+tb\). From the convex definition, \(h(\alpha)+h(\beta)<h(a)+h(b)\). It follows that \(h(\lfloor\frac{a+b}{2}\rfloor)+h(\lceil\frac{a+b}{2}\rceil)<h(a)+h(b)\), and similar to proposition 3, if the optimal solution has two chains length that differs by more than one, we can take the floor and the ceil of their average to get a better solution. ## Appendix D Computation details There are three significant algorithmic challenges in the network analysis. The first involves computing the SAIDI index. The second is identifying all minimal cut sets, and the third challenge is calculating the risk of a minimal cut set. 
### SAIDI computation Calculating the SAIDI index of a graph is a computationally expensive task and is known to be an NP-hard problem [17]. The classical algorithm used to find the SAIDI index is the deletion-contraction algorithm, which has a time complexity of \(O(2^{m})\), where \(m\) is the number of edges in the graph. The deletion-contraction algorithm is a recursive algorithm where, in each step, one edge is chosen, and the probability space is split into the case where this edge is failing (deletion) and the case where the edge works (contraction). However, for certain families of graphs, a tree decomposition-based algorithm can be used to calculate the SAIDI index more efficiently. _Tree decomposition_ is a tree-like structure that enables a faster computation of certain graph properties. A tree decomposition of a graph \(G\) is a pair \((T,X)\), where \(T\) is a tree, and \(X=\{X_{i}:i\in V(T)\}\) is a collection of sets, called bags, such that: 1. For each vertex \(v\in G\), there exists at least one bag \(X_{i}\) that contains \(v\). 2. For each edge \((u,v)\in G\), there exists at least one bag \(X_{i}\) that contains both \(u\) and \(v\). 3. For each vertex \(v\in G\), the bags containing \(v\) form a connected subtree of \(T\). The width of a tree decomposition is the maximum size of any bag minus one. The treewidth of a graph is the minimum width among all of its tree decompositions. There exists a linear-time algorithm that identifies if a graph's treewidth is at most \(\omega\), and if so, finds the tree decomposition with treewidth at most \(\omega\)[22]. Tree decomposition algorithms can efficiently calculate the SAIDI index of graphs with small treewidth. Existing algorithms use tree decompositions to calculate the \(K\)-terminal reliability of a graph and the reaching probability between two nodes [23, 24]. The \(K\)-terminal reliability is the probability that a set \(K\) of nodes is connected. These algorithms have a linear-time fixed-parameter complexity when the treewidth is used as the parameter. This means that the \(K\)-terminal reliability of graphs with treewidth smaller than a constant can be calculated in time that it linear in the number of nodes but exponential in the treewidth. Therefore, it is inefficient for graphs with large treewidth. By applying the algorithm for each node, a SAIDI index algorithm can be obtained with a complexity of \(O(n^{2}\cdot\omega^{3}\cdot 2^{(\omega+2)(\omega+1)})\), where \(n\) represents the number of nodes in the graph. Note that using the chain decomposition method and the ring-path formula in the tree decomposition-based approach, the algorithm's complexity is linear in the number of hubs in the graph but exponential in the treewidth of the structure graph. Although the exact SAIDI calculation is challenging, \(k\)-order approximations offer a polynomial time complexity in terms of the number of edges. In subsection A.5, we show that a 3rd-order approximation of the SAIDI index satisfies near zero. The complexity of the deletion-contraction \(k\)-order approximation is \(O(m^{k})\). Similarly, the tree decomposition-based algorithm is polynomial in the treewidth of the graph. By combining the ring-path formula, the tree decomposition algorithm, and the observation that a third-order approximation is good in sparse graphs, we conclude that the SAIDI calculation of the graph is relatively fast. ### Find minimal cut sets Minimal cut sets of the structure graph are used to find structural risks. 
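As a concrete reference point for the discussion below, the following brute-force sketch enumerates all edge subsets of size at most three, keeps those that disconnect at least one node from the source, and discards the non-minimal ones. It is an illustrative implementation only (exponential in the order bound) and assumes an undirected networkx graph; the grid example at the end is arbitrary.

```python
from itertools import combinations
import networkx as nx

def minimal_cut_sets(G, source, max_order=3):
    """Minimal cut sets of size <= max_order that disconnect at least one node
    from `source` (brute force over edge subsets; small graphs only)."""
    def disconnects(edge_subset):
        H = G.copy()
        H.remove_edges_from(edge_subset)
        return len(nx.node_connected_component(H, source)) < G.number_of_nodes()

    cuts = []
    for k in range(1, max_order + 1):
        for subset in combinations(G.edges(), k):
            if not disconnects(subset):
                continue
            # minimality: no proper subset may already be a cut set
            if any(disconnects(sub) for r in range(1, k)
                   for sub in combinations(subset, r)):
                continue
            cuts.append(subset)
    return cuts

# Example: minimal cut sets of order <= 3 in a 3x3 grid graph, source at (0, 0).
grid = nx.grid_2d_graph(3, 3)
print(len(minimal_cut_sets(grid, source=(0, 0))))
```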
Finding those structural risks are important to understand the weak point of the network and decide which new edges are the most important. Because we focus on finding minimal cut sets of the structure graph and approximate the reliability index through 3rd-order cut sets, finding the relevant minimal cut sets of a sparse network is simpler. Minimal cut sets are induced from connected subgraphs. Minimal cut sets separate the graph into exactly two connected components because otherwise, we can find a cut set contained in the minimal cut set. We can find the minimal cut sets by enumerating all the connected subgraphs and finding their boundary edges[25]. However, not every boundary edges of a connected subgraph is a minimal cut set. Therefore, after finding all the connected subgraphs of the structure graph, we need to check which cut sets are not minimal. We can also use the algorithm proposed in [26] or simply enumerate all the edges subsets of the graph with at most three edges and check which ones are minimal. ### Calculating risks After enumerating all the structure graph minimal cut sets, we can calculate the inter and structural risks of the network. The challenging part of calculating inter and structural risks is the probabilities \(P_{2}(e^{\prime})\) and \(Pr_{S(G)}(s\leftrightarrow X^{\prime})\) for a chain \(e^{\prime}\) and structural minimal cut set \(X^{\prime}\). The exact calculations can be done using the deletion-contraction or tree decomposition-based algorithm. However, suppose we calculate only a third-order approximation. In that case, those probabilities are much simpler: to calculate \(P_{2}(e^{\prime})\), we only need to find bridges that disconnect the chain, and finding \(Pr_{S(G)}(s\leftrightarrow X^{\prime})\) only requires to find cut sets with at most \((3-|X^{\prime}|)\) edges. ## Appendix E Test cases ### Grid We use dynamic programming to calculate the optimal new edges in a grid, as shown in figure 5. The objective is to minimize the second coefficient of the super 3-connected network grid. The grid consists of m columns, and the source points are located in the top-left and bottom-right corners. Initially, the number of new edges between each adjacent column pair is defined, and the score of each new edge configuration is calculated by summing the ring second coefficient for each chain in the new graph. The goal is to minimize the score. In the algorithm's \(i\) iteration, we calculate for each edge's configuration between the \(i\) and the \((i+1)\) columns the optimal edge's configuration between columns \(1\) to \(i\). In the first step, we enumerate all the edge configurations between the first and the second columns. In the \(i\) iteration, we already know the score of the optimal configurations on columns \(1\) to \((i-1)\) given each edge configuration between the columns \((i-1)\) and \(i\). We then enumerate each pair of edges configuration between the \((i-1)\) to the \(i\) columns and between the \(i\) to the \((i+1)\) columns. The score of each configuration is the sum of the score on columns \(1\) to \((i-1)\), which is already known, and the score of the \(i\) columns. The complexity of the algorithm is \(O\left(m\cdot max_{i=1}^{m-1}\left(n_{i}^{k_{i}}\cdot n_{i+1}^{k_{i+1}}\right)\right)\) where \(n_{i}\) is the length of the \(i\) columns, and \(k_{i}\) is the number of edges between the columns \(i\), \((i+1)\). 
This example demonstrates the principle of modular design as implied by equation A4: the only effect of new edges within a subgraph on the rest of the graph is the partition probabilities of the subgraph boundaries. We can more efficiently determine the optimal new edges by computing the optimal inter-edges given the boundary configurations. There are even better grid configurations. The grid topology, as presented above, has a major disadvantage because the east-west chains are from length \(1\). To overcome this problem, we present, for example, a \(5X100\) grid network with only four redundant edges, the same as the basic network topology(figure E9). However, the chains in the network are distributed more evenly, resulting in a more than 4-fold improvement in the second coefficient of the SAIDI index. However, since the structure graph is not super 3-connected, the improvement in SAIDI is only by a factor of \(3.32\) for \(p=0.0001\). We create the network by choosing a family of 3-connected cubic graphs of two connected rings, as shown in figure C8, with six nodes to create five redundant edges and then fit the topology to the actual nodes. Using the cubic networks database[20], we can verify that there is no planner super 3-connected cubic graph with six nodes. Because we have two sources, it is enough that each of them is from a degree of two. Therefore, we can split two of the chains into two chains, which results in a total of 11 chains. Figure E9 presents the construction of this graph. ### Real networks analysis We analyze data from the public electrical vehicle load capacity map by "Atlantic city electric" as presented in the main paper. The map goal as stated on the map website2 is to "represent areas on the distribution grid where it is reasonable capacity to accommodate electric vehicle charging infrastructure and other load sources with a lower probability of necessitating extensive equipment upgrades or line extensions that would add cost or time to projects". The data was extracted manually in March 2023. A screen shot of the map is shown in figure E10. Footnote 2: [https://www.arcgis.com/apps/dashboards/30d93065bbcf41d08e39638f60e5ad77](https://www.arcgis.com/apps/dashboards/30d93065bbcf41d08e39638f60e5ad77) To increase the network's reliability, we identify short, optional road segments that can be used to create both chain-to-chain and inter-chain configurations, thereby improving the network's reliability. We then experiment with various new edge configurations, aiming to minimize the second coefficient of the polynomial. For instance, two chains are relatively close in network 2. Therefore, we can use a chain-to-chain edge to connect them in the small gaps, which helps to enhance reliability. Finally, we enumerate the different combinations of these new edges and try to minimize the sum of the second coefficient of the SAIDI for a 3-connected subgraph, represented as \(\sum_{e\in S(G)}\frac{1}{6}(c(e)^{3}+3c(e)^{2}+2c(e))\). Similar to optimizing new edges in a grid, we can use a more sophisticated dynamic programming approach, whereby each new edge divides the chains into two segments, and we optimize each of these segments independently. There are networks in which it is not worth investigating in new lines. For example, in network 3, apart from a crossing new edge between the two rings, there are no significant improvements we can make to the network's reliability without resorting to expensive lines. 
Moreover, since the neighborhood is small, it is not worth investing in additional lines. However, a further optional improvement (version 2) is to move some nodes onto the new crossing edge so as to reduce the length of the longer chain.
2304.07584
FSDNet-An efficient fire detection network for complex scenarios based on YOLOv3 and DenseNet
Fire is one of the common disasters in daily life. To achieve fast and accurate detection of fires, this paper proposes a detection network called FSDNet (Fire Smoke Detection Network), which consists of a feature extraction module, a fire classification module, and a fire detection module. Firstly, a dense connection structure is introduced in the basic feature extraction module to enhance the feature extraction ability of the backbone network and alleviate the gradient disappearance problem. Secondly, a spatial pyramid pooling structure is introduced in the fire detection module, and the Mosaic data augmentation method and CIoU loss function are used in the training process to comprehensively improve the flame feature extraction ability. Finally, in view of the shortcomings of public fire datasets, a fire dataset called MS-FS (Multi-scene Fire And Smoke) containing 11938 fire images was created through data collection, screening, and object annotation. To prove the effectiveness of the proposed method, the accuracy of the method was evaluated on two benchmark fire datasets and MS-FS. The experimental results show that the accuracy of FSDNet on the two benchmark datasets is 99.82% and 91.15%, respectively, and the average precision on MS-FS is 86.80%, which is better than the mainstream fire detection methods.
Li Zhu, Jiahui Xiong, Wenxian Wu, Hongyu Yu
2023-04-15T15:46:08Z
http://arxiv.org/abs/2304.07584v1
# FSDNet-An efficient fire detection network for complex scenarios based on YOLOv3 and DenseNet ###### Abstract Fire is one of the common disasters in daily life. To achieve fast and accurate detection of fires, this paper proposes a detection network called FSDNet (Fire Smoke Detection Network), which consists of a feature extraction module, a fire classification module, and a fire detection module. Firstly, a dense connection structure is introduced in the basic feature extraction module to enhance the feature extraction ability of the backbone network and alleviate the gradient disappearance problem. Secondly, a spatial pyramid pooling structure is introduced in the fire detection module, and the Mosaic data augmentation method and CIoU loss function are used in the training process to comprehensively improve the flame feature extraction ability. Finally, in view of the shortcomings of public fire datasets, a fire dataset called MS-FS (Multi-scene Fire And Smoke) containing 11938 fire images was created through data collection, screening, and target annotation. To prove the effectiveness of the proposed method, the accuracy of the method was evaluated on two benchmark fire datasets and MS-FS. The experimental results show that the accuracy of FSDNet on the two benchmark datasets is 99.82\(\%\) and 91.15\(\%\), respectively, and the average precision on MS-FS is 86.80\(\%\), which is better than the mainstream fire detection methods. f fire classification, fire detection, spp, feature extraction, dense connection. ## 1 Introduction Fires are characterized by suddenness and spread, and can cause huge economic losses and seriously threaten human life safety in a short period of time. According to data released by the Fire and Rescue Bureau of the National Emergency Management Department in 2022, there were a total of 748,000 reported fires in China in 2021, causing 1,987 deaths, 2,225 injuries, and direct property losses of up to 6.75 billion yuan. Therefore, researching fire automatic detection methods that combine real-time and accuracy has important theoretical and practical significance. Generally speaking, fire detection methods can be roughly divided into two types: sensor-based detection methods[1] and vision-based detection methods[2]. Sensor-based detection methods played a crucial role in early fire detection, which mainly sense the characteristic changes accompanying a fire, such as heat, light, and smoke concentration, through sensors[3]. However, the fire features from the source require a certain amount of time to be sensed by deployed sensors, which leads to the response of corresponding sensors being lagging. This characteristic also limits the applicability of this method to large spaces or public places. In addition, this method also requires human and other equipment involvement to further determine information such as the size, location, and degree of burning of the fire[4]. With the popularization of digital cameras and the rapid development of various technologies such as image processing, machine learning, and deep learning, video-based fire detection methods have been rapidly developed. This type of method captures real-time video image information indoors and outdoors through cameras, transmits the images to a data processing center, performs real-time processing and analysis, and finally judges the fire situation on-site and takes corresponding response measures. 
In the early days, vision-based methods were accomplished by manually extracting single or multiple features from fire images, including static features such as brightness, color, and texture, and dynamic features such as flicker and motion[5]. For example, Celik et al.[6] effectively detected fires from video sequences by combining the YCbCr color space model with foreground object information. Chen et al.[7] proposed a method that combines the RGB color space model with dynamic analysis to detect flames and uses the growth characteristics of the flames to perform secondary judgment on suspected areas. However, detection methods based on a single static feature are difficult to adapt to various complex situations and are easily affected by environmental factors. To overcome this limitation, researchers have considered combining dynamic features with the static characteristics of flames. Dedeoglu et al.[8] proposed a flame detection method that combines color, motion features, and wavelet domain analysis. Time-domain wavelet transforms are used to determine the boundaries of flames, while spatial-domain wavelet transforms are used to detect changes in flame color. Toreyin et al.[9] utilized an implicit Markov model to detect the flickering process of flames, effectively reducing false alarms caused by dynamic flame color similar objects, but the method is sensitive to changes in lighting conditions. Habiboglu et al. [10] developed a video-based wildland fire detection system that obtains the moving area of flames through background subtraction and color thresholding, and then extracts relevant features from the motion area. Zhao et al. [11] constructed static SVM classifiers and dynamic SVM classifiers based on 11 static features and 27 dynamic features of fire samples to achieve real-time detection of forest fires, but the false detection rate was high. Most of the methods proposed by the aforementioned researchers are based on hand-crafted features and have limitations in their application scenarios. Therefore, they have gradually been replaced by methods based on deep learning [12]. Deep learning-based methods can more effectively extract deep features and semantic information from fire images, further improving the performance of fire detectors. In recent years, there have been many fire detection methods based on Convolutional Neural Networks (CNNs) [13]. Frizzi et al. [14] used a CNN for fire and smoke detection and tested it with real fire video sequences, showing superior classification performance compared to traditional fire detection methods. Gonzalez et al. [15] proposed a fire detection model combining CNNs and unmanned aerial vehicles based on AlexNet [16] and convolutional sequences, but the model had a high false alarm rate. In [17], a video fire detection method was proposed by combining Faster R-CNN with LSTM to extract spatial and temporal features of flames. In [18], a video fire detection method that simultaneously considers dynamic features based on motion flicker and depth static features was proposed. Some researchers have applied CNNs, Transformers [19], and image segmentation [20] to the field of visual fire detection to develop trained models for collecting fire image features. Sharma et al. [21] used VGG16 and ResNet50 to perform fire detection in images and provided a self-collected fire dataset. Zhang et al. 
[22] compared the detection performance of forest fires using object detection algorithms such as Faster R-CNN [23], YOLO [24, 25, 26], and SSD [27], where SSD showed better real-time performance and detection accuracy. They further optimized the network structure and proposed a new Tiny-YOLO network, demonstrating its effectiveness. Although some achievements have been made in the aforementioned fire detection methods, the overall accuracy is still low, and they have limited applicability, making it difficult to achieve high robustness in complex real-life scenarios. To address these issues, this paper proposes a fire detection network named FSDNet, which combines classification and detection functions based on YOLOv3 and is used for fire monitoring and prevention. Our main contributions can be summarized as follows: (1) A fire detection network named FSDNet based on YOLOv3 and dense connections is proposed. The network consists of a feature extraction module, a fire classification module, and a fire detection module, which combines classification and detection functions and can perform real-time and accurate fire recognition and detection on image and video data. (2) In the FSDNet network, a dense connection structure is introduced in the feature extraction module, which reduces the number of parameters while strengthening feature propagation through module stacking, small-scale convolution kernels, and convolutional pooling. The fire detection branch incorporates spatial pyramid pooling to fuse global and local features. The CIOU loss function is used as the bounding box regression loss to improve the precision of predicted bounding box localization. (3) To address the insufficiency of publicly available fire datasets, we independently established a fire dataset called MS-FS, which contains 11,938 fire images. Through a large amount of material collection, data screening, and object annotation work, the dataset covers hundreds of real fire scenarios. The experimental results show that the proposed FSDNet network has higher accuracy than existing mainstream fire detection methods. The overall accuracy, false alarm rate, and miss rate on two benchmark fire datasets DS1 and DS2 are 99.82\(\%\), 0.07\(\%\), 0.17\(\%\) and 91.15\(\%\), 10.28\(\%\), 7.56\(\%\), respectively. The mean Average Precision (mAP) on MS-FS reaches 86.80\(\%\), which is 8.5\(\%\), 6.4\(\%\), and 1.2\(\%\) higher than YOLOv3, Gaussian-YOLOv3, and YOLOv4 [28], respectively. It can accurately and real-time accomplish fire detection tasks. The proposed FSDNet will have a promising application prospect in community fire monitoring and prevention. ## 2 Method As shown in Fig. 1, the overall framework of the proposed fire detection network FSDNet consists of three modules: a dense-connected basic feature extraction module, a fire classification module, and a fire detection module with three detection branches. This network takes the flame dataset created in this paper as input and establishes two models with the same feature extraction module, respectively used to complete the fire classification task and real-time fire detection task. ### Feature Extraction Module DenseNet [29] has shown excellent performance in classification experiments based on CIFAR and ImageNet datasets, achieving lower error rates with fewer parameters compared to ResNet [30]. The specific connectivity structure of the dense connections is shown in Fig. 2. 
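A minimal PyTorch sketch of this dense connection pattern is given below. The 1*1 bottleneck followed by a 3*3 convolution mirrors the small-kernel design used in the feature extraction module, but the growth rate of 32 and the bottleneck factor of 4 are illustrative assumptions rather than the exact FSDNet configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One layer of a Dense Block: the new features are concatenated with the
    input, so every later layer sees the outputs of all preceding layers."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1, bias=False),  # 1*1 bottleneck
            nn.BatchNorm2d(4 * growth_rate), nn.ReLU(inplace=True),
            nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # dense (channel-wise) connection: concatenate input and new features
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlock(nn.Sequential):
    """Stack of DenseLayers; the channel count grows by growth_rate per layer."""
    def __init__(self, num_layers, in_channels, growth_rate=32):
        super().__init__(*[DenseLayer(in_channels + i * growth_rate, growth_rate)
                           for i in range(num_layers)])
```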
By establishing dense connections such that the input of each layer is derived from the output of all preceding layers, feature reuse is achieved in the channel dimension, maximizing information exchange between preceding and succeeding layers. This connectivity method has the following advantages: 1. Dense connections are equivalent to connecting each layer directly to the input and loss, effectively mitigating the problem of gradient vanishing. 2. Connecting all preceding layers to succeeding layers enhances feature propagation. 3. Compared to the common residual connection method of ResNet, the dense connection method significantly reduces the number of parameters. Based on the sensitivity of error rates to fire detection and the advantages of DenseNet, FSDNet chose the dense connection method used in DenseNet for the convolutional layer connections in the basic feature module. The introduced dense connection feature extraction module consists of four convolutional layers, three pooling layers, three Dense Block modules, and one fully connected layer, which increases the feature reuse, making it highly meaningful for detecting objects like fire Fig 1: FSDNet algorithm framework diagram. and smoke that do not have fixed shapes. Additionally, the feature extraction module uses small convolution kernels for feature extraction, consisting only of small convolution kernels (1*1, 3*3), to obtain a deeper and wider network with fewer parameters. The features extracted during the initialization stage are fed into the first Dense Block, while adjacent Dense Blocks are connected by 1*1 convolutional layers and max pooling layers for transition. The 1*1 convolutional layer is used for dimension reduction, and the max pooling layer suppresses local image noise and reduces interference from background items. The 1x1 convolutional layer acts as the bottleneck layer, reducing the computational load by decreasing feature dimensions and merging features from different channels. The 3*3 convolutional layer captures information from the eight-neighborhood of pixels, with the smallest kernel size, and cascading multiple 3*3 convolutional layers can capture the same receptive field with fewer parameters. For example, replacing one 7*7 convolutional layer with three cascading 3*3 convolutional layers can reduce 45\(\%\) of computational load and parameters. Additionally, the stacking of nonlinear activation functions increases the network's ability to perform nonlinear fitting. Figure 2: Dense connection structure in the basic feature extraction module. ### Fire Classification Module As shown in Fig. 1(a), after passing through the basic feature extraction module, the different layer feature maps generated by the input image are separately transmitted to the classification module and detection module. To fully utilize the global information of the input, the classification module uses a global average pooling layer[31] to receive the feature maps generated by the final output of the basic feature extraction module. 
The global average pooling layer can collect the spatial information of the input feature maps to obtain the global context information of the object; at the same time, its introduction can complete regularization throughout the entire network structure, reduce the number of parameters, and effectively prevent overfitting.After the global average pooling layer, a fully connected layer is connected to it, which can complete the learning of global information for the corresponding feature map, and can selectively emphasize useful features and suppress useless features. Considering that fire detection is a binary classification task, the activation function of the output layer is selected as Sigmoid. The final output of this module is Fire/Normal. ### Fire Detection Module The YOLOv3 detection framework was applied to the fire detection module, which shows great advantages in accuracy and real-time, suitable for fire detection tasks. To alleviate the problem of vanishing gradient that is easily occurred in the feature extraction network Darknet53 in YOLOv3, the feature extraction network proposed in section 2.1 of this paper is selected as the Backbone. Its dense connection structure makes full use of the feature information of different layers in the process of feature forward propagation and significantly reduces the number of network parameters. In addition, this paper further improves YOLOv3 in terms of model training and structure optimization, aiming to complete the fire detection task accurately and in real-time. #### 2.3.1 Architecture Similar to the idea of FPN[32], the detection part adopts a multi-scale detection method, where the feature maps output by the 24th, 44th, and 204th layers of the feature extraction module are respectively passed to the three detection branches of the detection part. The small feature map, with a larger receptive field and richer semantic information, generates small-scale detectors responsible for detecting large-sized objects. In addition, the small feature map, after upsampling, is concatenated with the feature map from the 44th layer of the feature extraction module to obtain medium-sized detectors responsible for detecting medium-sized object s. Similarly, the medium-sized feature map, after upsampling, is concatenated with the feature map from the 24th layer of the feature extraction module to obtain large-sized feature maps with higher resolution and accurate position information, and the large-scale detectors are responsible for detecting small-sized objects. #### 2.3.2 Spatial Pyramid Pooling Module and Mosaic Due to the morphological characteristics and large differences in size of fire detection objects (flames, smoke), a spatial pyramid pooling structure[33] is added to reduce the accuracy loss caused by object morphological differences in detection. The main purpose of the spatial pyramid pooling structure, first proposed, is to solve the following two problems: 1. image distortion caused by operations such as clipping and scaling of the input image; 2. repetitive extraction of image features by convolutional neural networks. In this paper, the SPP module is introduced between the basic feature extraction module and the bottom detection branch of the detection module. The SPP module consists of four parallel branches, with kernel sizes of 5*5, 9*9, 13*13 max pooling layers and a skip connection. 
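A minimal PyTorch sketch of such an SPP module is shown below; stride-1 max pooling with half-kernel padding is assumed so that every branch preserves the spatial size and the four outputs can be concatenated along the channel dimension.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling: 5*5, 9*9 and 13*13 max-pool branches plus a skip
    connection, concatenated along the channel dimension."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes]
        )

    def forward(self, x):
        # output has 4x the input channels: the skip branch plus three pooled branches
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)
```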
The largest pooling kernel size in the SPP module is designed to be as consistent as possible with the size of the input feature map, in order to strengthen the fusion between features of different sizes. The outputs of the four branches are concatenated at the feature map level, which enables the fusion of global and local features in the feature extraction module, greatly enriches the expression ability of the feature map, improves the recognition accuracy, and is beneficial for detecting objects with large differences in size, such as flames and smoke. Considering the unbalanced proportion of objects with different sizes in the MS-FS established in section 3 of this paper, we use Mosaic during training. Mosaic is a data augmentation technique Figure 3: The realization process of the Mosaic data enhancement method. first proposed in YOLOv4, which randomly scales, crops, and arranges four images and feeds the resulting image into the training network, equivalent to training with four images at once. This approach not only speeds up the training process but also greatly enriches the background of detected objects. The specific implementation process of Mosaic is as follows: firstly, four images are randomly selected from the dataset; secondly, each original image is subjected to operations such as flipping, scaling, color space transformation (brightness, saturation, hue), and the images are placed in order from top-left to top-right; finally, adjustments and combinations of images and object candidate boxes are made. As shown in Fig. 3. #### 2.3.3 Loss Function The complete form of the loss function for the detection algorithm FSDNet proposed in this article is shown as follows: \[Loss=L_{box}+L_{obj}+L_{cls} \tag{1}\] \[=\lambda_{coord}\sum_{i=0}^{K^{2}}\sum_{j=0}^{M}I^{obj}_{ij}(2-w_{i}\times h_ {i})[(x_{i}-\overset{\wedge}{x_{i}})^{2}+(y_{i}-\overset{\wedge}{y_{i}})^{2}] \tag{2}\] \[+\lambda_{coord}\sum_{i=0}^{K^{2}}\sum_{j=0}^{M}I^{obj}_{ij}(2-w_{i}\times h_ {i})[(w_{i}-\overset{\wedge}{w_{i}})^{2}+(h_{i}-\overset{\wedge}{h_{i}})^{2}] \tag{3}\] \[-\sum_{i=0}^{K^{2}}\sum_{j=0}^{M}I^{obj}_{ij}[\overset{\wedge}{C_{i}}\log(C_{ i})+(1-\overset{\wedge}{C_{i}})\log(1-C_{i})] \tag{4}\] \[-\lambda_{noobj}\sum_{i=0}^{K^{2}}\sum_{j=0}^{M}I_{ij}^{noobj}[\stackrel{{ \wedge}}{{C}}_{i}\log(C_{i})+(1-\stackrel{{\wedge}}{{C}}_{i})\log( 1-C_{i})] \tag{5}\] \[-\sum_{i=0}^{K^{2}}I_{ij}^{obj}\sum_{c\in class}[\stackrel{{ \wedge}}{{p}}_{i}(c)\log(p_{i}(c))+(1-\stackrel{{ \wedge}}{{p}}_{i}(c))\log(1-p_{i}(c))] \tag{6}\] In this paragraph, \(K^{2}\) represents the total number of grids into which the input image is divided, and \(M\) represents the number of candidate boxes generated by each grid. Each candidate box will produce a corresponding bounding box after passing through the network, and the total number of bounding boxes obtained will be \(K^{2}\times M\). \(\lambda_{coord}\) and \(\lambda_{noobj}\) are hyperparameters that represent the weights of the coordinate error and class error, respectively. \(I_{ij}^{noobj}\) represents whether the \(j\) th anchor box of the \(i\) th grid is responsible for the object object. If it is responsible, then \(I_{ij}^{noobj}\) is 1; otherwise, it is 0. The responsibility is specifically defined as follows: among all anchor boxes of the \(j\) th anchor box in the \(i\) th grid, if the IoU (Intersection over Union) with the ground truth of the object is the highest, then the anchor box is responsible for predicting the object. 
Similarly, \(I_{ij}^{noobj}\) represents that the \(j\) th anchor box of the \(i\) th grid is not responsible for the object object. In YOLOv3, the Smooth L1 Loss function is responsible for the position loss, which overcomes the drawbacks of both MAE and MSE. However, treating (x, y, w, h) as independent variables separates their relative relationship and ignores their holistic nature. Therefore, this article introduces CIoU as the algorithm's bounding box regression loss. CIoU is an improvement on DIoU loss, which not only considers the overlap between two bounding boxes and the distance between their centers but also adds an adjustment parameter. Based on this, the aspect ratio of the bounding box is also considered, which is more favorable for detecting objects with large differences in shape and size, such as flames and smoke. The formula for the CIoU loss function introduced in this article is shown below: \[L_{box}=L_{CIoU}=1-IoU+\frac{\rho^{2}(b,b^{gt})}{c^{2}}+\alpha v \tag{7}\] Among them, \(b\) represents the center point of the predicted bounding box, \(\rho\) represents the center point of the ground truth bounding box, \(\rho\) denotes the Euclidean distance between the center points of the two bounding boxes, \(c\) is the diagonal distance of the smallest rectangle that can simultaneously cover the two bounding boxes. \(v\) is responsible for measuring the similarity of aspect ratios, \(\alpha\) is the weight coefficient, and the specific functional forms of both are shown in the formula. \[v=\frac{4}{\pi^{2}}(\arctan\frac{\omega^{gt}}{h^{gt}}-\arctan\frac{\omega}{h} )^{2} \tag{8}\] \[\alpha=\frac{v}{(1-IoU)+v} \tag{9}\] In the above equation, \(h^{gt}\) and \(\omega^{gt}\) represent the height and width of the ground truth bounding box, while \(h\) and \(\omega\) represent the height and width of the predicted bounding box, and \(IoU\) denotes the intersection over union between the two boxes. ## 3 Establishment of Fire and Smoke Dataset Deep learning models have a certain degree of dependency on datasets, and a high-quality and properly partitioned dataset can accelerate model convergence speed and improve model performance. Therefore, this article first investigated and summarized the existing publicly available fire and smoke datasets. Table 1 shows a comparison between the dataset created in this article and some of the existing publicly available fire datasets. The research results show that existing publicly available datasets have shortcomings such as small data volume, limited coverage of scenarios, imbalanced data, and lack of annotation information. Therefore, this article has created a large-scale flame and smoke dataset, MS-FS, with annotation information. The detailed process is as follows. (1) Data collection. A web crawler was built to search and download fire-related videos and images. Ultimately, 70 video clips of real fire scenes and thousands of fire images were collected, covering complex fire scenarios in daily life as much as possible, such as city roads, large buildings, supermarkets, and parking lot fires. \begin{table} \begin{tabular}{c c c c} \hline Dataset & \multirow{2}{*}{Content Description} & Object Classes & Pixel level Annotation \\ \hline \multirow{3}{*}{Bowfire dataset} & An image dataset including & & \\ & 119 fire images & Fire & False \\ & and 106 normal images. & & \\ \hline \multirow{3}{*}{Bilkent University} & A video dataset composed of 14 & & \\ & fire videos and 17 normal videos. 
& & \\ & The average duration of the videos & & \\ & is142s. & & \\ \hline \multirow{3}{*}{The University of Modena} & A video dataset consisting of 14 & & \\ & outdoor smoke videos.The & & \\ & average duration of the videos is & & \\ & 12.6s. & & \\ \hline \multirow{3}{*}{Keimyung University} & A video dataset including 16 & & \\ & indoor and outdoor fire videos. & & \\ & The average duration of the videos & & \\ & is 39s. & & \\ \hline \multirow{3}{*}{The proposed dataset} & A dataset composed of 11938 fire & & \\ & images with preprocessing, & & \\ \cline{1-1} & covering hundreds of real-time & & \\ \cline{1-1} & fire scenarios. & & \\ \hline \end{tabular} \end{table} Table 1: Comparison with other fire dataset. (2) Video frame splitting. The collected video clips were processed by frame splitting, dividing each second of video into 25 frames. To avoid excessive similarity between consecutive images, at least one image was extracted as valid data every 3 frames. (3) Data filtering. In order to ensure the validity of the data, the data was screened according to the following criteria. First, according to the principle of object proportion, filter out the data whose area of flame and smoke in each scene accounts for less than 30\(\%\); secondly, according to the principle of image signal-to-noise ratio, filter out the data whose signal-to-noise ratio is less than or equal to 35 dB; finally, according to theRGB color model, filter out the data whose background RGB value conforms to the following formula: \[R>R_{avg} \tag{10}\] \[G>G_{avg} \tag{11}\] \[R>G>\mathrm{B} \tag{12}\] (4) Object annotation. Correct category labeling of images is extremely important for the dataset. The selected data was annotated one by one using the image annotation tool, LabelImg. The annotated objects were divided into two categories: flames and smoke. The data format for annotation was the XML file format of the standard dataset, Pascal VOC. This file contained annotation information for the original image, including the object position, object category, and bounding box position information. MS-FS contains 11,938 images covering hundreds of real fire scenes, including various backgrounds from simple to complex, environments from indoors to outdoors, lighting from day to night, and heights from ground level to high-rise buildings. MS-FS effectively solves the problems of small data volume, limited coverage of scenarios, imbalanced data, and lack of annotation information in existing publicly available datasets. MS-FS is divided into three categories, which are scenes containing only flames, scenes containing only smoke, and scenes containing both flames and smoke. The annotation examples for these three types of data are shown in Figure 4, where the purple and yellow boxes represent the annotations for smoke and flames, respectively. ## 4 Experiment and Result ### Dataset Description In order to validate that the proposed FSDNet algorithm can still achieve excellent performance in different scenarios and to compare it more intuitively with other fire detection algorithms, this study Figure 4: Labeled examples of the dataset proposed in this paper. selected DS1[34] and DS2[35] as the test set and some of the training set for the FSDNet classifier, and chose the dataset MS-FS proposed in this paper as the training and test set for the FSDNet detector. DS1 is a fire video dataset consisting of 31 video clips captured by cameras in different scenarios, with a total duration of 74 minutes (4450 seconds). 
Among the 31 video clips, the first 14 contain frames with fires, while the remaining 17 do not and were captured in normal environments. This dataset imposes a high demand on the model's performance, as it contains scenes with heavy smoke, fire-colored objects, and fire objects at different distances. DS2 consists of 119 fire images and 107 non-fire images, making it a highly challenging dataset that is commonly used to test the detection accuracy of fire detection methods. MS-FS is a self-prepared fire dataset in this paper. ### Experimental Results #### 4.2.1 Classification results of FSDNet This paper uses three commonly used evaluation metrics to assess the performance of the FSDNet classifier and compare it with other methods: false positive rate (FP Rate), false negative rate (FN Rate), and accuracy. A lower FP Rate and FN Rate and a higher accuracy indicate better classifier performance. In fire detection tasks, FP Rate represents the fire false alarm rate, while FN Rate represents the fire miss rate. In the experiments based on DS1, this study compared several relevant methods, including those based on handcrafted feature extraction and deep learning, and the comparison results are shown in Table 2. (1) The false positive rate (FPR) of fire alarms is discussed in this article. The FSDNet algorithm proposed in this study has the lowest FPR of only 0.07\(\%\), which is 5.81\(\%\) to 41.11\(\%\) lower than other methods. This suggests that the FSDNet algorithm proposed in this study is the most reliable and robust. The second-lowest FPR is Habiboglu's method, with a rate of 5.88\(\%\). However, this method has a higher false negative rate (FNR) of 14.29\(\%\), which is the highest value among all the methods in Table 2. (2) The false negative rate (FNR) was studied and the methods proposed by Lascio, Foggia, and Celik showed the best performance, achieving zero FNR on the DS1 test set. However, these three algorithms had high false positive rates of 6.67\(\%\), 11.76\(\%\), and 29.41\(\%\) respectively. On the other hand, FSDNet exhibited some false positives but still maintained a relatively low FNR of 0.17\(\%\), making it a superior performing algorithm. (3) The accuracy of fire detection was studied, and FSDNet achieved the highest accuracy among all methods with a score of 99.82\(\%\), approaching 100\(\%\), and obtaining outstanding results. Its accuracy was 5.32\(\%\) higher than the second-ranked algorithm proposed by Muhammad. In summary, among the three performance metrics compared, the FSDNet algorithm proposed in this paper showed the highest accuracy (99.82\(\%\)), the lowest false positive rate (0.07\(\%\)), and excellent false negative rate (0.17\(\%\)). Its overall performance in fire detection was the best among \begin{table} \begin{tabular}{c c c c} \hline \hline Method & FP Rate & FN Rate & Accuracy \\ \hline Rafiee et al. (RGB)[36] & 41.18 & 7.14 & 74.20 \\ Celik et al. & 29.41 & **0** & 83.87 \\ Chen et al. & 11.76 & 14.29 & 87.10 \\ Habiboglu et al. & 5.88 & 14.29 & 90.32 \\ Lascio et al.[37] & 6.67 & **0** & 92.59 \\ Foggia et al. & 11.76 & **0** & 93.55 \\ Muhammad et al.[38] & 8.87 & 2.12 & 94.50 \\ FSDNet & **0.07** & 0.17 & **99.82** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with other fire detection methods on DS1. all methods in Table 2. The fire classification results of FSDNet based on DS1 are shown in Fig. 5. From the fire results in Fig. 
5, it can be seen that: (1) Even in the face of conventional fire scenes with clearly visible flame objects such as (a)-(d), FSDNet can achieve a detection accuracy of up to 99\(\%\) even if the shape of the flame object varies significantly. This means that FSDNet can easily handle routine fire detection tasks. (2) When faced with a single fire source scene with a small object area such as the similar scenes (e)-(h), although the proportion of the fire source area gradually decreases from the scene (e)-(h), the The detection results of FSDNet are always kept at a reliable level (\(>\)90\(\%\)). Even if Figure 5: Fire classification results On DS1 of FSDNet. (a)-(d)Conventional fire scenarios with obvious flames. (e)-(h)Single fire source scenario with a relatively small object. (i)-(l)Scenarios containing interferences items. the flame object is relatively small, such as scene (h), the algorithm also guarantees an accuracy of 74\(\%\), which shows that FSDNet can be effectively applied to the task of early fire detection. By detecting a small area of flame in the early stage of fire, the fire spread can be curbed, reduce the loss caused by fire. (3) In the face of non-fire scenarios with certain interference items such as similar scenarios (i)-(l), the detection accuracy rate of FSDNet also remains above 90\(\%\), which is also the embodiment of the low false alarm rate in Table 2. Among them, scene (i) is moving smoke, scene (j) is an indoor light source, and scenes (k) and (l) are both outdoor light sources at night. This shows that FSDNet has good anti-jamming performance. From the above three sets of fire detection results, it can be concluded that FSDNet maintains a high detection accuracy when faced with conventional fire scenes, fire scenes with a relatively small flame area, and non-fire scenes with some interference. The algorithm demonstrates strong resistance to interference and shows potential for application in early fire detection tasks. The performance of the FSDNet algorithm is also compared with five existing high-performance, lightweight object detection algorithms MobileNet, ShuffleNet, ShuffleNetV2, MobileNetV2 and SqueezeNet on the DS2 dataset. The results are shown in Table 3. \begin{table} \begin{tabular}{c c c c} \hline \hline Method & FP Rate & FN Rate & Accuracy \\ \hline MobileNet & 23.36 & 12.61 & 82.30 \\ ShuffleNet & 21.50 & 10.92 & 84.07 \\ ShuffleNetV2 & 16.82 & 9.24 & 87.17 \\ MobileNetV2 & 15.09 & 9.32 & 87.95 \\ SqueezeNet & 14.15 & **6.72** & 89.78 \\ FSDNet & **10.28** & 7.56 & **91.15** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with other lightweight networks on DS2. From Table 3, it can be concluded that FSDNet has the lowest false positive rate (FPR) among all methods, which is 10.28\(\%\); SqueezeNet shows the best false negative rate (FNR) at 6.72\(\%\), but its false positive rate is as high as 14.15\(\%\), which is 3.87\(\%\) higher than that of FSDNet. In terms of accuracy, FSDNet still performs the best with an accuracy of 91.15\(\%\). Overall, FSDNet achieves the best combination of the three performance indicators, showing the lowest false positive rate and the highest accuracy. Considering that the DS2 dataset covers a wide range of scenes and has high challenges in terms of interference, the excellent performance of FSDNet on this test set can to some extent reflect its generalization ability in complex real-world situations. An example of the detection results of FSDNet on the DS2 test set is shown in Fig. 6. 
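Before turning to the qualitative examples in Fig. 6, it may help to make the three classifier metrics used in Tables 2 and 3 concrete. The following minimal sketch computes FP Rate, FN Rate, and accuracy from raw prediction counts; the counts in the example call are made up for illustration and are not taken from the paper.

```python
# Minimal sketch: the three classifier metrics reported in Tables 2 and 3,
# computed from raw counts of a binary fire / non-fire classifier.
def classifier_metrics(tp, fp, tn, fn):
    fp_rate = fp / (fp + tn)                    # false alarm rate on non-fire samples
    fn_rate = fn / (fn + tp)                    # miss rate on fire samples
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return fp_rate, fn_rate, accuracy

# Example with made-up counts (not the paper's data):
print(classifier_metrics(tp=14, fp=1, tn=16, fn=0))
```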
From the fire detection results in Figure 6, it can be seen that: (1) Common fire scenarios. In scenarios such as (a)-(d), the algorithm in this paper shows excellent performance, with accuracies of 100\(\%\) and 99.65\(\%\). (2) Complex fire scenes. For example, in scene (e), the object area of the flame is small; in scene (f), the fire occurs in a building and has obvious flame color and shape characteristics; in scene (g), the object area of the flame is small and covered by smoke; in scenario (h), street lights appear as interference items alongside small fire sources at the same time, and the accuracy of FSDNet remains above 80\(\%\), which shows that FSDNet has the ability to handle fire detection tasks in complex environments. (3) Non-fire scenarios with strong interference items. For example, in scenarios (i)-(l), the interference items are sunset, dim indoor light, morning glow and night light. FSDNet still guarantees an accuracy rate higher than 85\(\%\) and has strong anti-interference ability. Figure 6: Fire classification results on DS2 of FSDNet. (a)-(d) Common fire scenarios occurring in daily life. (e)-(h) Fire scenarios that are more complex and challenging. (i)-(l) Normal scenarios with extremely disturbing objects. In general, these results show that FSDNet maintains high accuracy, a low false alarm rate and a low miss rate in the face of common fire scenes, complex fire scenes and non-fire scenes with strong interference items, and is therefore well suited to the task of fire detection. #### 4.2.2 Detection results of FSDNet To verify the effectiveness of the FSDNet detector, FSDNet was compared with other deep learning algorithms, including the original YOLOv3, YOLOv3-dense, and the high-performance YOLOv4. All experiments in this section were based on the MS-FS dataset proposed in this paper, and the Pascal VOC mean average precision (mAP) calculation method was used as the evaluation metric to assess the performance of each method. The experimental results are shown in Table 4. It can be seen from Table 4 that the mAP of the FSDNet algorithm on the DS3 dataset is higher than that of other algorithms, reaching 86.8\(\%\), which is 17\(\%\), 8.5\(\%\) and 1.2\(\%\) higher than that of YOLOv3-dense, YOLOv3 and YOLOv4, respectively. In particular, the detection accuracy of FSDNet for flame category objects is significantly improved, being 24.9\(\%\) and 15.9\(\%\) higher than YOLOv3-dense and YOLOv3. Even compared to the high-performance YOLOv4, the improvement is as high as 8.1\(\%\), which means that the FSDNet algorithm has better fire detection performance and can detect flame objects more accurately. However, objects in fire detection tasks are unusual in that they undergo dynamic changes in morphology and color features over time, leading to corresponding changes in the overlap between the predicted object box and the true label box. For example, the true label box and the detected bounding box may only partially overlap, but this does not have a significant impact on the detection performance; there may also be multiple detected bounding boxes while there is only one true label box. Both of these points can greatly affect the IoU between the predicted bounding box and the true label box, which in turn affects the algorithm's mAP.
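For concreteness, the IoU quantity discussed here, together with the CIoU loss of Eqs. (7)-(9) that builds on it, can be evaluated for a pair of axis-aligned boxes as in the following minimal sketch. The (x1, y1, x2, y2) box format and the sample coordinates are assumptions made for illustration; this is not the paper's training code.

```python
import math

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def ciou_loss(pred, gt):
    """CIoU loss of Eq. (7): 1 - IoU + rho^2 / c^2 + alpha * v."""
    i = iou(pred, gt)
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2            # squared center distance
    cx1, cy1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    cx2, cy2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2              # squared enclosing-box diagonal
    pw, ph = pred[2] - pred[0], pred[3] - pred[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    v = 4 / math.pi ** 2 * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2  # Eq. (8)
    alpha = v / ((1 - i) + v) if (1 - i) + v > 0 else 0.0                  # Eq. (9)
    return 1 - i + rho2 / c2 + alpha * v

print(ciou_loss((10, 10, 50, 60), (12, 8, 48, 58)))
```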
However, in the initial stage of fire detection, it is usually expected to determine whether there is a fire as quickly as possible, and in this stage, the relationship between the object box and the true box can be ignored, and the AP value is not calculated. Therefore, in this paper, the detection rate is used as an indicator to further verify the algorithm's performance by focusing on detecting the presence of fire. The specific detection results of FSDNet are presented to further demonstrate the reliability of the FSDNet detector. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Backbone & Fire & Smoke & mAP \\ \hline YOLOv3-dense & Densenet201 & 0.685 & 0.711 & 0.698 \\ YOLOv3 & Darknet53 & 0.775 & 0.791 & 0.783 \\ YOLOv4 & CSPResNext50 & 0.853 & **0.858** & 0.856 \\ FSDNet & Feature Extraction Module & **0.934** & 0.801 & **0.868** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with other fire detection methods on DS3. Figure 7: Fire and smoke detection results on DS3 of FSDNet. (a)(b) Fires that often occur in cities. (c)(d) Fires that often occur in nature. (e)(f) Fires that often occur in indoor scenarios. The fire and smoke detection results of the FSDNet algorithm based on the MS-FS dataset are shown in Fig. 7, which further demonstrates the good performance of the FSDNet detector. (a)-(f) are some common fire scenarios in daily life, including both outdoor and indoor scenes, and all of them have certain detection difficulties. In scenes (b) and (f), the object shapes are varied and irregular; in scenes (c) and (e), there are significant differences in the proportion of the objects; in scene (d), there are many objects with significant differences in proportions. Nevertheless, FSDNet exhibits excellent performance in the specific categories, bounding box positions, and accuracy of detection. In order to more intuitively observe the fire detection performance of the FSDNet algorithm in complex scenes, this section compares the detection results of the FSDNet algorithm and the YOLOv3 algorithm on selected images. The specific results are shown in Fig. 8. Fig. 8 shows the comparison of the detection results of the FSDNet algorithm and the YOLOv3 algorithm on the same data. The left side is the detection result of the YOLOv3 algorithm, and the right side is the detection result of the FSDNet algorithm. There are 4 sets of scene comparisons. For scene (a), the FSDNet algorithm detects more flame objects, and for the same flame objects, the predicted probability value is higher; for scene (b), when the same number of flame objects are detected, the FSDNet algorithm shows more precise positioning accuracy and a better classification effect; for scenes (c) and (d), the FSDNet algorithm not only shows higher predicted probability values on the same objects, but also detects smaller flame objects. It can be clearly seen that compared with the YOLOv3 algorithm, the FSDNet algorithm can detect more and smaller flame objects in various scenarios, which is of great significance for fire detection and early fire monitoring in complex scenarios. ## 5 Conclusions In this article, a fire detection network called FSDNet is proposed to improve detection accuracy while maintaining real-time performance.
In the FSDNet network, a basic feature extraction module consisting solely of small convolutional kernels is further proposed, and a dense connection structure of convolutional layers is applied to strengthen feature propagation while reducing the number of parameters, aimed at enhancing the feature extraction ability of the backbone network and alleviating the problem of gradient disappearance. In addition, a spatial pyramid pooling structure and a Mosaic data augmentation method are introduced to mitigate the accuracy loss caused by the shape differences of flame objects and the imbalance of object proportions in the dataset, respectively. Figure 8: Fire and smoke detection results comparison between YOLOv3 and FSDNet under the same scenarios. The first and the second columns are the results of YOLOv3 and FSDNet, respectively. A new loss function is adopted to accelerate model training. Furthermore, a large-scale fire dataset covering diverse scenes, MS-FS, is independently constructed to provide strong data support. Experimental results on the DS1, DS2, and MS-FS datasets demonstrate that FSDNet achieves higher classification accuracy than previous methods, and also outperforms YOLOv3 and the high-performance YOLOv4 in detection accuracy, enabling precise and real-time fire detection. Overall, FSDNet proposed in this article is expected to have a promising application prospect in community fire monitoring and prevention. ### Acknowledgments My mentor, Associate Professor Julie
2310.14301
An overview of text-to-speech systems and media applications
Producing synthetic voice, similar to human-like sound, is an emerging novelty of modern interactive media systems. Text-To-Speech (TTS) systems try to generate synthetic and authentic voices via text input. Besides, well-known and familiar dubbing, announcing and narrating voices, as valuable possessions of any media organization, can be kept forever by utilizing TTS and Voice Conversion (VC) algorithms. The emergence of deep learning approaches has made such TTS systems more accurate and accessible. To understand TTS systems better, this paper investigates the key components of such systems including text analysis, acoustic modelling and vocoding. The paper then provides details of important state-of-the-art TTS systems based on deep learning. Finally, a comparison is made between recently released systems in terms of backbone architecture, type of input and conversion, vocoder used and subjective assessment (MOS). Accordingly, Tacotron 2, Transformer TTS, WaveNet and FastSpeech 1 are among the most successful TTS systems ever released. In the discussion section, some suggestions are made to develop a TTS system with regard to the intended application.
Mohammad Reza Hasanabadi
2023-10-22T13:52:06Z
http://arxiv.org/abs/2310.14301v1
# An Overview of Text-to-Speech Systems and Media Applications ###### Abstract Producing synthetic voice, similar to human-like sound, is an emerging novelty of modern interactive media systems. Text-To-Speech (TTS) systems try to generate synthetic and authentic voices via text input. Besides, well-known and familiar dubbing, announcing and narrating voices, as valuable possessions of any media organization, can be kept forever by utilizing TTS and Voice Conversion (VC) algorithms. The emergence of deep learning approaches has made such TTS systems more accurate and accessible. To understand TTS systems better, this paper investigates the key components of such systems including text analysis, acoustic modelling and vocoding. The paper then provides details of important state-of-the-art TTS systems based on deep learning. Finally, a comparison is made between recently released systems in terms of backbone architecture, type of input and conversion, vocoder used and subjective assessment (MOS). Accordingly, Tacotron 2, Transformer TTS, WaveNet and FastSpeech 1 are among the most successful TTS systems ever released. In the discussion section, some suggestions are made to develop a TTS system with regard to the intended application. ## 1 Introduction Text-to-speech (TTS) is an essential component of many speech-enabled applications such as navigation systems, and accessibility for the visually impaired [1]. Typical modern text-to-speech systems are complex to design [2]. These systems usually need a text frontend to extract linguistic features, a model to predict acoustic features and a signal-processing-based vocoder to reconstruct the final waveform [2], as shown in _Figure 1_. Each of these blocks requires expert design and needs to be trained independently [2]. Neural network based TTS has achieved higher quality compared to conventional concatenative and statistical parametric approaches [3]. Concatenative synthesis mostly relies on concatenation of separated pieces of speech stored in a database [4]. Concatenative synthesis can generate highly intelligible speech close to the original voice, but it requires a huge amount of data to match all possible combinations of speech units; on the other hand, such systems generate voices with lower naturalness [4] due to the limited flexibility in concatenating emotional and prosodic features. To address such drawbacks of concatenative systems, statistical parametric speech synthesis (SPSS) was proposed. In fact, instead of directly producing an output waveform, acoustic features are first extracted, and then the final waveform is reconstructed from such predicted parameters [4]. The advantages of SPSS are the output naturalness together with the flexibility to modify the parameters and therefore create audio with different prosody. However, the generated speech still suffers from low intelligibility and robotic-like artifacts [4]. Some early neural network models follow the same architecture as SPSS but replace each block with its corresponding neural network, like Deep Voice 1/2 [1, 5]. Later, one type of end-to-end (E2E) model, such as Tacotron 1/2 [2, 6], Deep Voice 3 [7] and FastSpeech 1/2 [3, 8], takes character/phoneme sequences directly as input, produces Mel-spectrograms as acoustic features and finally utilises well-known vocoders to generate the final waveform. Other types are fully E2E models developed to directly generate the waveform from input character/phoneme sequences, like FastSpeech 2s [8], ClariNet [9] and EATS [10]. ## 2 TTS and Media TTS can also be used in different media applications and can help develop media products cheaper, faster and more easily. Automatic news generation by the voice of a special announcer at regular times obviates the need for costly human voice recording. Moreover, a high-quality TTS can play a narrator's role in filmmaking and eases the reproduction of documentaries and middle programme clips.
China Central Television [11], Japan Broadcasting Corporation (NHK) [12] and British Broadcasting Corporation (BBC) R&D [13] have recently investigated TTS for an AI anchor (announcer). ## 3 Key Components of a Text-to-Speech System Figure 1: General Structure of TTS systems [4] ### Text analysis Text analysis, also called the frontend in TTS systems, transforms input text into linguistic features containing rich information about pronunciation and prosody [4]. Char2Wav and Deep Voice 1/2 incorporate the character-to-linguistic feature conversion purely based on neural networks [4]. The prosody information, such as rhythm, stress, and intonation of speech, corresponds to variations in syllable duration, loudness and pitch [4]. A manually collected grapheme-to-phoneme lexicon is usually leveraged for G2P conversion. ### Acoustic models Acoustic features are generated from linguistic features or directly from phonemes and characters. Some important acoustic features are Mel-Cepstral Coefficients (MCC), Mel-generalised coefficients (MGC) [355], band aperiodicity (BAP), fundamental frequency (F0), Voiced/Unvoiced (V/UV), Bark-Frequency Cepstral Coefficients (BFCC), and the most widely used Mel-spectrograms. In contrast to traditional acoustic models, neural-based TTS systems mostly take a character or phoneme sequence as input and generate high-level Mel-spectrograms as output [4]. An important advantage of neural end-to-end acoustic models over conventional ones is how alignment is done. In fact, conventional acoustic models require alignment between linguistic and acoustic features, while sequence-to-sequence based neural models implicitly learn the alignment through attention, or predict the duration jointly [4]. ### Vocoder Generally speaking, vocoders are classified into traditional SPSS vocoders and neural-based ones. STRAIGHT [14] and WORLD [15] are popular SPSS vocoders. Traditional vocoders generally take acoustic features as input, while neural-based vocoders can take both acoustic and linguistic features. For example, WaveNet [16], WaveRNN [17] and Char2Wav take linguistic features, and LPCNet [18], WaveGlow [19] and MelGAN [20] take acoustic features. ## 4 Why End-to-End TTS systems? An end-to-end TTS means training the model merely on <text, audio> pairs. Such systems make the entire structure easier by removing the need to design each block independently. Another advantage of such systems is the flexibility to condition on different types of input. For example, one can replace <text, audio> with <specific feature, audio>, such as <MFCC, audio>, <Mel-spectrogram, audio>, etc. Moreover, such systems are more robust compared with multi-stage ones. Therefore, these advantages imply that an end-to-end system could allow us to utilise more of the huge amount of real-world data [2]. It is worth noting the slight difference between an integrated end-to-end system and a vanilla end-to-end one; the vanilla end-to-end structure is a compound of separate blocks trained independently and connected together to form the end-to-end system, like Deep Voice [1], while in integrated synthesisers, there is just one structure trained from scratch with random initialisation, like Tacotron [2]. ## 5 Challenges to End-to-End TTS systems In TTS systems, a given text input can lead to different pronunciation or speaking style outputs. This makes end-to-end TTS more difficult than other simple end-to-end tasks.
Moreover, unlike automatic speech recognition (ASR) and Neural Machine Translation (MMT), TTS outputs are continuous and usually have much longer sequences. Each of these properties can cause errors which accumulate quickly [2]. Figure 2: shows different types of TTS system. Type 1 is based on traditional structure including text analysis, acoustic model and vocoder (SPSS). Type 2 and type 4 combines text analysis and acoustic model and directly predicts general acoustic features or Mel-Spectrogram respectively using input characters. Tacotron 2, DeepVoice 3 and FastSpeech 1/2 are such examples. Type 3 TTS combines acoustic models and final vocoder, which directly transforms linguistic features into waveform (WaveNet). Finally, fully E2E models combine all three blocks into one and directly converts characters into waveform (Wave-Tacotron, Char2Wav, ClarNet). ## 6 Detail of some important state-of-the-art TTS ### Wavenet Inspired by generative autoregressive models in image [21, WaveNet (Oord, 2016 #78] addresses the same issue for generating wideband raw audio waveforms with at least 16,000 samples per second. WaveNet adopts dilated causal convolutions to deal with long-range temporal dependencies [16]. This architecture, when conditioned on a specific speaker, could apply to applications such as voice conversion, text-to-speech etc. The main ingredient of WaveNet is causal convolution [16]. In dilated convolutions, a fixed-sized filter covers larger area by skipping input values by certain steps. Dilated convolution allows the network to sweep more time-steps and therefore, modelling more temporal resolution. A stack of dilated casual convolutional layers is illustrated in the _Figure 3_. ### Deep voice 1 Deep voice is based on traditional text-to-speech pipelines and adopts the same structure but replaces each component with its corresponding neural network [1, 2]. According to Deep Voice, TTS systems consist of five major building blocks [1] : * _The grapheme-to-phoneme model_ Graphene-to-phoneme (G2P) conversion is the process of generating pronunciation for words based on their written form. It has a highly essential role for natural language processing, text-to-speech synthesis and automatic speech recognition systems [22]. Deep Voice G2P model is based on an encoder-decoder architecture [1]. _Figure 4_ illustrates the bi-directional LSTM[23]. * _The segmentation model_ Given an audio file and phoneme-by-phoneme transcription of the audio, the segmentation module finds the start and time point of each phoneme. Adopting a state-of-the art speech recognition system could help to segment and label the given phoneme sequence and corresponding audio utterance [1]. This block adopts a CTC-based RNN. * _The phoneme duration model_ Predicting the temporal duration of each phoneme in an utterance. The model architecture used here is shared with fundamental frequency model and explained in the following part. * _The fundamental frequency model_ Having predicted voiced phonemes, this component predicts the fundamental frequency (f0) throughout its duration identified in the past section. 
Adopting a joint structure to predict phoneme duration and fundamental frequency, Deep Voice suggests an architecture comprising two fully connected layers with 256 units \begin{table} \begin{tabular}{l|c c} \hline \hline & \multicolumn{2}{c}{**Subjective 5-scale MOS in naturalness**} \\ \cline{2-3} **Speech samples** & North American English & Mandarin Chinese \\ \hline \hline LSTM-RNN parametric & 3.67 \(\pm\) 0.098 & 3.79 \(\pm\) 0.084 \\ HMM-driven concatenative & 3.86 \(\pm\) 0.137 & 3.47 \(\pm\) 0.108 \\ **WaveNet** (L+F) & **4.21**\(\pm\) 0.081 & **4.08**\(\pm\) 0.085 \\ \hline Natural (8-bit \(\mu\)-law) & 4.46 \(\pm\) 0.067 & 4.25 \(\pm\) 0.082 \\ Natural (16-bit linear PCM) & 4.55 \(\pm\) 0.075 & 4.21 \(\pm\) 0.071 \\ \hline \hline \end{tabular} \end{table} Table 1: subjective results of WaveNet [16] Figure 4: The bi-directional LSTM - “CAT” for forward and “TAC” for backward [1] Figure 3: structure of diluted convolution layer [16] each followed by two unidirectional recurrent layers with 128 GRU cells each and finally a fully connected output layer [1]. The final layer outputs three estimations for every input phoneme: the phoneme duration, the probability that the phoneme is voiced, and F0 sampled uniformly over the predicted duration [1]. * _The audio synthesis model_ Combining the outputs of grapheme-to-phoneme, duration, and fundamental frequency prediction models, this module synthesises the output raw waveform. Deep Voice utilises WaveNet [16] as its backbone of audio synthesis. In fact, Deep Voice has upgraded each block with the corresponding neural network based models. This model has not meaningfully progressed past the state-of-the-art systems in this regard. It concludes that the main barrier to progress towards natural TTS lies with duration and fundamental frequency prediction [1]. ### _Deep Voice 2_ This model keeps the general structure of the Deep Voice 1. Major differences with Deep Voice 1 are the separation of the phoneme duration and frequency models. The phoneme durations are predicted first and are then used as inputs to the frequency model. All models are trained separately. Each block is also augmented with low-dimensional speaker embedding. Using speaker-embedding technique in each block turns Deep Voice 2 into a multi-speaker TTS. Deep Voice 2 is a high quality text-to-speech systems and conclusively shows that neural speech synthesis models can learn effectively from small amounts of data spread among hundreds of different speakers [5]. ### _Deep Voice 3_ Deep voice 3 is fully convolutional character-to-spectrogram architecture TTS. It generates monotonic attention behaviour, avoiding error modes commonly affecting sequence-to-sequence models [7]. ### _Char2Wave_ Char2wave divides modelling text-to-speech models into two parts: a reader (frontend) and a neural vocoder (backend). The reader is an encoder-decoder architecture with attention that takes as input text or phonemes and transforms it into linguistic features. The vocoder accepts linguistic (acoustic) features and produces the corresponding audio [24]. In this definition, WaveNet [16] is considered as a neural backend [16]. Char2Wave adopts SampleRNN as hierarchical vocoder. Char2Wav integrates the frontend and backend and learn the entire model end-to-end and simultaneously [24]. The general structure of Char2Wav is illustrated in the _Figure 6_. 
However, Char2Wav needs to predict vocoder features first, and the SampleRNN vocoder used here needs to be separately pre-trained, whereas Tacotron directly predicts the output raw waveform [2]. Fig. 5: Speaker embedding at each block of Deep Voice 2: a) segmentation b) duration of frequency model [5] Fig. 6: Char2Wav: An end-to-end speech synthesis model [24] ### Tacotron Tacotron is an end-to-end generative text-to-speech model that directly synthesises speech from characters [2]. Tacotron is a sequence-to-sequence architecture producing a magnitude spectrogram from a sequence of characters, which replaces the traditional linguistic and acoustic feature pipeline with a single neural network [6]. Tacotron consists of three blocks: an encoder, an attention-based decoder and a post-processing net. Tacotron takes characters as input and produces spectrogram frames, which are then converted to waveforms using Griffin-Lim. The CBHG (1-D Convolution bank + Highway network + Bidirectional GRU) module of the encoder reduces overfitting [2]. _Figure 7_ illustrates the structure. **Table 2** shows the Mean Opinion Score (MOS) of Tacotron. ### Tacotron 2 Unlike Tacotron, which uses the Griffin-Lim algorithm to produce the final audio waveform, Tacotron 2 utilises WaveNet [16] as its vocoder to compensate for the typical artifacts made by Griffin-Lim. As shown in figure 5, Tacotron 2 consists of two general blocks: (1) a recurrent sequence-to-sequence feature prediction network with attention, which predicts frames of mel-spectrograms, and (2) a WaveNet vocoder to produce the final audio waveform [6]. Tacotron 2 significantly outperforms all other TTS systems, and results in an MOS comparable to that of the ground truth audio [6]. ### Transformer TTS network Transformer TTS network [25] is based on Tacotron 2 [6] and the Transformer network [26]. Inspired by the success of Transformer networks in Neural Machine Translation (NMT), this approach suggests the use of the Transformer combined with Tacotron 2 to predict mel-spectrograms, leveraging WaveNet to reconstruct the final waveform. Since WaveNet is used to reconstruct the final raw waveform, this model is autoregressive and still suffers from slow inference [25]. According to the results reported in the paper, this approach outperforms Tacotron 2 with a gap of 0.048 and is very close to human quality (4.39 vs 4.44 MOS) [25]. The structure of Transformer TTS is illustrated in _Figure 8_. ## 7 Discussion & Conclusion **Table 3** collates the state-of-the-art TTS systems. Accordingly, Tacotron 2, Transformer TTS, WaveNet, FastSpeech 1 and ClariNet are among the most successful TTS systems ever released. In the discussion section, some suggestions are made to develop a TTS system with regard to the intended application. Some TTS models are end-to-end and convert the input characters to high-level acoustic features such as Mel-spectrograms and adopt a neural vocoder such as WaveNet to generate the output waveform. Others are fully end-to-end, converting the input character/phoneme sequence directly to the output waveform; Char2Wav, ClariNet and FastSpeech 2s are such examples. Therefore, care must be taken to differentiate between E2E and fully E2E systems. It is worth noting that, typically, the first versions of TTS systems are trained on single-speaker datasets such as USpeech, Internal US English, and later versions are designed and trained on multi-speaker datasets such as VCTK. Embedding a speaker vector into different blocks of a TTS can customize it to a specific speaker.
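Since several of the comparisons above are reported as MOS values with a margin (e.g., 4.21 ± 0.081 in Table 1), the following minimal sketch shows one common way to aggregate raw 1-5 listener ratings into such a summary. The ratings and the normal-approximation 95% interval are illustrative assumptions; the cited papers do not specify their exact interval construction.

```python
import statistics

def mos_summary(ratings, z=1.96):
    """Mean opinion score with a normal-approximation confidence half-width."""
    mean = statistics.mean(ratings)
    half_width = z * statistics.stdev(ratings) / len(ratings) ** 0.5
    return mean, half_width

# Made-up listener scores on a 1-5 naturalness scale:
ratings = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
m, hw = mos_summary(ratings)
print(f"MOS = {m:.2f} +/- {hw:.3f}")
```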
2301.02222
Computing nonsurjective primes associated to Galois representations of genus $2$ curves
For a genus $2$ curve $C$ over $\mathbb{Q}$ whose Jacobian $A$ admits only trivial geometric endomorphisms, Serre's open image theorem for abelian surfaces asserts that there are only finitely many primes $\ell$ for which the Galois action on $\ell$-torsion points of $A$ is not maximal. Building on work of Dieulefait, we give a practical algorithm to compute this finite set. The key inputs are Mitchell's classification of maximal subgroups of $\mathrm{PSp_4}(\mathbb{F}_\ell)$, sampling of the characteristic polynomials of Frobenius, and the Khare--Wintenberger modularity theorem. The algorithm has been submitted for integration into Sage, executed on all of the genus~$2$ curves with trivial endomorphism ring in the LMFDB, and the results incorporated into the homepage of each such curve.
Barinder S. Banwait, Armand Brumer, Hyun Jong Kim, Zev Klagsbrun, Jacob Mayle, Padmavathi Srinivasan, Isabel Vogt
2023-01-05T18:47:17Z
http://arxiv.org/abs/2301.02222v2
# Computing nonsurjective primes associated to Galois representations of genus 2 curves ###### Abstract. For a genus 2 curve \(C\) over \(\mathbb{Q}\) whose Jacobian \(A\) admits only trivial geometric endomorphisms, Serre's open image theorem for abelian surfaces asserts that there are only finitely many primes \(\ell\) for which the Galois action on \(\ell\)-torsion points of \(A\) is not maximal. Building on work of Dieulefait, we give a practical algorithm to compute this finite set. The key inputs are Mitchell's classification of maximal subgroups of \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\), sampling of the characteristic polynomials of Frobenius, and the Khare-Wintenberger modularity theorem. The algorithm has been submitted for integration into Sage, executed on all of the genus 2 curves with trivial endomorphism ring in the LMFDB, and the results incorporated into the homepage of each such curve on a publicly-accessible branch of the LMFDB. 2010 Mathematics Subject Classification: 11F80 (primary), 11G10, 11Y16 (secondary). ## 1. Introduction Let \(C/\mathbb{Q}\) be a smooth, projective, geometrically integral curve (referred to hereafter as a nice curve) of genus 2, and let \(A\) be its Jacobian. We assume throughout that \(A\) admits no nontrivial geometric endomorphisms; that is, we assume that \(\operatorname{End}(A_{\overline{\mathbb{Q}}})=\mathbb{Z}\), and we refer to any abelian variety satisfying this property as typical1. We also say that a nice curve is typical if its Jacobian is typical. Let \(G_{\mathbb{Q}}:=\operatorname{Gal}\left(\overline{\mathbb{Q}}/\mathbb{Q}\right)\), let \(\ell\) be a prime, and let \(A[\ell]:=A(\overline{\mathbb{Q}})[\ell]\) denote the \(\ell\)-torsion points of \(A(\overline{\mathbb{Q}})\). Let Footnote 1: Abelian varieties with extra endomorphisms define a thin set (in the sense of Serre) in \(\mathcal{A}_{g}\) and as such are not the typically arising case. \[\rho_{A,\ell}:\,G_{\mathbb{Q}}\to\operatorname{Aut}\left(A[\ell]\right)\] denote the Galois representation on \(A[\ell]\). By fixing a basis for \(A[\ell]\), and observing that \(A[\ell]\) admits a nondegenerate Galois-equivariant alternating bilinear form, namely the Weil pairing, we may identify the codomain of \(\rho_{A,\ell}\) with the general symplectic group \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\). In a letter to Vigneras [11, Corollaire au Theoreme 3], Serre proved an open image theorem for typical abelian varieties of dimensions 2 or 6, or of odd dimension, generalizing his celebrated open image theorem for elliptic curves [11]. More precisely, the set of nonsurjective primes \(\ell\) for which the representation \(\rho_{A,\ell}\) is not surjective -- i.e., the set of primes \(\ell\) for which \(\rho_{A,\ell}(G_{\mathbb{Q}})\) is contained in a proper subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) -- is finite. In the elliptic curve case, Serre subsequently provided a conditional upper bound in terms of the conductor of \(E\) on this finite set [11, Theoreme 22]; this bound has since been made unconditional [12, 13]. There are also algorithms to compute the finite set of nonsurjective primes [11], and practical implementations in Sage [10]. Serre's open image theorem for typical abelian surfaces was made explicit by Dieulefait [2] who described an algorithm that returns a finite set of primes _containing_ the set of nonsurjective primes. 
In a different direction Lombardo [15, Theorem 1.3] provided an upper bound on the nonsurjective primes involving the stable Faltings height of \(A\). In this paper we develop Algorithms 3.1 and 4.1, which together allow for the exact determination of the nonsurjective primes for \(C\), yielding our main result as follows. **Theorem 1.1**.: _Let \(C/\mathbb{Q}\) be a typical genus \(2\) curve whose Jacobian \(A\) has conductor \(N\)._ 1. _Algorithm_ 3.1 _produces a finite list_ \(\mathsf{PossiblyNonsurjectivePrimes}(C)\) _that provably contains all nonsurjective primes._ 2. _For a given bound_ \(B>0\)_, Algorithm_ 4.1 _produces a sublist_ \(\mathsf{LikelyNonsurjectivePrimes}(C;B)\) _of_ \(\mathsf{PossiblyNonsurjectivePrimes}(C)\) _that contains all the nonsurjective primes. If_ \(B\) _is sufficiently large, then the elements of_ \(\mathsf{LikelyNonsurjectivePrimes}(C;B)\) _are precisely the nonsurjective primes of_ \(A\)_._ The two common ingredients in Algorithms 3.1 and 4.1 are Mitchell's 1914 classification of maximal subgroups of \(\mathsf{PSp}_{4}(\mathbb{F}_{\ell})\)[Mit14] and sampling of characteristic polynomials of Frobenius elements. Indeed, \(\rho_{A,\ell}\) is nonsurjective precisely when its image is contained in one of the proper maximal subgroups of \(\mathsf{GSp}_{4}(\mathbb{F}_{\ell})\). The (integral) characteristic polynomial of Frobenius at a good prime \(p\) is computationally accessible since it is determined by counting points on \(C\) over \(\mathbb{F}_{p^{r}}\) for small \(r\). The reduction of this polynomial modulo \(\ell\) gives the characteristic polynomial of the action of the Frobenius element on \(A[\ell]\). By the Chebotarev density theorem, the images of the Frobenius elements for varying primes \(p\) equidistribute over the conjugacy classes of \(\rho_{A,\ell}(G_{\mathbb{Q}})\) and hence let us explore the image. Algorithm 3.1 makes use of the fact that if the image of \(\rho_{A,\ell}\) is nonsurjective, then the characteristic polynomials of Frobenius at auxiliary primes \(p\) will be constrained modulo \(\ell\). Using this idea, Dieulefait worked out the constraints imposed by each type of maximal subgroup for \(\rho_{A,\ell}(G_{\mathbb{Q}})\) to be contained in that subgroup. Our Algorithm 3.1 combines Dieulefait's conditions, with some modest improvements, to produce a finite list \(\mathsf{PossiblyNonsurjectivePrimes}(C)\). Algorithm 4.1 then weeds out the extraneous surjective primes from \(\mathsf{PossiblyNonsurjectivePrimes}(C)\). Equipped with the prime \(\ell\), the task here is try to generate enough different elements in the image to rule out containment in any proper maximal subgroup. The key input is a purely group-theoretic condition (Proposition 4.2) that guarantees that a subgroup is all of \(\mathsf{GSp}_{4}(\mathbb{F}_{\ell})\) if it contains particular types of elements. This algorithm is probabilistic and depends on the choice of a parameter \(B\) which, if sufficiently large, provably establishes nonsurjectivity. The parameter \(B\) is a cut-off for the number of Frobenius elements that we use to sample the conjugacy classes of \(\rho_{A,\ell}(G_{\mathbb{Q}})\). As an illustration of the interplay between theory and practice, analyzing the "worst case" run time of each step in Algorithm 3.1 yields a new _theoretical_ bound, conditional on the Generalized Riemann Hypothesis (GRH), on the product of all nonsurjective primes in terms of the conductor. 
**Theorem 1.2**.: _Let \(C/\mathbb{Q}\) be a typical genus \(2\) curve with conductor \(N\). Assuming the Generalized Riemann Hypothesis (GRH), we have, for any \(\epsilon>0\),_ \[\prod_{\ell\text{ nonsurjective}}\ell\ll\exp(N^{1/2+\epsilon}),\] _where the implied constant is absolute and effectively computable._ While we believe this bound to be far from asymptotically optimal, it is the first bound in the literature expressed in terms of the (effectively computable) conductor. Naturally one wants to find the sufficiently large value of \(B\) in Theorem 1.1(2), which the next result gives, conditional on GRH. **Theorem 1.3**.: _Let \(C/\mathbb{Q}\) be a typical genus \(2\) curve, \(B\) be a positive integer, and \(q\) be the largest prime in \(\mathsf{LikelyNonsurjectivePrimes}(C;B)\). Assuming GRH, the set \(\mathsf{LikelyNonsurjectivePrimes}(C;B)\) is precisely the set of nonsurjective primes of \(C\), provided that_ \[B\geq\left(4\left[(2q^{11}-1)\log\operatorname{rad}(2qN_{A})+22q^{11}\log(2q )\right]+5q^{11}+5\right)^{2}.\] The proof of Theorem 1.3 involves an explicit Chebotarev bound due to Bach and Sorenson [1] that is dependent on GRH. An unconditional version of Theorem 1.3 can be given using an unconditional Chebotarev result (for instance [13]), though the bound for \(B\) will be exponential in \(q\). In addition, if we assume both GRH and the Artin Holomorphy Conjecture (AHC), then a version of Theorem 1.3 holds with the improved asymptotic bound \(B\gg q^{11}\log^{2}(qN_{A})\), but without an explicit constant. Unfortunately, the bound from Theorem 1.3 is prohibitively large to use in practice. By way of illustration, consider the smallest (with respect to conductor) typical genus \(2\) curve, which has a model \[y^{2}+(x^{3}+1)y=x^{2}+x,\] and label 249.a.249.1 in the \(L\)-_functions and modular forms database_ (LMFDB) [10]. The output of Algorithm 3.1 is the set \(\{2,3,5,7,83\}\). Applying Algorithm 4.1 with \(B=100\) rules out the prime \(83\), suggesting that \(7\) is the largest nonsurjective prime. Subsequently applying Theorem 1.3 with \(q=7\) yields the value \(B=3.578\times 10^{23}\) for which \(\operatorname{\mathtt{LikelyNonsurjectivePrimes}}(C;B)\) coincides with the set of nonsurjective primes associated with \(C\). With this value of \(B\), our implementation of the algorithm was still running after \(24\) hours, after which we terminated it. Even if the version of Theorem 1.3 that relies on AHC could be made explicit, the value of \(q^{11}\log^{2}(qN_{A})\) in this example is on the order of \(10^{11}\), which would still be a daunting prospect. To execute the combined algorithm on all typical genus \(2\) curves in the LMFDB - which at the time of writing constitutes \(63\),\(107\) curves - we have decided to take a fixed value of \(B=1000\) in Algorithm 4.1. The combined algorithm then takes about \(4\) hours on MIT's Lovelace computer, a machine with \(2\) AMD EPYC \(7713\)\(2\)GHz processors, each with \(64\) cores, and a total of \(2\)TB of memory. The result of this computation of nonsurjective primes for these curves is available to view on the homepage of each curve in the LMFDB beta: [https://beta.lmfdb.org](https://beta.lmfdb.org) In addition, the combined algorithm has been run on a much larger set of \(1\),\(823\),\(592\) curves provided to us by Andrew Sutherland. See Section 6 for the results of this computation. 
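To make the sampling step concrete before describing it formally, the following minimal sketch (intended to be run inside a Sage session) computes the integral characteristic polynomials of Frobenius for the curve 249.a.249.1 discussed above at a few good primes and reduces them modulo the candidate prime \(\ell=7\). It only illustrates the data that Algorithms 3.1 and 4.1 consume and is not the implementation in the repository.

```python
# Run inside Sage.  Sample characteristic polynomials of Frobenius for the
# genus 2 curve 249.a.249.1 and reduce them modulo a candidate prime ell.
R = PolynomialRing(QQ, 'x'); x = R.gen()
C = HyperellipticCurve(x**2 + x, x**3 + 1)   # y^2 + (x^3 + 1)y = x^2 + x
N = 249                                      # conductor of the Jacobian
ell = 7                                      # candidate nonsurjective prime
for p in primes(5, 60):
    if N % p == 0 or p == ell:
        continue                             # skip bad primes and ell itself
    Pp = C.change_ring(GF(p)).frobenius_polynomial()
    print(p, Pp, Pp.change_ring(GF(ell)).factor())
```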
Algorithm 4.1 samples the characteristic polynomial of Frobenius \(P_{p}(t)\) for each prime \(p\) of good reduction for the curve up to a particular bound and applies Tests 4.4 and 4.5 to \(P_{p}(t)\). Assuming that \(\rho_{A,\ell}\) is surjective, we expect that the outcome of these tests should be independent for sufficiently large primes. More precisely, **Theorem 1.4**.: _Let \(C/\mathbb{Q}\) be a typical genus \(2\) curve with Jacobian \(A\) and suppose \(\ell\) is an odd prime such that \(\rho_{A,\ell}\) is surjective. There is an effective bound \(B_{0}\) such that for any \(B>B_{0}\), if we sample the characteristic polynomials of Frobenius \(P_{p}(t)\) for \(n\) primes \(p\in[B,2B]\) chosen uniformly and independently at random, the probability that none of these pass Tests 4.4 or 4.5 is less than \(3\cdot\left(\frac{9}{10}\right)^{n}\)._ _Remark 1_.: In fact, for each prime \(\ell\) satisfying the conditions of Theorem 1.4, there is an explicit constant \(c_{\ell}\leq\frac{9}{10}\) tending to \(\frac{3}{4}\) as \(\ell\to\infty\) which may be computed using Corollary 5.3 such that bound of \(3\cdot\left(\frac{9}{10}\right)^{n}\) in Theorem 1.4 can be replaced by \(3\cdot c_{\ell}^{n}\). The combined algorithm to probabilistically determine the nonsurjective primes of a nice genus \(2\) curve over \(\mathbb{Q}\) has been implemented in Sage [14], and it will appear in a future release of this software2. Until then, the implementation is available at the following repository: [https://github.com/ivogt/abeliansurfaces](https://github.com/ivogt/abeliansurfaces) Footnote 2: see [https://trac.sagemath.org/ticket/30837](https://trac.sagemath.org/ticket/30837) for the ticket tracking this integration. [https://github.com/ivogt/abeliansurfaces](https://github.com/ivogt/abeliansurfaces) The README.md file contains detailed instructions on its use. This repository also contains other scripts in both Sage and Magma [1] useful for verifying some of the results of this work; any filenames used in the sequel will refer to the above repository. ### Outline of this paper In Section 2, we begin by reviewing the properties of the characteristic polynomial of Frobenius with a view towards computational aspects. We also recall the classification of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\). In Section 3, we explain Algorithm 3.1 and establish Theorem 1.1(1); that is, for each of the maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) listed in Section 2.4, we generate a list of primes that provably contains all primes \(\ell\) for which the mod \(\ell\) image of Galois is contained in this maximal subgroup. Theorem 1.2 is also proved in this section (Subsection 3.3). In Section 4, we first prove a group-theoretic criterion (Proposition 4.2) for a subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) to equal \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\). Then, for each \(\ell\) in the finite list from Section 3, we ascertain whether the characteristic polynomials of the Frobenius elements sampled satisfy the group-theoretic criterion; Theorem 1.1(2) and Theorem 1.3 also follow from this study. In Section 5 we prove Theorem 1.4 concerning the probability of output error, assuming that Frobenius elements distribute in \(\rho_{A,\ell}(G_{\mathbb{Q}})\) as they would in a randomly chosen element of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\). 
Finally, in Section 6, we close with remarks concerning the execution of the algorithm on the large dataset of genus 2 curves mentioned above, and highlight some interesting examples that arose therein. ### Acknowledgements This work was started at a workshop held remotely 'at' the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI, in May 2020, and was supported by a grant from the Simons Foundation (546235) for the collaboration 'Arithmetic Geometry, Number Theory, and Computation'. It has also been supported by the National Science Foundation under Grant No. DMS-1929284 while the authors were in residence at ICERM during a Collaborate@ICERM project held in May 2022. We are grateful to Noam Elkies for providing interesting examples of genus 2 curves in the literature, Davide Lombardo for helpful discussions related to computing geometric endomorphism rings, and to Andrew Sutherland for providing a dataset of Hecke characteristic polynomials that were used for executing our algorithm on all typical genus 2 curves in the LMFDB, as well as making available the larger dataset of approximately 2 million curves that we ran our algorithm on. ## 2. Preliminaries ### Notation Let \(A\) be an abelian variety of dimension \(g\) defined over \(\mathbb{Q}\). By \(\operatorname{\mathsf{conductor}}\) we mean the \(\operatorname{\mathsf{Artin}}\operatorname{\mathsf{conductor}}\)\(N=N_{A}\) of \(A\). We write \(N_{\operatorname{sq}}\) for the largest integer such that \(N_{\operatorname{sq}}^{2}\,|\,N\). Let \(\ell\) be a prime. We write \(T_{\ell}A\) for the \(\ell\)-adic Tate module of \(A\): \[T_{\ell}A\simeq\varprojlim_{n}A[\ell^{n}].\] This is a free \(\mathbb{Z}_{\ell}\)-module of rank \(2g\). For each prime \(p\), we write \(\operatorname{Frob}_{p}\in\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) for an absolute Frobenius element associated to \(p\). By a \(\operatorname{\mathsf{good}}\)\(\operatorname{\mathsf{prime}}\)\(p\) for an abelian variety \(A\), we mean a prime \(p\) for which \(A\) has good reduction, or equivalently \(p\nmid N_{A}\). If \(p\) is a good prime for \(A\), then the trace \(a_{p}\) of the action of \(\operatorname{Frob}_{p}\) on \(T_{\ell}A\) is an integer. See Section 2.2 for a discussion of the characteristic polynomial of Frobenius. By a \(\operatorname{\mathsf{typical}}\) abelian variety \(A\), we mean an abelian variety with geometric endomorphism ring \(\mathbb{Z}\). A typical genus 2 curve is a nice curve whose Jacobian is a typical abelian surface. Let \(V\) be a 4-dimensional vector space over \(\mathbb{F}_{\ell}\) endowed with a nondegenerate skew-symmetric bilinear form \(\langle\cdot,\cdot\rangle\). A subspace \(W\subseteq V\) is called \(\operatorname{\mathsf{isotropic}}\) (for \(\langle\cdot,\cdot\rangle\)) if \(\langle w_{1},w_{2}\rangle=0\) for all \(w_{1},w_{2}\in W\). A subspace \(W\subseteq V\) is called \(\operatorname{\mathsf{nondegenerate}}\) (for \(\langle\cdot,\cdot\rangle\)) if \(\langle\cdot,\cdot\rangle\) restricts to a nondegenerate form on \(W\). 
The \(\operatorname{\mathsf{general}}\)\(\operatorname{\mathsf{symplectic}}\)\(\operatorname{\mathsf{group}}\) of \((V,\langle\cdot,\cdot\rangle)\) is defined as \[\operatorname{GSp}(V,\langle\cdot,\cdot\rangle)\coloneqq\{M\in\operatorname{ GL}(V):\exists\;\;\operatorname{mult}(M)\in\mathbb{F}_{\ell}^{\times}:(Mv,Mw)= \operatorname{mult}(M)\langle v,w\rangle\;\;\forall\;\;v,w\in V\}.\] The map \(M\mapsto\operatorname{mult}(M)\) is a surjective homomorphism from \(\operatorname{GSp}(V,\langle\cdot,\cdot\rangle)\) to \(\mathbb{F}_{\ell}^{\times}\) called the \(\operatorname{\mathsf{similitude}}\)\(\operatorname{\mathsf{character}}\); its kernel is the \(\operatorname{\mathsf{symplectic}}\)\(\operatorname{\mathsf{group}}\), denoted \(\operatorname{Sp}(V,\langle\cdot,\cdot\rangle)\). Usually the bilinear form is understood from the context, in which case one drops \(\langle\cdot,\cdot\rangle\) from the notation; moreover, for our purposes, we will have fixed a basis for \(V\), one in which the bilinear form is represented by the nonsingular skew-symmetric matrix \[J:=\begin{pmatrix}0&I_{2}\\ -I_{2}&0\end{pmatrix},\] where \(I_{2}\) is the \(2\times 2\) identity matrix. By a subquotient\(W\) of a Galois module \(U\), we mean a Galois module \(W\) that admits a surjection \(U^{\prime}\twoheadrightarrow W\) from a subrepresentation \(U^{\prime}\) of \(U\). Since we are chiefly concerned with computing the sets \(\mathsf{LikelyNonsurjectivePrimes}(C;B)\) and \(\mathsf{PossiblyNonsurjectivePrimes}(C)\) for a fixed curve \(C\), we will henceforth, for ease of notation, drop the \(C\) from the notation for these sets. ### Integral characteristic polynomial of Frobenius The theoretical result underlying the whole approach is the following. **Theorem 2.1** (Weil, see [13, Theorem 3]).: _Let \(A\) be an abelian variety of dimension \(g\) defined over \(\mathbb{Q}\) and let \(p\) be a prime of good reduction for \(A\). Then there exists a monic integral polynomial \(P_{p}(t)\in\mathbb{Z}[t]\) of degree \(2g\) with constant coefficient \(p^{g}\) such that for any \(\ell\neq p\), the polynomial \(P_{p}(t)\) modulo \(\ell\) is the characteristic polynomial of the action of \(\operatorname{Frob}_{p}\) on \(T_{\ell}A\). Furthermore, every root of \(P_{p}(t)\) has complex absolute value \(p^{1/2}\)._ The polynomials \(P_{p}(t)\) are computationally accessible by counting points on \(C\) over \(\mathbb{F}_{p^{r}}\)\(r=1,2\). See [14, Chapter 7] for more details. In fact, \(P_{p}(t)\) can be accessed via the frobenius_polynomial command in Sage. In particular, we denote the trace of Frobenius by \(a_{p}\). By the Grothendieck-Lefschetz trace formula, if \(A=\operatorname{Jac}X\), \(p\) is a prime of good reduction for \(X\), and \(\lambda_{1},\dots,\lambda_{2g}\) are the roots of \(P_{p}(t)\), then \[\#X(\mathbb{F}_{p^{r}})=p^{r}+1-\sum_{i=1}^{2g}\lambda_{i}^{r}.\] ### The Weil pairing and consequences on the characteristic polynomial of Frobenius The nondegenerate Weil pairing gives an isomorphism (of Galois modules): \[T_{\ell}A\simeq(T_{\ell}A)^{\vee}\otimes_{\mathbb{Z}_{\ell}}\mathbb{Z}_{\ell} (1). \tag{1}\] The Galois character acting on \(\mathbb{Z}_{\ell}(1)\) is the \(\ell\)-adic cyclotomic character, which we denote by \(\operatorname{cyc}_{\ell}\). The integral characteristic polynomial for the action of \(\operatorname{Frob}_{p}\) on \(\mathbb{Z}_{\ell}(1)\) is simply \(t-p\). 
The integral characteristic polynomial for the action of \(\operatorname{Frob}_{p}\) on \((T_{\ell}A)^{\vee}\) is the reversed polynomial \[P_{p}^{\vee}(t)=P_{p}(1/t)\cdot t^{2g}/p^{g}\] whose roots are the inverses of the roots of \(P_{p}(t)\). We now record a few easily verifiable consequences of the nondegeneracy of the Weil pairing when \(\dim(A)=2\). **Lemma 2.2**.: 1. _The roots of_ \(P_{p}(t)\) _come in pairs that multiply out to_ \(p\)_. In particular,_ \(P_{p}(t)\) _has no root with multiplicity_ \(3\)_._ 2. \(P_{p}(t)=t^{4}-a_{p}t^{3}+b_{p}t^{2}-pa_{p}t+p^{2}\) _for some_ \(a_{p},b_{p}\in\mathbb{Z}\)_._ 3. _If the trace of an element of_ \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) _is_ \(0\mod\ell\)_, then its characteristic polynomial is reducible modulo_ \(\ell\)_. In particular, this applies to_ \(P_{p}(t)\) _when_ \(a_{p}\equiv 0\pmod{\ell}\)_._ 4. _If_ \(A[\ell]\) _is a reducible_ \(G_{\mathbb{Q}}\)_-module, then_ \(P_{p}(t)\) _is reducible modulo_ \(\ell\)_._ Proof.: Parts (i) and (ii) are immediate from the fact that the non-degenerate Weil pairing allows us to pair up the four roots of \(P_{p}(t)\) into two pairs that each multiply out to \(p\). For part (iii), suppose that \(M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) has \(\operatorname{tr}(M)=0\). Then the characteristic polynomial \(P_{M}(t)\) of \(M\) is of the form \(t^{4}+bt^{2}+c^{2}\). When the discriminant of \(P_{M}\) is \(0\) modulo \(\ell\), the polynomial \(P_{M}\) has repeated roots and is hence reducible. So assume that the discriminant of \(P_{M}\) is nonzero modulo \(\ell\). When \(\ell\neq 2\), the result follows from [1, Theorem 1]. When \(\ell=2\), a direct computation shows that the characteristic polynomial of a trace \(0\) element of \(\operatorname{GSp}_{4}(\mathbb{F}_{2})\) is either \((t+1)^{4}\) or \((t^{2}+t+1)^{2}\), which are both reducible. Part (iv) is immediate from Theorem 2.1 since \(P_{p}(t)\mod\ell\) by definition is the characteristic polynomial for the action of \(\operatorname{Frob}p\) on \(A[\ell]\). ### Maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) Mitchell [13] classified the maximal subgroups of \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) in 1914. This can be used to deduce the following classification of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) with surjective similitude character. **Lemma 2.3** (Mitchell).: _Let \(V\) be a \(4\)-dimensional \(\mathbb{F}_{\ell}\)-vector space endowed with a nondegenerate skew-symmetric bilinear form \(\omega\). Then any proper subgroup \(G\) of \(\operatorname{GSp}(V,\omega)\) with surjective similitude character is contained in one of the following types of maximal subgroups._ 1. _Reducible maximal subgroups_ 1. _Stabilizer of a 1-dimensional isotropic subspace for_ \(\omega\)_._ 2. _Stabilizer of a 2-dimensional isotropic subspace for_ \(\omega\)_._ 2. _Irreducible subgroups governed by a quadratic character_ _Normalizer_ \(G_{\ell}\) _of the group_ \(M_{\ell}\) _that preserves each summand in a direct sum decomposition_ \(V_{1}\oplus V_{2}\) _of_ \(V\)_, where_ \(V_{1}\) _and_ \(V_{2}\) _are jointly defined over_ \(\mathbb{F}_{\ell}\) _and either:_ 1. _both nondegenerate for_ \(\omega\)_; or_ 2. _both isotropic for_ \(\omega\)_._ _Moreover,_ \(M_{\ell}\) _is an index_ \(2\) _subgroup of_ \(G_{\ell}\)_._ 3. 
_Stabilizer of a twisted cubic_ \(\operatorname{GL}(W)\) _acting on_ \(\operatorname{Sym}^{3}W\simeq V\)_, where_ \(W\) _is a_ \(2\)_-dimensional_ \(\mathbb{F}_{\ell}\)_-vector space._ 4. _Exceptional subgroups See Table A for explicit generators for the groups described below._ 1. _When_ \(\ell\equiv\pm 3\pmod{8}\)_: a group whose image_ \(G_{1920}\) _in_ \(\operatorname{PSp}(V,\omega)\) _has order_ \(1920\)_._ 2. _When_ \(\ell\equiv\pm 5\pmod{12}\) _and_ \(\ell\neq 7\)_: a group whose image_ \(G_{720}\) _in_ \(\operatorname{PSp}(V,\omega)\) _has order_ \(720\)_._ 3. _When_ \(\ell=7\)_: a group whose image_ \(G_{5040}\) _in_ \(\operatorname{PGSp}(V,\omega)\) _has order_ \(5040\)_._ _Remark 2_.: We have chosen to label the maximal subgroups in the classification using invariant subspaces for the symplectic pairing \(\omega\) on \(V\), following the more modern account due to Aschbacher (see [17, Section 3.1]; for a more comprehensive treatment see [10]). For the convenience of the reader, we record the correspondence between Mitchell's original labels and ours below. _Remark 3_.: The maximal subgroups in (1) are the analogues of the Borel subgroup of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\). The maximal subgroups in (2) when the two subspaces \(V,V^{\prime}\) in the direct sum decomposition are individually defined over \(\mathbb{F}_{\ell}\) are the analogues of normalizers of the split Cartan subgroup of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\). When the two subspaces \(V,V^{\prime}\) are not individually defined over \(\mathbb{F}_{\ell}\) instead, the maximal subgroups in (2) are analogues of the normalizers of the non-split Cartan subgroups of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\). \begin{table} \begin{tabular}{|c|c|} \hline **Mitchell’s label** & **Label in Lemma 2.3** \\ \hline Group having an invariant point and plane & 1a \\ \hline Group having an invariant parabolic congruence & 1b \\ \hline Group having an invariant hyperbolic or elliptic congruence & 2a \\ \hline Group having an invariant quadric & 2b \\ \hline \end{tabular} \end{table} Table 1. Dictionary between maximal subgroup labels in [11]/[13] and Lemma 2.3 _Remark 4_.: We briefly explain why the action of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\) on \(\operatorname{Sym}^{3}(\mathbb{F}_{\ell}^{2})\) preserves a nondegenerate symplectic form. It suffices to show that the restriction to \(\operatorname{SL}_{2}(\mathbb{F}_{\ell})\) fixes a vector in \(\bigwedge^{2}\operatorname{Sym}^{3}(\mathbb{F}_{\ell}^{2})\). This follows by character theory. If \(W\) is the standard \(2\)-dimensional representation of \(\operatorname{SL}_{2}\), then we have \(\bigwedge^{2}(\operatorname{Sym}^{3}W)\simeq\operatorname{Sym}^{4}W\oplus 1\) as representations of \(\operatorname{SL}_{2}\). _Remark 5_.: One can extract explicit generators of the exceptional maximal subgroups from Mitchell's original work3. Indeed [14, the proof of Theorem 8, page 390] gives four explicit matrices that generate a \(G_{1920}\) (which is unique up to conjugacy in \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\)). Mitchell's description of the other exceptional groups is in terms of certain projective linear transformations called skew perspectivities attached to a direct sum decomposition \(V=V_{1}\oplus V_{2}\) into \(2\)-dimensional subspaces. A skew perspectivity of order \(n\) with axes \(V_{1}\) and \(V_{2}\) is the projective linear transformation that scales \(V_{1}\) by a primitive \(n\)th root of unity and fixes \(V_{2}\). 
This proof also gives the axes of the skew perspectivities of order \(2\) and \(3\) that generate the remaining exceptional groups [14, pages 390-391]. Table 5 lists generators of (one representative of the conjugacy class of) each of the exceptional maximal subgroup extracted from Mitchell's descriptions. In the file exceptional.m publicly available with our code, we verify that Magma's list of conjugacy classes of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) agree with those described in Lemma 2.3 for \(3\leq\ell\leq 47\). Footnote 3: Mitchell’s notation for \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\) is \(A_{\nu}(\ell)\) and for \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) is \(A_{1}(\ell)\). _Remark 6_.: The classification of exceptional maximal subgroups of \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) is more subtle than that of \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\), because of the constraint on the similitude character of matrices in \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\). While the similitude character is not well-defined on \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\) (multiplication by a scalar \(c\in\mathbb{F}_{\ell}^{\times}\) scales the similitude character by \(c^{2}\)) it is well-defined modulo squares. The group \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) is the kernel of this natural map: \[1\to\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\to\operatorname{PGSp}_{4}( \mathbb{F}_{\ell})\xrightarrow{\operatorname{mult}}\mathbb{F}_{\ell}^{ \times}/(\mathbb{F}_{\ell}^{\times})^{2}\simeq\{\pm 1\}\to 1.\] An exceptional subgroup of \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\) gives rise to an exceptional subgroup of \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) of either the same size or half the size depending on the image of mult restricted to that subgroup, which in turn depends on the congruence class of \(\ell\). For this reason, the maximal exceptional subgroups of \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) in Mitchell's original classification (also recalled in Dieulefait [19, Section 2.1]) can have order \(1920\)_or_\(960\) and \(720\)_or_\(360\) depending on the congruence class of \(\ell\), and \(2520\) (for \(\ell=7\)). Such an exceptional subgroup gives rise to a _maximal_ exceptional subgroup of \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\) only when mult is surjective (i.e., its intersection with \(\operatorname{PSp}_{4}(\mathbb{F}_{\ell})\) is index \(2\)), which explains the restricted congruence classes of \(\ell\) for which they arise. We now record a lemma that directly follows from the structure of maximal subgroups described above. This lemma will be used in Section 4 to derive a criterion for a subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) to be the entire group. For an element \(T\) in \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\), let \(\operatorname{tr}(T)\), \(\operatorname{mid}(T)\), \(\operatorname{mult}(T)\) denote the trace of \(T\), the middle coefficient of the characteristic polynomial of \(T\), and the similitude character applied to \(T\) respectively4. For a scalar \(\lambda\), we have Footnote 4: Explicitly, the characteristic polynomial of \(T\) is therefore \(t^{4}-\operatorname{tr}(T)t^{3}+\operatorname{mid}(T)t^{2}-\operatorname{ mult}(T)\operatorname{tr}(T)t+\operatorname{mult}(T)^{2}\). 
\[\operatorname{tr}(\lambda T)=\lambda\operatorname{tr}(T),\quad\operatorname{mid}(\lambda T)=\lambda^{2}\operatorname{mid}(T),\quad\operatorname{mult}(\lambda T)=\lambda^{2}\operatorname{mult}(T).\] Hence the quantities \(\operatorname{tr}(T)^{2}/\operatorname{mult}(T)\) and \(\operatorname{mid}(T)/\operatorname{mult}(T)\) are well-defined on \(\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\). For \(\ell>2\) and \(\ast\in\{720,1920,5040\}\), define \[C_{\ell,\ast}\coloneqq\left\{\left(\frac{\operatorname{tr}(T)^{2}}{\operatorname{mult}(T)},\frac{\operatorname{mid}(T)}{\operatorname{mult}(T)}\right):T\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\text{ with image in }\operatorname{PGSp}_{4}(\mathbb{F}_{\ell})\text{ lying in }G_{\ast}\right\}. \tag{2}\] **Lemma 2.4**.: 1. _In cases 2a and 2b of Lemma 2.3:_ (a) _every element in_ \(G_{\ell}\setminus M_{\ell}\) _has trace_ \(0\)_, and,_ (b) _the group_ \(M_{\ell}\) _stabilizes a non-trivial linear subspace of_ \(\overline{\mathbb{F}}_{\ell}^{4}\)_._ 2. _Every element that is contained in a maximal subgroup corresponding to the stabilizer of a twisted cubic has a reducible characteristic polynomial._ 3. _For_ \(\ast\in\{1920,720\}\)_, the set_ \(C_{\ell,\ast}\) _defined in_ (2) _equals the reduction modulo_ \(\ell\) _of the elements of the set_ \(C_{\ast}\) _below._ \[C_{1920} =\{(0,-2),(0,-1),(0,0),(0,1),(0,2),(1,1),(2,1),(2,2),(4,2),(4,3),(8,4),(16,6)\}\] \[C_{720} =\{(0,1),(0,0),(4,3),(1,1),(16,6),(0,2),(1,0),(3,2),(0,-2)\}\] _We also have_ \[C_{7,5040} =\{(0,0),(0,1),(0,2),(0,5),(0,6),(1,0),(1,1),(2,6),(3,2),(4,3),(5,3),(6,3)\}.\] Proof.: 1. In cases 2a and 2b of Lemma 2.3, since any element of the normalizer \(G_{\ell}\) that is not in \(M_{\ell}\) switches elements in the two subspaces \(V_{1}\) and \(V_{2}\) (i.e. maps elements in the subspace \(V_{1}\) in the decomposition \(V_{1}\oplus V_{2}\) to elements in \(V_{2}\) and vice-versa), it follows that any element in \(G_{\ell}\setminus M_{\ell}\) has trace zero. 2. The conjugacy class of maximal subgroups corresponding to the stabilizer of a twisted cubic comes from the embedding \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\xrightarrow{\iota}\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) induced by the natural action of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\) on the space of monomials of degree \(3\) in \(2\) variables. If \(M\) is a matrix in \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\) with eigenvalues \(\lambda,\mu\) (possibly repeated), then the eigenvalues of \(\iota(M)\) are \(\lambda^{3},\mu^{3},\lambda^{2}\mu,\lambda\mu^{2}\) and hence the characteristic polynomial of \(\iota(M)\) factors as \(\big{(}T^{2}-(\lambda^{3}+\mu^{3})T+\lambda^{3}\mu^{3}\big{)}\big{(}T^{2}-(\lambda^{2}\mu+\lambda\mu^{2})T+\lambda^{3}\mu^{3}\big{)}\) over \(\mathbb{F}_{\ell}\) which is reducible over \(\mathbb{F}_{\ell}\). 3. This follows from the description of the maximal subgroups given in Table 5. Each case (except \(G_{5040}\) that only occurs for \(\ell=7\)) depends on a choice of a root of a quadratic polynomial. In the file exceptional_statistics.sage, we generate the corresponding finite subgroups over the appropriate quadratic number field to compute \(C_{\ast}\). It follows that the corresponding values for the subgroup \(G_{\ast}\) in \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) can be obtained by reducing these values modulo \(\ell\). Since the group \(G_{5040}\) only appears for \(\ell=7\), we directly compute the set \(C_{7,5040}\). _Remark 7_.: The condition in Lemma 2.4(3) is the analogue of the condition [11, Proposition 19 (iii)] used to rule out exceptional maximal subgroups of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\). 
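To make the membership condition in Lemma 2.4(3) concrete, the following SageMath sketch (our own illustrative code, not the authors' implementation; the function names are ours) reduces a genus \(2\) curve at a good prime \(p\neq\ell\), reads off \(a_{p}\) and \(b_{p}\) from \(P_{p}(t)\) via the frobenius_polynomial command mentioned above, and checks whether the invariant pair \((a_{p}^{2}/p,b_{p}/p)\) modulo \(\ell\) can lie in the reduction of \(C_{1920}\) or \(C_{720}\).

```python
# A minimal SageMath sketch (names ours); the sets below are copied from Lemma 2.4(3).
from sage.all import GF

C_1920 = [(0,-2),(0,-1),(0,0),(0,1),(0,2),(1,1),(2,1),(2,2),(4,2),(4,3),(8,4),(16,6)]
C_720  = [(0,1),(0,0),(4,3),(1,1),(16,6),(0,2),(1,0),(3,2),(0,-2)]

def frob_invariants(C, p):
    """Return (a_p, b_p) where P_p(t) = t^4 - a_p t^3 + b_p t^2 - p a_p t + p^2."""
    Pp = C.change_ring(GF(p)).frobenius_polynomial()
    return -Pp[3], Pp[2]

def could_be_exceptional(ap, bp, p, ell):
    """True if (a_p^2/p, b_p/p) mod ell lies in the reduction of C_1920 or C_720.
    Requires p != ell; the set C_{7,5040} for ell = 7 would be handled analogously."""
    k = GF(ell)
    pair = (k(ap)**2 / k(p), k(bp) / k(p))
    return pair in {(k(x), k(y)) for (x, y) in C_1920 + C_720}
```

Test 4.4 below packages exactly this membership check, together with the congruence conditions on \(\ell\).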
We end this subsection by including the following lemma, to further highlight the similarities between the above classification of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) and the more familiar classification of maximal subgroups of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\). This lemma is not used elsewhere in the article and is thus for expositional purposes only. **Lemma 2.5**.: 1. _The subgroup_ \(M_{\ell}\) _in the case (_2a_) _when the two nondegenerate subspaces_ \(V_{1}\) _and_ \(V_{2}\) _are individually defined over_ \(\mathbb{F}_{\ell}\) _is isomorphic to_ \[\{(m_{1},m_{2})\in\operatorname{GL}_{2}(\mathbb{F}_{\ell})^{2}\ |\ \det(m_{1})=\det(m_{2})\}.\] _In particular, the order of_ \(M_{\ell}\) _is_ \(\ell^{2}(\ell-1)(\ell^{2}-1)^{2}\)_._ 2. _The subgroup_ \(M_{\ell}\) _in the case (_2b_) when the two isotropic subspaces_ \(V_{1}\) _and_ \(V_{2}\) _are individually defined over_ \(\mathbb{F}_{\ell}\) _is isomorphic to_ \[\{(m_{1},m_{2})\in\operatorname{GL}_{2}(\mathbb{F}_{\ell})^{2}\ |\ m_{1}^{T}m_{2}= \lambda I,\text{ for some }\lambda\in\mathbb{F}_{\ell}^{\ast}\}.\] _In particular, the order of_ \(M_{\ell}\) _is_ \(\ell(\ell-1)^{2}(\ell^{2}-1)\) 3. _The subgroup_ \(M_{\ell}\) _in the case (_2a_) when the two nondegenerate subspaces_ \(V_{1}\) _and_ \(V_{2}\) _are not individually defined over_ \(\mathbb{F}_{\ell}\) _is isomorphic to_ \[\{m\in\operatorname{GL}_{2}(\mathbb{F}_{\ell^{2}})\ |\ \det(m)\in\mathbb{F}_{\ell}^{*}\}.\] _In particular, the order of_ \(M_{\ell}\) _is_ \(\ell^{2}(\ell-1)(\ell^{4}-1)\)_._ 4. _The subgroup_ \(M_{\ell}\) _in the case (_2b_) when the two isotropic subspaces_ \(V_{1}\) _and_ \(V_{2}\) _are not individually defined over_ \(\mathbb{F}_{\ell}\) _is isomorphic to_ \(\operatorname{GU}_{2}(\mathbb{F}_{\ell^{2}})\)_, i.e.,_ \[\{m\in\operatorname{GL}_{2}(\mathbb{F}_{\ell^{2}})\ |\ m^{T}\iota(m)=\lambda I,\ \text{ for some }\lambda\in\mathbb{F}_{\ell}^{*}\},\] _where_ \(\iota\) _denotes the natural extension of the Galois automorphism of_ \(\mathbb{F}_{\ell^{2}}/\mathbb{F}_{\ell}\) _to_ \(\operatorname{GL}_{2}(\mathbb{F}_{\ell^{2}})\)_. In particular, the order of_ \(M_{\ell}\) _is_ \(\ell(\ell^{2}-1)^{2}\)_._ Proof.: Given a direct sum decomposition \(V_{1}\oplus V_{2}\) of a vector space \(V\) over \(\mathbb{F}_{q}\), we get a natural embedding of \(\operatorname{Aut}(V_{1})\times\operatorname{Aut}(V_{2})\ (\cong \operatorname{GL}_{2}(\mathbb{F}_{q})^{2})\) into \(\operatorname{Aut}(V)\ (\cong\operatorname{GL}_{4}(\mathbb{F}_{q}))\), whose image consists of automorphisms that preserve this direct sum decomposition. We will henceforth refer to elements of \(\operatorname{Aut}(V_{1})\times\operatorname{Aut}(V_{2})\) as elements of \(\operatorname{Aut}(V)\) using this embedding. To understand the subgroup \(M_{\ell}\) of \(\operatorname{GSp}_{4}(\mathbb{F}_{q})\) in cases (1) and (2) where the two subspaces in the direct sum decomposition are individually defined over \(\mathbb{F}_{q}\), we need to further impose the condition that the automorphisms in the image of the map \(\operatorname{Aut}(V_{1})\times\operatorname{Aut}(V_{2})\to\operatorname{Aut }(V)\) preserve the symplectic form \(\omega\) on \(V\) up to a scalar. 
In (1), without any loss of generality, the two nondegenerate subspaces \(V_{1}\) and \(V_{2}\) can be chosen to be orthogonal complements under the nondegenerate pairing \(\omega\), and so by Witt's theorem, in a suitable basis for \(V_{1}\oplus V_{2}\) obtained by concatenating a basis of \(V_{1}\) and a basis of \(V_{2}\), the nondegenerate symplectic pairing \(\omega\) has the following block-diagonal shape: \[B\coloneqq\begin{bmatrix}0&1&&\\ -1&0&&\\ &&0&1\\ &&-1&0\end{bmatrix}\cdot\] The condition that an element \((m_{1},m_{2})\in\operatorname{Aut}(V_{1})\oplus\operatorname{Aut}(V_{2})\) preserves the symplectic pairing up to a similitude factor of \(\lambda\) is the condition \((m_{1},m_{2})^{T}B(m_{1},m_{2})=\lambda B\), which boils down to \(\det(m_{1})=\lambda=\det(m_{2})\). Similarly, in (2), without any loss of generality, by Witt's theorem, in a suitable basis for \(V_{1}\oplus V_{2}\) obtained by concatenating a basis of the isotropic subspace \(V_{1}\) and a basis of the isotropic subspace \(V_{2}\), the nondegenerate symplectic pairing \(\omega\) has the following block-diagonal shape. \[B\coloneqq\begin{bmatrix}&&0&1\\ &&1&0\\ 0&-1&&\\ -1&0&&\end{bmatrix}\cdot\] The condition that an element \((m_{1},m_{2})\in\operatorname{Aut}(V_{1})\oplus\operatorname{Aut}(V_{2})\) preserves the symplectic pairing up to a similitude factor of \(\lambda\) is the condition \((m_{1},m_{2})^{T}B(m_{1},m_{2})=\lambda B\), which again boils down to \(m_{1}^{T}m_{2}=\lambda I\). If we have a subspace \(W\) defined over \(\mathbb{F}_{q^{2}}\) but not defined over \(\mathbb{F}_{q}\), and we let \(\overline{W}\) denote the conjugate subspace and further assume that \(W\oplus\overline{W}\) gives a direct sum decomposition of \(V\), then we get a natural embedding of \(\operatorname{Aut}(W)\ (\cong\operatorname{GL}_{2}(\mathbb{F}_{q^{2}}))\) into \(\operatorname{Aut}(V)\ (\cong\operatorname{GL}_{4}(\mathbb{F}_{q}))\) whose image consists of automorphisms that commute with the natural involution of \(V\otimes\mathbb{F}_{q^{2}}\) induced by the Galois automorphism of \(\mathbb{F}_{q^{2}}\) over \(\mathbb{F}_{q}\). The proofs of cases (3) and (4) are analogous to the cases (1) and (2) respectively, by using the direct sum decomposition \(W\oplus\overline{W}\) and letting \(m_{2}=\iota(m_{1})\). The condition that \(\det(m_{1})=\det(m_{2})\) in (1) becomes the condition \(\det(m_{1})=\det(m_{2})=\det\overline{m_{1}}=\overline{\det(m_{1})}\), or equivalently, that \(\det(m_{1})\in\mathbb{F}_{q}\) in (3). Similarly, the condition that \(m_{1}^{T}m_{2}=\lambda I\) in (2) becomes the condition that \(m_{1}^{T}\iota(m_{1})=\lambda I\) in (4). ### Image of inertia and (tame) fundamental characters Dieulefait [10] used Mitchell's work described in the previous subsection to classify the maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) that could occur as the image of \(\rho_{A,\ell}\). This was achieved via an application of a fundamental result of Serre and Raynaud that strongly constrains the action of inertia at \(\ell\), and which we now recall. Fix a prime \(\ell>3\) that does not divide the conductor \(N\) of \(A\). Let \(I_{\ell}\) be an inertia subgroup at \(\ell\). Let \(\psi_{n}\colon I_{\ell}\to\mathbb{F}_{\ell^{n}}^{\times}\) denote a (tame) fundamental character of level \(n\). The \(n\) Galois-conjugate fundamental characters \(\psi_{n,1},\ldots,\psi_{n,n}\) of level \(n\) are given by \(\psi_{n,i}:=\psi_{n}^{\ell^{i}}\). 
Recall that the fundamental character of level \(1\) is simply the mod \(\ell\) cyclotomic character \(\operatorname{cyc}_{\ell}\), and that the product of all fundamental characters of a given level is the cyclotomic character. **Theorem 2.6** (Serre [11], Raynaud [12], cf. [10, Theorem 2.1]).: _Let \(\ell\) be a semistable prime for \(A\). Let \(V/\mathbb{F}_{\ell}\) be an \(n\)-dimensional Jordan-Holder factor of the \(I_{\ell}\)-module \(A[\ell]\). Then \(V\) admits a \(1\)-dimensional \(\mathbb{F}_{\ell^{n}}\)-vector space structure such that \(\rho_{A,\ell}|_{I_{\ell}}\) acts on \(V\) via the character_ \[\psi_{n,1}^{d_{1}}\cdots\psi_{n,n}^{d_{n}}\] _with each \(d_{i}\) equal to either \(0\) or \(1\)._ On the other hand, the following fundamental result of Grothendieck constrains the action of inertia at semistable primes \(p\neq\ell\). **Theorem 2.7** (Grothendieck [14, Expose IX, Prop 3.5]).: _Let \(A\) be an abelian variety over a number field \(K\). Then \(A\) has semistable reduction at \(p\neq\ell\) if and only if the action of \(I_{p}\subset G_{K}\) on \(T_{\ell}A\) is unipotent of length \(2\)._ Combining these two results allows one fine control of the determinant of a subquotient of \(A[\ell]\); this will be used in Section 3. **Corollary 2.8**.: _Let \(A/\mathbb{Q}\) be an abelian surface, and let \(X_{\ell}\) be a Jordan-Holder factor of the \(\overline{\mathbb{F}}_{\ell}[G_{\mathbb{Q}}]\)-module \(A[\ell]\otimes\overline{\mathbb{F}}_{\ell}\). If \(\ell\) is a semistable prime, then_ \[\det X_{\ell}\simeq\epsilon\cdot\operatorname{cyc}_{\ell}^{x}\] _for some character \(\epsilon\colon G_{\mathbb{Q}}\to\overline{\mathbb{F}}_{\ell}^{\times}\) that is unramified at \(\ell\) and some \(0\leq x\leq\dim X_{\ell}\). Moreover, \(\epsilon^{120}=1\)._ Proof.: The first part follows immediately from Theorem 2.6. For the fact that \(\epsilon^{120}=1\), every abelian surface attains semistable reduction over an extension \(K/\mathbb{Q}\) with \([K:\mathbb{Q}]\) dividing \(120\) by [13, Theorem 7.2], and so this follows from Theorem 2.7 since there are no nontrivial unramified characters of \(G_{\mathbb{Q}}\). We can now state Dieulefait's classification of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) that can occur as the image \(\rho_{A,\ell}(G_{\mathbb{Q}})\) for a semistable prime \(\ell>7\). **Proposition 2.9** ([10]).: _Let \(A\) be the Jacobian of a genus \(2\) curve defined over \(\mathbb{Q}\) with Weil pairing \(\omega\) on \(A[\ell]\). If \(\ell>7\) is a semistable prime, then \(\rho_{A,\ell}(G_{\mathbb{Q}})\) is either all of \(\operatorname{GSp}(A[\ell],\omega)\) or it is contained in one of the maximal subgroups of Types (1) or (2) in Lemma 2.3._ See also [14, Proposition 3.15] for an expanded exposition of why the image of \(G_{\mathbb{Q}}\) cannot be contained in a maximal subgroup of Type (3) for a semistable prime \(\ell>7\). _Remark 8_.: However, if \(\ell\) is a prime of additive reduction, or if \(\ell\leq 7\), then the image of \(G_{\mathbb{Q}}\) may also be contained in any of the four types of maximal subgroups described in Lemma 2.3. Nevertheless, by [13, Theorem 6.6], for any prime \(\ell>24\), we have that the exponent of the projective image is bounded below: \(\exp(\mathbb{P}\rho_{A,\ell}(G_{\mathbb{Q}}))\geq(\ell-1)/12\). Since \(\exp(G_{1920})=2\exp(S_{6})=120\) and \(\exp(G_{720})=\exp(S_{5})=60\), the exceptional maximal subgroups cannot occur as \(\rho_{A,\ell}(G_{\mathbb{Q}})\) for \(\ell>1441\). 
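Concretely, the numerical bound at the end of Remark 8 comes from comparing these two exponent estimates: if \(\mathbb{P}\rho_{A,\ell}(G_{\mathbb{Q}})\) were contained in an exceptional maximal subgroup, then \[\frac{\ell-1}{12}\leq\exp(\mathbb{P}\rho_{A,\ell}(G_{\mathbb{Q}}))\leq\exp(G_{1920})=120,\qquad\text{so}\qquad\ell\leq 12\cdot 120+1=1441.\]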
### A consequence of the Chebotarev density theorem Let \(K/\mathbb{Q}\) be a finite Galois extension with Galois group \(G=\operatorname{Gal}(K/\mathbb{Q})\) and absolute discriminant \(d_{K}\). Let \(S\subseteq G\) be a nonempty subset that is closed under conjugation. By the Chebotarev density theorem, we know that \[\lim_{x\to\infty}\frac{|\{p\leq x:p\text{ is unramified in }K\text{ and }\operatorname{Frob}_{p}\in S\}|}{|\{p\leq x\}|}=\frac{|S|}{|G|}. \tag{3}\] Let \(p\) be the least prime such that \(p\) is unramified in \(K\) and \(\operatorname{Frob}_{p}\in S\). There are effective versions of the Chebotarev density theorem that give bounds on \(p\). The best known unconditional bounds are polynomial in \(d_{K}\)[13, 14, 15]. Under GRH, the best known bounds are polynomial in \(\log d_{K}\). In particular Bach and Sorenson [1] showed that under GRH, \[p\leq(4\log d_{K}+2.5[K:\mathbb{Q}]+5)^{2}. \tag{4}\] The present goal is to give an effective version of the Chebotarev density theorem in the context of abelian surfaces. We will use a corollary of (4) that is noted in [15] which allows for the avoidance of a prescribed set of primes by taking a quadratic extension of \(K\). We do this because we will take \(K=\mathbb{Q}(A[\ell])\), and \(p\) being unramified in \(K\) is not sufficient to imply that \(p\) is a prime of good reduction for \(A\). Lastly, we will use that by [11, Proposition 6], if \(K/\mathbb{Q}\) is finite Galois, then \[\log d_{K}\leq([K:\mathbb{Q}]-1)\log\operatorname{rad}(d_{K})+[K:\mathbb{Q}] \log([K:\mathbb{Q}]), \tag{5}\] where \(\operatorname{rad}n=\prod_{p\mid n}p\) denotes the radical of an integer \(n\). **Lemma 2.10**.: _Let \(A/\mathbb{Q}\) be a typical principally polarized abelian surface with conductor \(N_{A}\). Let \(q\) be a prime. Let \(S\subseteq\rho_{A,q}(G_{\mathbb{Q}})\) be a nonempty subset that is closed under conjugation. Let \(p\) be the least prime of good reduction for \(A\) such that \(p\neq q\) and \(\rho_{A,q}(\operatorname{Frob}_{p})\in S\). Assuming GRH, we have_ \[p\leq\left(4\left[(2q^{11}-1)\log\operatorname{rad}(2qN_{A})+22q^{11}\log(2q) \right]+5q^{11}+5\right)^{2}.\] Proof.: Let \(K=\mathbb{Q}(A[q])\). Then \(K/\mathbb{Q}\) is Galois and \[[K:\mathbb{Q}]\leq|\operatorname{GSp}_{4}(\mathbb{F}_{q})|=q^{4}(q^{4}-1)(q^{2 }-1)(q-1)\leq q^{11}.\] As \(\operatorname{rad}d_{K}\) is the product of primes that ramify in \(\mathbb{Q}(A[q])\), the criterion of Neron-Ogg-Shafarevich for abelian varieties [11, Theorem 1] implies that \(\operatorname{rad}(d_{K})\) divides \(\operatorname{rad}(qN_{A})\). Let \(\tilde{K}:=K(\sqrt{m})\) where \(m:=\operatorname{rad}(2N_{A})\). Note that the primes that ramify in \(\tilde{K}\) are precisely \(2\), \(q\), and the primes of bad reduction for \(A\). Thus \(\operatorname{rad}(d_{\tilde{K}})=\operatorname{rad}(2qN_{A})\). Moreover \([\tilde{K}:\mathbb{Q}]\leq 2q^{11}\) and by (5), \[\log(d_{\tilde{K}})\leq(2q^{11}-1)\log\operatorname{rad}(2qN_{A})+22q^{11} \log(2q).\] Applying [15, Corollary 6] to the field \(\tilde{K}\), we get that (under GRH) there exists a prime \(p\) satisfying the claimed bound, that does not divide \(m\), and for which \(\rho_{A,q}(\operatorname{Frob}_{p})\in S\). ## 3. Finding a finite set containing all nonsurjective primes In this section we describe Algorithm 3.1 referenced in Theorem 1.1(1). This algorithm produces a finite list PossiblyNonsurjectivePrimes that provably includes all nonsurjective primes \(\ell\). We also prove Theorem 1.2. 
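Before turning to the individual subalgorithms, we note that the explicit constant in Lemma 2.10 is straightforward to evaluate; the following small sketch (our own code, with function names that are ours) computes the GRH bound for a given \(q\) and conductor \(N\), and is only relevant for the worst-case analyses of the subalgorithms below.

```python
# A minimal sketch (ours) evaluating the explicit GRH bound of Lemma 2.10.
from math import log

def radical(n):
    """Product of the distinct prime divisors of n (trial division; fine for a sketch)."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    return r * (n if n > 1 else 1)

def chebotarev_bound(q, N):
    """Bound from Lemma 2.10 on the least good prime p with rho_{A,q}(Frob_p) in S."""
    log_disc = (2 * q**11 - 1) * log(radical(2 * q * N)) + 22 * q**11 * log(2 * q)
    return (4 * log_disc + 5 * q**11 + 5) ** 2
```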
Since our goal is to produce a finite list (from which we will later remove extraneous primes) it is harmless to include the finitely many bad primes as well as \(2,3,5,7\). Using Proposition 2.9, it suffices to find conditions on \(\ell>7\) for which \(\rho_{A,\ell}(G_{\mathbb{Q}})\) could be contained in one of the maximal subgroups of types (1) and (2) in Lemma 2.3. We first find primes \(\ell\) for which \(\rho_{A,\ell}\) has (geometrically) reducible image (and hence is contained in a maximal subgroup in case (1) of Lemma 2.3 or in a subgroup \(M_{\ell}\) in case (2)). To treat the geometrically irreducible cases, we then make use of the observation from Lemma 2.4(1a) that every element outside of an index \(2\) subgroup has trace \(0\). **Algorithm 3.1**.: _Given a typical genus \(2\) curve \(C/\mathbb{Q}\) with conductor \(N\) and Jacobian \(A\), compute a finite list \(\mathsf{PossiblyNonsurjectivePrimes}\) of primes as follows._ 1. _Initialize_ \(\mathsf{PossiblyNonsurjectivePrimes}=[2,3,5,7]\)_._ 2. _Add to_ \(\mathsf{PossiblyNonsurjectivePrimes}\) _all primes dividing_ \(N\)_._ 3. _Add to_ \(\mathsf{PossiblyNonsurjectivePrimes}\) _the good primes_ \(\ell\) _for which_ \(\rho_{A,\ell}\otimes\overline{\mathbb{F}}_{\ell}\) _could be reducible via Algorithms_ 3.3_,_ 3.6_, and_ 3.10_._ 4. _Add to_ \(\mathsf{PossiblyNonsurjectivePrimes}\) _the good primes_ \(\ell\) _for which_ \(\rho_{A,\ell}\otimes\overline{\mathbb{F}}_{\ell}\) _could be irreducible but nonsurjective via Algorithm_ 3.13_._ 5. _Return_ \(\mathsf{PossiblyNonsurjectivePrimes}\)_._ At a very high-level, each of the subalgorithms of Algorithm 3.1 makes use of a set of auxiliary good primes \(p\). We compute the integral characteristic polynomial of Frobenius \(P_{p}(t)\) and use it to constrain those \(\ell\neq p\) for which the image could have a particular shape. _Remark 9_.: Even though robust methods to compute the conductor \(N\) of a genus \(2\) curve are not implemented at the time of writing, the odd-part \(N_{\mathrm{odd}}\) of \(N\) can be computed via the genus2red function of PARI and the genus2reduction module of SageMath, both based on an algorithm of Liu [14]. Moreover, [1, Theorem 6.2] bounds the \(2\)-exponent of \(N\) above by \(20\) and hence \(N\) can be bounded above by \(2^{20}N_{\mathrm{odd}}\). While these algorithms can be run with only the bound \(2^{20}N_{\mathrm{odd}}\) in place of \(N\), doing so will substantially increase the run-time of the limiting Algorithm 3.10. We now explain each of these steps in detail. ### Good primes that are not geometrically irreducible In this section we describe the conditions that \(\ell\) must satisfy for the base-extension \(\overline{A[\ell]}\coloneqq A[\ell]\otimes_{\mathbb{F}_{\ell}}\overline{\mathbb{F}}_{\ell}\) to be reducible. In this case, the representation \(\overline{A[\ell]}\) is an extension \[0\to X_{\ell}\to\overline{A[\ell]}\to Y_{\ell}\to 0 \tag{6}\] of a (quotient) representation \(Y_{\ell}\) by a (sub) representation \(X_{\ell}\). Recall that \(N_{\mathrm{sq}}\) denotes the largest square divisor of \(N\). **Lemma 3.2**.: _Let \(\ell\) be a prime of good reduction for \(A\) and suppose that \(\overline{A[\ell]}\) sits in sequence (6). Let \(p\neq\ell\) be a good prime for \(A\) and let \(f\) denote the order of \(p\) in \((\mathbb{Z}/N_{\mathrm{sq}}\mathbb{Z})^{\times}\). 
Then there exists \(0\leq x\leq\dim X_{\ell}\) and \(0\leq y\leq\dim Y_{\ell}\) such that \(\mathrm{Frob}_{p}^{\mathrm{gcd}(f,120)}\) acts on \(\det X_{\ell}\) by \(p^{\mathrm{gcd}(f,120)x}\), respectively on \(\det Y_{\ell}\) by \(p^{\mathrm{gcd}(f,120)y}\)._ Proof.: Since \(\ell\) is a good prime and \(X_{\ell}\) is composed of Jordan-Holder factors of \(\overline{A[\ell]}\), Corollary 2.8 constrains its determinant. We have \(\det X_{\ell}=\epsilon\operatorname{cyc}_{\ell}^{x}\) for some character \(\epsilon\colon G_{\mathbb{Q}}\to\overline{\mathbb{F}}_{\ell}\) unramified at \(\ell\), and \(0\leq x\leq\dim X_{\ell}\), and \(\epsilon^{120}=1\). Hence \(\mathrm{Frob}_{p}^{120}\) acts on \(\det X_{\ell}\) by \(\operatorname{cyc}_{\ell}(\mathrm{Frob}_{p})^{120x}=p^{120x}\). In fact, we can do slightly better. Since \(\det\overline{A[\ell]}\simeq\operatorname{cyc}_{\ell}^{2}\), we have \(\det Y_{\ell}\simeq\epsilon^{-1}\operatorname{cyc}_{\ell}^{2-x}\). Since the conductor is multiplicative in extensions, we conclude that \(\operatorname{cond}(\epsilon)^{2}\mid N\). By class field theory, the character \(\epsilon\) factors through \(\left(\mathbb{Z}/\operatorname{cond}(\epsilon)\mathbb{Z}\right)^{\times}\), and hence through \(\left(\mathbb{Z}/N_{\mathrm{sq}}\mathbb{Z}\right)^{\times}\), sending \(\mathrm{Frob}_{p}\) to \(p\pmod{N_{\mathrm{sq}}}\). Since \(p^{f}\equiv 1\pmod{N_{\mathrm{sq}}}\), we have that \(\epsilon(\mathrm{Frob}_{p})^{\mathrm{gcd}(f,120)}=1\), and we see that \(\mathrm{Frob}_{p}^{\mathrm{gcd}(f,120)}\) acts on \(\det X_{\ell}\) by \(p^{\mathrm{gcd}(f,120)x}\). Exchanging the roles of \(X_{\ell}\) and \(Y_{\ell}\), we deduce the analogous statement for \(Y_{\ell}\). This is often enough information to find all \(\ell\) for which \(\overline{A[\ell]}\) has a nontrivial subquotient. Namely, by Theorem 2.1, every root of \(P_{p}(t)\) has complex absolute value \(p^{1/2}\). Thus the \(\gcd(f,120)\)-th power of each root has complex absolute value \(p^{\mathrm{gcd}(f,120)/2}\), and hence is never _integrally_ equal to \(1\) or \(p^{\mathrm{gcd}(f,120)}\). Since Lemma 3.2 guarantees that this equality must hold modulo \(\ell\) for any good prime \(\ell\) for which \(\overline{A[\ell]}\) is reducible with a \(1\)-dimensional subquotient, we always get a nontrivial condition on \(\ell\). Some care must be taken to rule out \(\ell\) for which \(\overline{A[\ell]}\) only has \(2\)-dimensional subquotient(s). #### 3.1.1. Odd-dimensional subquotient Let \(p\) be a good prime. Given a polynomial \(P(t)\) and an integer \(f\), write \(P^{(f)}(t)\) for the polynomial whose roots are the \(f\)th powers of roots of \(P(t)\). Universal formulas for such polynomials in terms of the coefficients of \(P(t)\) are easy to compute, and are implemented in our code in the case where \(P\) is a degree \(4\) polynomial whose roots multiply in pairs to \(p^{\alpha}\), and \(f\mid 120\). **Algorithm 3.3**.: _Given a typical genus \(2\) Jacobian \(A/\mathbb{Q}\) of conductor \(N\), let \(f\) denote the order of \(p\) in \(\left(\mathbb{Z}/N_{\mathrm{sq}}\mathbb{Z}\right)^{\times}\) and write \(f^{\prime}=\gcd(f,120)\). Compute an integer \(M_{\mathrm{odd}}\) as follows._ 1. _Choose a nonempty finite set_ \(\mathcal{T}\) _of auxiliary good primes_ \(p\nmid N\)_._ 2. _For each_ \(p\)_, compute_ \[R_{p}:=P_{p}^{(f^{\prime})}(1).\] 3. 
_Let_ \(M_{\mathrm{odd}}=\gcd_{p\in\mathcal{T}}(pR_{p})\) _over all auxiliary primes._ _Return the list of prime divisors \(\ell\) of \(M_{\mathrm{odd}}\)._ **Proposition 3.4**.: _Any good prime \(\ell\) for which \(\overline{A[\ell]}\) has an odd-dimensional subrepresentation is returned by Algorithm 3.3._ Proof.: Since \(\overline{A[\ell]}\) is \(4\)-dimensional and has an odd-dimensional subrepresentation, it has a \(1\)-dimensional subquotient. For any \(p\in\mathcal{T}\), Lemma 3.2 shows that \(\mathrm{Frob}_{p}^{f^{\prime}}\) acts on \(\det X_{\ell}\) by either \(p^{f^{\prime}}\) or by \(1\). Thus, the action of \(\mathrm{Frob}_{p}^{f^{\prime}}\) on \(\overline{A[\ell]}\) has an eigenvalue that is congruent to \(p^{f^{\prime}}\) or \(1\) modulo \(\ell\), and so \(P_{p}^{(f^{\prime})}(t)\) has a root that is congruent to \(1\) or \(p^{f^{\prime}}\) modulo \(\ell\). Since the roots of \(P^{(f^{\prime})}(t)\) multiply in pairs to \(p^{f^{\prime}}\), we have \(P_{p}^{(f^{\prime})}(p^{f^{\prime}})=p^{2f^{\prime}}P_{p}^{(f^{\prime})}(1)\). Hence \(\ell\) divides \(p\cdot P_{p}^{(f^{\prime})}(1)=pR_{p}\). Using Theorem 2.1, we can give a theoretical bound on the "worst case" of this step of the algorithm using only one auxiliary prime \(p\). Of course, taking the greatest common divisor over multiple auxiliary primes will likely remove extraneous factors, and in practice this step of the algorithm runs substantially faster than other steps. **Proposition 3.5**.: _Algorithm 3.3 terminates. More precisely, if \(p\) is any good prime for \(A\), then_ \[0\neq|M_{\mathrm{odd}}|\ll p^{240}\] _where the implied constant is absolute._ Proof.: This follows from the fact that the coefficient of \(t^{i}\) in \(P_{p}^{(f^{\prime})}(t)\) has magnitude on the order of \(p^{(2-i)f^{\prime}}\) and \(f^{\prime}\leq 120\). #### 3.1.2. Two-dimensional subquotients We now assume that \(\overline{A[\ell]}\) is reducible, but does not have any odd-dimensional subquotients. In particular, it has an irreducible subrepresentation \(X_{\ell}\) of dimension \(2\), with irreducible quotient \(Y_{\ell}\) of dimension \(2\). If \(\overline{A[\ell]}\) is reducible but indecomposable, then \(X_{\ell}\) is the unique subrepresentation of \(\overline{A[\ell]}\) and \(Y_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\) is the unique subrepresentation of \(\overline{A[\ell]}^{\vee}\otimes\mathrm{cyc}_{\ell}\). The isomorphism \(T_{\ell}A\simeq(T_{\ell}A)^{\vee}\otimes\mathrm{cyc}_{\ell}\) from (1) yields an isomorphism \(\overline{A[\ell]}\simeq(\overline{A[\ell]})^{\vee}\otimes\mathrm{cyc}_{\ell}\) and hence \(X_{\ell}\simeq Y_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\). Otherwise, \(\overline{A[\ell]}\simeq X_{\ell}\oplus Y_{\ell}\) and so the nondegeneracy of the Weil pairing gives \[X_{\ell}\oplus Y_{\ell}\simeq(X_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}) \oplus(Y_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell})\,.\] Therefore either: 1. \(X_{\ell}\simeq Y_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\) and \(Y_{\ell}\simeq X_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\), or 2. \(X_{\ell}\simeq X_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\) and \(Y_{\ell}\simeq Y_{\ell}^{\vee}\otimes\mathrm{cyc}_{\ell}\) and \(\overline{A[\ell]}\simeq X_{\ell}\oplus Y_{\ell}\). We call the first case related \(2\)-dimensional subquotients and the second case self-dual \(2\)-dimensional subrepresentations. 
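Before treating these two cases, we record a short SageMath sketch of Algorithm 3.3 above (our own illustrative code; the function and parameter names are ours). It uses the elementary identity \(P_{p}^{(f^{\prime})}(1)=\prod_{i}(1-\lambda_{i}^{f^{\prime}})=\operatorname{Res}\big{(}P_{p}(x),x^{f^{\prime}}-1\big{)}\), valid because \(P_{p}\) is monic of even degree, to avoid constructing \(P_{p}^{(f^{\prime})}\) explicitly.

```python
# A minimal SageMath sketch of Algorithm 3.3 (names ours).
from sage.all import ZZ, GF, PolynomialRing, Mod, gcd

def M_odd(C, N_sq, aux_primes):
    """gcd over auxiliary good primes p (not dividing N) of p * P_p^{(f')}(1),
    where f' = gcd(f, 120) and f is the order of p modulo N_sq."""
    R = PolynomialRing(ZZ, 'x'); x = R.gen()
    vals = []
    for p in aux_primes:
        f = 1 if N_sq == 1 else Mod(p, N_sq).multiplicative_order()
        fprime = gcd(f, 120)
        Pp = R(C.change_ring(GF(p)).frobenius_polynomial())
        Rp = Pp.resultant(x**fprime - 1)    # equals P_p^{(f')}(1) since P_p is monic
        vals.append(p * Rp)
    return gcd(vals)
```

The primes returned by Algorithm 3.3 are then the prime divisors of this quantity, for any choice of good auxiliary primes not dividing \(N\).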
We will see that the ideas of Lemma 3.2 easily extend to treat the related subquotient case; we will use the validity of Serre's conjecture to treat the self-dual case. In the case that \(\overline{A[\ell]}\) is decomposable, the above two cases correspond respectively to the index \(2\) subgroup \(M_{\ell}\) in cases (2a) (the isotropic case) and (2b) (the nondegenerate case) of Lemma 2.3. #### 3.1.3. Related two-dimensional subquotients Let \(p\) be a good prime. Let \(P_{p}(t)\coloneqq t^{4}-at^{3}+bt^{2}-pat+p^{2}\) be the characteristic polynomial of \(\operatorname{Frob}_{p}\) acting on \(\overline{A[\ell]}\). Suppose that \(\alpha\) and \(\beta\) are the eigenvalues of \(\operatorname{Frob}_{p}\) acting on the subrepresentation \(X_{\ell}\). Then, since \(X_{\ell}\simeq Y_{\ell}^{\vee}\otimes\operatorname{cyc}_{\ell}\), the eigenvalues of the action of \(\operatorname{Frob}_{p}\) on \(Y_{\ell}\) are \(p/\alpha\) and \(p/\beta\). The action of \(\operatorname{Frob}_{p}\) on \(\det X_{\ell}\) is therefore by a product of two of the roots of \(P_{p}(t)\) that do not multiply to \(p\). Note that there are four such pairs of roots of \(P_{p}(t)\) that do not multiply to \(p\). Let \(Q_{p}(t)\) be the quartic polynomial whose roots are the products of pairs of roots of \(P_{p}(t)\) that do not multiply to \(p\). By design, the roots of \(Q_{p}(t)\) have complex absolute value \(p\), but are not equal to \(p\). (It is elementary to work out that \[Q_{p}(t)=t^{4}-(b-2p)t^{3}+p(a^{2}-2b+2p)t^{2}-p^{2}(b-2p)t+p^{4}\] and is a quartic whose roots multiply in pairs to \(p^{2}\).) **Algorithm 3.6**.: _Given a typical genus \(2\) Jacobian \(A/\mathbb{Q}\) of conductor \(N\), let \(f\) denote the order of \(p\) in \(\left(\mathbb{Z}/N_{\operatorname{sq}}\mathbb{Z}\right)^{\times}\) and write \(f^{\prime}=\gcd(f,120)\). Compute an integer \(M_{\text{related}}\) as follows._ 1. _Choose a finite set_ \(\mathcal{T}\) _of auxiliary good primes_ \(p\nmid N\)_;_ 2. _For each_ \(p\)_, compute the product_ \[R_{p}:=Q_{p}^{(f^{\prime})}(1)Q_{p}^{(f^{\prime})}(p^{f^{\prime}})\] 3. _Let_ \(M_{\text{related}}=\gcd_{p\in\mathcal{T}}(pR_{p})\)_._ _Return the list of prime divisors \(\ell\) of \(M_{\text{related}}\)._ **Proposition 3.7**.: _Any good prime \(\ell\) for which \(\overline{A[\ell]}\) has related two-dimensional subquotients is returned by Algorithm 3.6._ Proof.: Proceed similarly as in the proof of Proposition 3.4 -- in particular, \(\ell\) divides \(Q_{p}^{(f^{\prime})}(1)\), \(Q_{p}^{(f^{\prime})}(p^{f^{\prime}})\), or \(Q_{p}^{(f^{\prime})}(p^{2f^{\prime}})\) and hence \(\ell\) divides \(pR_{p}\) since \(Q_{p}^{(f^{\prime})}(p^{2f^{\prime}})=p^{4f^{\prime}}Q_{p}^{(f^{\prime})}(1)\). A theoretical "worst case" analysis yields the following. **Proposition 3.8**.: _Algorithm 3.6 terminates. More precisely, if \(q\) is the smallest surjective prime for \(A\), then a good prime \(p\) for which \(R_{p}\) is nonzero is bounded by a function of \(q\). Assuming GRH,_ \[p\ll q^{22}\log^{2}(qN),\] _where the implied constants are absolute and effectively computable. Moreover, for such a prime \(p\),_ \[|M_{\text{related}}|\ll p^{961}\ll q^{21142}\log^{1922}(qN),\] _where the implied constants are absolute._ Proof.: By Serre's open image theorem for genus \(2\) curves, such a prime \(q\) exists, and by Lemma 2.10, the prime \(p\) can be chosen such that \(R_{p}\) is nonzero modulo \(q\). 
Finally, \[M_{\text{related}}\leq pR_{p}=pQ_{p}^{(f^{\prime})}(1)Q_{p}^{(f^{\prime})}(p^{f^{\prime}})\ll p^{8f^{\prime}+1}\ll p^{961},\] since the coefficient of \(t^{i}\) in \(Q_{p}^{(f^{\prime})}(t)\) has magnitude on the order of \(p^{(4-i)f^{\prime}}\) and \(f^{\prime}\leq 120\). #### 3.1.4. Self-dual two-dimensional subrepresentations In this case, both subrepresentations \(X_{\ell}\) and \(Y_{\ell}\) are absolutely irreducible \(2\)-dimensional Galois representations with determinant the cyclotomic character \(\operatorname{cyc}_{\ell}\). It follows that the representations are odd (i.e., the determinant of complex conjugation is \(-1\)). Therefore, by the Khare-Wintenberger theorem (formerly Serre's conjecture on the modularity of mod-\(\ell\) Galois representations) [12, 13, 14], both \(X_{\ell}\) and \(Y_{\ell}\) are modular; that is, for \(i=1,2\), there exist newforms \(f_{i}\in S^{\operatorname{new}}_{k_{i}}(\Gamma_{1}(N_{i}),\epsilon_{i})\) such that \[X_{\ell}\cong\overline{\rho}_{f_{1},\ell}\text{ and }Y_{\ell}\cong\overline{\rho}_{f_{2},\ell}.\] Furthermore, by the multiplicativity of Artin conductors, we obtain the divisibility \(N_{1}N_{2}\mid N\). **Lemma 3.9**.: _Both \(f_{1}\) and \(f_{2}\) have weight two and trivial Nebentypus; that is, \(k_{1}=k_{2}=2\), and \(\epsilon_{1}=\epsilon_{2}=1\)._ Proof.: From Theorem 2.6, we have that \(X_{\ell}|_{I_{\ell}}\) and \(Y_{\ell}|_{I_{\ell}}\) must each be conjugate to either of the following subgroups of \(\operatorname{GL}_{2}(\mathbb{F}_{\ell})\): \[\begin{pmatrix}1&*\\ 0&\operatorname{cyc}_{\ell}\end{pmatrix}\text{ or }\begin{pmatrix}\psi_{2}&0 \\ 0&\psi_{2}^{\ell}\end{pmatrix}.\] The assertion of weight \(2\) now follows from [1, Proposition 3]. (Alternatively, one may use Proposition 4 of _loc. cit._, observing that \(X_{\ell}\) and \(Y_{\ell}\) are finite and flat as group schemes over \(\mathbb{Z}_{\ell}\) because \(\ell\) is a prime of good reduction.) From Section 1 of _loc. cit._, the Nebentypus \(\epsilon_{i}\) of \(f_{i}\) satisfies, for all \(p\nmid\ell N\), \[\det X_{\ell}(\operatorname{Frob}_{p})=p\cdot\epsilon_{i}(p),\] where this equality is viewed inside \(\overline{\mathbb{F}}_{\ell}^{\times}\). The triviality follows. We therefore have newforms \(f_{i}\in\mathcal{S}_{2}^{\operatorname{new}}(\Gamma_{0}(N_{i}))\) such that \[\overline{A[\ell]}\simeq\overline{\rho}_{f_{1},\ell}\oplus\overline{\rho}_{f_{2},\ell}. \tag{7}\] We may assume without loss of generality that \(N_{1}\leq\sqrt{N}\). Let \(p\nmid N\) be an auxiliary prime. We obtain from equation (7) that the integral characteristic polynomial of Frobenius factors: \[P_{p}(t)\equiv(t^{2}-a_{p}(f_{1})t+p)(t^{2}-a_{p}(f_{2})t+p)\mod\ell;\] here we use the standard property that, for \(f\) a normalised eigenform with trivial Nebentypus, \(\rho_{f,\ell}(\operatorname{Frob}_{p})\) satisfies the polynomial equation \(t^{2}-a_{p}(f)t+p\) for \(p\neq\ell\). In particular, we have \[\operatorname{Res}(P_{p}(t),t^{2}-a_{p}(f_{1})t+p)\equiv 0\mod\ell.\] This serves as the basis of the algorithm to find all primes \(\ell\) in this case. **Algorithm 3.10**.: _Given a typical genus \(2\) Jacobian \(A/\mathbb{Q}\) of conductor \(N\), compute an integer \(M_{\text{self-dual}}\) as follows._ 1. _Compute the set_ \(S\) _of divisors_ \(d\) _of_ \(N\) _with_ \(d\leq\sqrt{N}\)_._ 2. _For each_ \(d\in S\)_:_ (a) 
_compute the Hecke \(L\)-polynomial_ \[Q_{d}(t):=\prod_{f}(t^{2}-a_{p}(f)t+p),\] _where the product is taken over the finitely many newforms in_ \(\mathcal{S}_{2}^{\operatorname{new}}(\Gamma_{0}(d))\)_;_ (b) _choose a finite set_ \(\mathcal{T}\) _of auxiliary primes_ \(p\nmid N\)_;_ (c) _for each auxiliary prime_ \(p\)_, compute the resultant_ \[R_{p}(d):=\operatorname{Res}(P_{p}(t),Q_{d}(t));\] (d) _take the greatest common divisor_ \[M(d):=\gcd_{p\in\mathcal{T}}(pR_{p}(d)).\] 3. _Let_ \(M_{\text{self-dual}}:=\prod_{d\in S}M(d)\)_._ _Return the list of prime divisors \(\ell\) of \(M_{\text{self-dual}}\)._ **Proposition 3.11**.: _Any good prime \(\ell\) for which \(\overline{A[\ell]}\) has self-dual two-dimensional subrepresentations is returned by Algorithm 3.10._ Proof.: If \(\ell\) is in \(\mathcal{T}\) for any \(d\in S\), then \(\ell\) is in the output because \(M_{\text{self-dual}}\) is a multiple of \(M(d)\) which in turn is a multiple of any element of \(\mathcal{T}\). Otherwise, as explained before Algorithm 3.10, there is some \(N_{1}\in S\) and some newform \(f_{1}\in\mathcal{S}_{2}^{\text{new}}(\Gamma_{0}(N_{1}))\) such that \(\operatorname{Res}(P_{p}(t),t^{2}-a_{p}(f_{1})t+p)\equiv 0\pmod{\ell}\) for every \(p\in\mathcal{T}\). In particular, \(R_{p}(N_{1})\equiv 0\pmod{\ell}\), so \(\ell\) divides \(M(N_{1})\) and \(M_{\text{self-dual}}\). We can again do a "worst case" theoretical analysis of this algorithm to conclude the following. As this indicates, this is by far the limiting step of the algorithm. **Proposition 3.12**.: _Algorithm 3.10 terminates. More precisely, if \(q\) is the smallest surjective prime for \(A\), then a good prime \(p\) for which \(R_{p}(d)\) is nonzero is bounded by a function of \(q\). Assuming GRH, \(p\ll q^{22}\log^{2}(qN)\), where the implied constant is absolute and effectively computable. Moreover, for such a prime \(p\), we have_ \[|R_{p}(d)|\ll(2p^{1/2})^{8\dim\mathcal{S}_{2}^{\text{new}}(\Gamma_{0}(d))}\ll(4p)^{(d+1)/3},\] _and so all together_ \[|M_{\text{self-dual}}|\ll(4q)^{N^{1/2+\epsilon}},\] _where the implied constants are absolute._ Proof.: As in Proposition 3.8, we use Serre's open image theorem and the Effective Chebotarev Theorem. If \(R_{p}(d)\) is zero integrally, then in particular \(R_{p}(d)\equiv 0\pmod{q}\) and \(P_{p}(t)\) is reducible modulo \(q\). Since \(\operatorname{GSp}_{4}(\mathbb{F}_{q})\) contains elements that do not have reducible characteristic polynomial, Lemma 2.10 implies that such elements are the image of \(\operatorname{Frob}_{p}\) for \(p\) bounded as claimed. The resultant \(R_{p}(d)\) is the product of the pairwise differences of the roots of \(P_{p}(t)\) and \(Q_{d}(t)\), which all have complex absolute value \(p^{1/2}\). Hence the pairwise differences have absolute value at most \(2p^{1/2}\). Moreover \(\dim\mathcal{S}_{2}^{\text{new}}(\Gamma_{0}(d))\leq(d+1)/12\) by [10, Theorem 2]. Since there are \(8\dim\mathcal{S}_{2}^{\text{new}}(\Gamma_{0}(d))\) such terms multiplied to give \(R_{p}(d)\), the bound for \(R_{p}(d)\) follows. Since \(M_{\text{self-dual}}=\prod\limits_{\begin{subarray}{c}d\mid N\\ d\leq\sqrt{N}\end{subarray}}pR_{p}(d)\), it suffices to bound \[\sum\limits_{\begin{subarray}{c}d|N\\ d\leq\sqrt{N}\end{subarray}}\frac{d+4}{3}\leq\sum\limits_{\begin{subarray}{c}d|N\\ d\leq\sqrt{N}\end{subarray}}\frac{\sqrt{N}+4}{3}\leq\sigma_{0}(N)\frac{\sqrt{N}+4}{3}.\] Since \(\sigma_{0}(N)\ll N^{\epsilon}\) by [1, (31) on page 296], we obtain the claimed bound. 
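The following SageMath sketch (our own illustrative code, not the authors' implementation) carries out steps (a) and (c) of Algorithm 3.10 for a single auxiliary prime. It assembles \(Q_{d}(t)\) from the characteristic polynomial \(H_{d}\) of the Hecke operator \(T_{p}\) on the new cuspidal subspace, computed via modular symbols (compare Remark 10 below), using the identity \(Q_{d}(t)=\operatorname{Res}_{x}\big{(}H_{d}(x),\,t^{2}-xt+p\big{)}\), which holds because \(H_{d}\) is monic with roots the eigenvalues \(a_{p}(f)\).

```python
# A minimal SageMath sketch of steps (a) and (c) of Algorithm 3.10 (names ours).
from sage.all import QQ, ZZ, PolynomialRing, ModularSymbols, Gamma0

def Rp_selfdual(Pp, p, d):
    """Res(P_p(t), Q_d(t)) with Q_d(t) the product of t^2 - a_p(f) t + p over
    the newforms f in S_2^new(Gamma_0(d)); Pp is the integral char poly of Frob_p."""
    R = PolynomialRing(QQ, ['x', 't']); x, t = R.gens()
    M = ModularSymbols(Gamma0(d), 2, sign=1).cuspidal_subspace().new_subspace()
    if M.dimension() == 0:
        return ZZ(1)                              # no newforms of level d
    H = M.hecke_polynomial(p)                     # roots are the eigenvalues a_p(f)
    Qd = H(x).resultant(t**2 - x*t + p, x)        # Q_d(t) = Res_x(H(x), t^2 - x t + p)
    return ZZ(Pp(t).resultant(Qd, t).constant_coefficient())
```

Step (d) is then \(\gcd_{p}\big{(}p\,R_{p}(d)\big{)}\) over the chosen auxiliary primes.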
_Remark 10_.: The polynomial \(Q_{d}(t)\) in step (2) of Algorithm 3.10 is closely related to the characteristic polynomial \(H_{d}(t)\) of the Hecke operator \(T_{p}\) acting on the space \(S_{2}(\Gamma_{0}(d))\), which may be computed via modular symbols computations. One may recover \(Q_{d}(t)\) from \(H_{d}(t)\) by first homogenizing \(H\) with an auxiliary variable \(z\) (say) to obtain \(H_{d}(t,z)\), and setting \(t=1+pz^{2}\) (an observation we made in conjunction with Joseph Wetherell). In our computation of nonsurjective primes for the database of genus \(2\) curves with conductor at most \(2^{20}\) (including those in the LMFDB), we only needed to use polynomials \(Q_{d}(t)\) for level up to \(2^{10}\) (since step (1) of the Algorithm has a \(\sqrt{N}\) term). We are grateful to Andrew Sutherland for providing us with a precomputed dataset for these levels resulting from the creation of an extensive database of modular forms going well beyond what was previously available [1]. _Remark 11_.: Our Sage implementation uses two auxiliary primes in Step 2(b) of the above algorithm. Increasing the number of such primes yields smaller supersets at the expense of longer runtime. ### Good primes that are geometrically irreducible Let \(\phi\) be any quadratic Dirichlet character \(\phi\colon(\mathbb{Z}/N\mathbb{Z})^{\times}\to\{\pm 1\}\). Our goal in this subsection is to find all good primes \(\ell\) governed by \(\phi\), by which we mean that \[\operatorname{tr}(\rho_{A,\ell}(\operatorname{Frob}_{p}))\equiv a_{p}\equiv 0 \mod\ell\] whenever \(\phi(p)=-1\). We will consider the set of all quadratic Dirichlet character \(\phi\colon(\mathbb{Z}/N\mathbb{Z})^{\times}\to\{\pm 1\}\). Using the structure theorem for finite abelian groups and the fact that \(\phi\) factors through \((\mathbb{Z}/N\mathbb{Z})^{\times}/((\mathbb{Z}/N\mathbb{Z})^{\times})^{2}\), this set has the structure of an \(\mathbb{F}_{2}\)-vector space of dimension \[d(N)\mathrel{\mathop{:}}=\omega(N)+\begin{cases}0&\colon\,v_{2}(N)=0\\ -1&\colon\,v_{2}(N)=1\\ 0&\colon\,v_{2}(N)=2\\ 1&\colon\,v_{2}(N)\geq 3,\end{cases}\] where \(\omega(m)\) denotes the number of prime factors of \(m\) and \(v_{2}(m)\) is the \(2\)-adic valuation of \(m\). In particular, \(d(N)\leq\omega(N)+1\). **Algorithm 3.13**.: _Given a typical genus \(2\) Jacobian \(A/\mathbb{Q}\) of conductor \(N\), compute an integer \(M_{\text{quad}}\) as follows._ 1. _Compute the set_ \(S\) _of quadratic Dirichlet characters_ \(\phi\colon(\mathbb{Z}/N\mathbb{Z})^{\times}\to\{\pm 1\}\)_._ 2. _For each_ \(\phi\in S\)_:_ 1. _Choose a nonempty finite set_ \(\mathcal{T}\) _of "auxiliary" primes_ \(p\nmid N\) _for which_ \(a_{p}\neq 0\) _and_ \(\phi(p)=-1\)_._ 2. _Take the greatest common divisor_ \[M_{\phi}\mathrel{\mathop{:}}=\gcd_{p\in\mathcal{T}}(pa_{p}),\] _over all auxiliary primes_ \(p\)_._ 3. _Let_ \(M_{\text{quad}}\mathrel{\mathop{:}}=\prod_{\phi\in S}M_{\phi}\)_._ _Return the list of prime divisors \(\ell\) of \(M_{\text{quad}}\)._ **Proposition 3.14**.: _Any good prime \(\ell\) for which \(\overline{A[\ell]}\) is governed by a quadratic character is returned by Algorithm 3.13._ Proof.: Suppose that \(\overline{A[\ell]}\) is governed by the quadratic character \(\phi\colon(\mathbb{Z}/N\mathbb{Z})^{\times}\to\{\pm 1\}\). Then for every good prime \(p\neq\ell\) for which \(\phi(p)=-1\), the prime \(\ell\) must divide the integral trace of Frobenius \(a_{p}\). Hence \(\ell\) divides \(M_{\phi}\) and \(M_{\text{quad}}\). 
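A short SageMath sketch of Algorithm 3.13 (our own illustrative code; names and the cutoff parameter are ours): the quadratic Dirichlet characters modulo \(N\) can be enumerated with DirichletGroup, and for each one we take the gcd of \(p\,a_{p}\) over a few auxiliary primes with \(\phi(p)=-1\).

```python
# A minimal SageMath sketch of Algorithm 3.13 (names ours).
from sage.all import QQ, GF, ZZ, DirichletGroup, gcd, prime_range

def M_quad(C, N, bound=200):
    """Product over nontrivial quadratic characters phi mod N of gcd_p(p*a_p),
    the gcd taken over primes p < bound with p not dividing N, phi(p) = -1, a_p != 0."""
    quad_chars = [phi for phi in DirichletGroup(N, QQ) if phi.order() == 2]
    total = ZZ(1)
    for phi in quad_chars:
        vals = []
        for p in prime_range(3, bound):
            # for simplicity we only skip p | N; in practice one also avoids
            # the remaining bad primes of the curve, cf. Remark 12 below
            if N % p == 0 or phi(p) != -1:
                continue
            Pp = C.change_ring(GF(p)).frobenius_polynomial()
            ap = -Pp[3]
            if ap != 0:
                vals.append(p * ap)
        if vals:
            total *= gcd(vals)
    return total    # the primes of Algorithm 3.13 are the prime divisors of this value
```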
**Proposition 3.15**.: _Algorithm 3.13 terminates. More precisely, if \(q\) is the smallest surjective prime for \(A\), then a good prime \(p\) for which \(\phi(p)=-1\) and \(a_{p}\) is nonzero is bounded by a function of \(q\). Assuming GRH, \(p\ll 2^{2d(N)}q^{22}\log^{2}(qN)\), where the implied constant is absolute and effectively computable. Moreover, we have_ \[\prod_{\phi\in S}\prod_{\begin{subarray}{c}\ell\text{ \ of \(\mathbb{P}\rho_{A,q}\), we conclude in the same way as [14, Proof of Lemma 21] that for any \(\alpha\in V^{\vee}\), there exists an \(X_{\alpha}\in\operatorname{GSp}_{4}(\mathbb{F}_{q})\) with \(\operatorname{tr}(X_{\alpha})\neq 0\) such that \((\alpha,X_{\alpha})\) is in the image of \(\rho_{V}\times\rho_{A,\ell}\). Apply the effective Chebotarev density theorem to the Galois extension corresponding to \(\rho_{V}\times\rho_{A,q}\). This has degree at most \(2^{d(N)}|\operatorname{GSp}_{4}(\mathbb{F}_{q})|\) and is unramified outside of \(qN\). Therefore, assuming GRH and combining (4) and (5), there exists a prime \[p_{\alpha}\ll 2^{2d(N)}q^{22}\log^{2}(qN)\] for which \((\alpha,X_{\alpha})=(\rho_{V}(\operatorname{Frob}_{p_{\alpha}}),\rho_{A,q}( \operatorname{Frob}_{p_{\alpha}}))\). Let \(\phi\) be a character not in the kernel of \(\alpha\). Any exceptional prime \(\ell\) governed by \(\phi\) must divide \(p_{\alpha}a_{p_{\alpha}}\), which is nonzero because it is nonzero modulo \(q\). This proves that the algorithm terminates, since every \(\phi\) is not in the kernel of precisely half of all \(\alpha\in V^{\vee}\). We now bound the size of the product of all \(\ell\) governed by a character in \(S\). If \(\ell\) is governed by \(\phi\), then \(\ell\) divides the quantity \[p|a_{p}|\leq p^{3/2}\ll 2^{3d(N)}q^{33}\log^{3}(qN).\] Taking the product over all nonzero \(\alpha\) in \(V\) (of which there are \(2^{d(N)}-1\)), each \(\ell\) will show up half the time, so we obtain: \[\left(\prod_{\begin{subarray}{c}\ell\text{ governed}\\ \text{by }\phi\in S\end{subarray}}\ell\right)^{2^{d(N)-1}}\ll\left(2^{3d(N)}q^{33 }\log^{3}(qN)\right)^{2^{d}(N)-1},\] which implies the result by taking the \((2^{d(N)-1})\)th root of both sides. Putting all of these pieces together, we obtain the following. Proof of Theorem 1.1(1).: If \(\rho_{A,\ell}\) is nonsurjective, \(\ell>7\), and \(\ell+N\), then Proposition 2.9 implies that \(\rho_{A,\ell}(G_{\mathbb{Q}})\) must be in one of the maximal subgroups of Type (1) or (2) listed in Lemma 2.3. If it is contained in one of the reducible subgroups, i.e. the subgroups of Type (1), then \(\rho_{A,\ell}(G_{\mathbb{Q}})\) (and, hence, \(\rho_{A,\ell}(G_{\mathbb{Q}})\otimes\overline{\mathbb{F}}_{\ell}\)) is reducible, and so \(\ell\) is added to PossiblyNonsurjectivePrimes in Step (3) by Propositions 3.4, 3.7, and 3.11. If \(\rho_{A,\ell}(G_{\mathbb{Q}})\) is contained in one of the index \(2\) subgroups \(M_{\ell}\) of an irreducible subgroup of Type (2) listed in Lemma 2.3, then again \(\ell\) is added to PossiblyNonsurjectivePrimes in Step (3), since \(M_{\ell}\otimes\overline{\mathbb{F}}_{\ell}\) is always reducible by Lemma 2.4(1b). Hence we may assume that \(\rho_{A,\ell}(G_{\mathbb{Q}})\) is contained in one of the irreducible maximal subgroups \(G_{\ell}\) of Type (2) listed in Lemma 2.3, but not in the index \(2\) subgroup \(M_{\ell}\). 
The normalizer character \[G_{\mathbb{Q}}\xrightarrow{\rho_{A,\ell}}G_{\ell}\to G_{\ell}/M_{\ell}=\{ \pm 1\}\] is nontrivial and unramified outside of \(N\), and so it corresponds to a quadratic Dirichlet character \(\phi\colon(\mathbb{Z}/N\mathbb{Z})^{\times}\to\{\pm 1\}\). Lemma 2.4(1a) shows that \(\operatorname{tr}(g)=0\) in \(\mathbb{F}_{\ell}\) for any \(g\in G_{\ell}\smallsetminus M_{\ell}\). Consequently, \(\ell\) is governed by \(\phi\) (in the language of Section 3.2), so \(\ell\) is added to PossiblyNonsurjectivePrimes in Step (4) by Proposition 3.14. ### Bounds on Serre's open image theorem In this section we combine the theoretical worst case bounds in the Algorithms 3.3, 3.6, 3.10, and 3.13 to give a bound on the smallest surjective good prime \(q\), and the product of all nonsurjective primes, thereby establishing Theorem 1.2. **Corollary 3.16**.: _Let \(A/\mathbb{Q}\) be a typical genus \(2\) Jacobian of conductor \(N\). Assuming GRH, we have_ \[\prod_{\ell\text{ nonsurjective}}\ell\ll\exp(N^{1/2+\epsilon}),\] _where the implied constant is absolute and effectively computable._ Proof.: Let \(q\) be the smallest surjective good prime for \(A\), which is finite by Serre's open image theorem. Multiplying the bounds in Propositions 3.5, 3.8, 3.12, and 3.15 by the conductor \(N\), the product of all nonsurjective primes is bounded by a function of \(q\) and \(N\) of the following shape \[\prod_{\ell\text{ nonsurjective}}\ell\ll q^{N^{1/2+\epsilon}}. \tag{8}\] On the other hand, since \(q\) is the smallest surjective prime by definition, the product of all primes less than \(q\) divides the product of all nonsurjective primes. Using [10, Lemme 11], we have \[\exp(q)\ll\prod_{\ell<q}\ell\leq\prod_{\ell\text{ nonsurjective}}\ell\ll q^{N^{1/2+ \epsilon}}.\] Combining the first and last terms, we have \(q\ll N^{1/2+\epsilon}\log(q)\), whence \(q\ll N^{1/2+\epsilon}\). Plugging this back into (8) yields the claimed bound. ## 4. Testing surjectivity of \(\rho_{A,\ell}\) In this section we establish Theorem 1.1(2). The goal is to weed out any extraneous nonsurjective primes in the output \(\mathsf{PossiblyNonsurjectivePrimes}\) of Algorithm 3.1 to produce a smaller list \(\mathsf{LikelyNonsurjectivePrimes}(B)\) containing all nonsurjective primes (depending on a chosen bound \(B\)) by testing the characteristic polynomials of Frobenius elements up to the bound \(B\). If \(B\) is sufficiently large (quantified in Section 5), the list \(\mathsf{LikelyNonsurjectivePrimes}(B)\) is provably the list of nonsurjective primes. **Algorithm 4.1**.: _Given an integer \(B\) and the output \(\mathsf{PossiblyNonsurjectivePrimes}\) of Algorithm 3.1 run on the typical hyperelliptic genus \(2\) curve with equation \(y^{2}+h(x)y=f(x)\), output a sublist \(\mathsf{LikelyNonsurjectivePrimes}(B)\) of \(\mathsf{PossiblyNonsurjectivePrimes}\) as follows._ 1. _Initialize_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\) _as_ \(\mathsf{PossiblyNonsurjectivePrimes}\)_._ 2. _Remove_ \(2\) _from_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\) _if the size of the Galois group of the splitting field of_ \(4f+h^{2}\) _is_ \(720\)_._ 3. _For each good prime_ \(p<B\)_, while_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\) _is nonempty:_ 1. _Compute the integral characteristic polynomial_ \(P_{p}(t)\) _of_ \(\mathrm{Frob}_{p}\)_._ 2. 
_For each prime_ \(\ell\) _in_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\)_, run Tests_ 4.4_(i)_,_ (ii)_, and_ (iii) _on_ \(P_{p}(t)\) _to rule out_ \(\rho_{A,\ell}(G_{\mathbb{Q}})\) _being contained in one of the exceptional maximal subgroups._ 3. _For each prime_ \(\ell\) _in_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\)_, run Tests_ 4.5_(i) _and_ (ii) _on_ \(P_{p}(t)\) _to rule out_ \(\rho_{A,\ell}(G_{\mathbb{Q}})\) _being contained in one of the nonexceptional maximal subgroups._ 4. _For a given prime_ \(\ell\)_, if each of the 5 tests Tests_ 4.4_(i)-(iii) _and Tests_ 4.5_(i)-(ii) _have succeeded for some prime_ \(p\)_, remove_ \(\ell\) _from_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\)_._ 4. _Return_ \(\mathsf{LikelyNonsurjectivePrimes}(B)\)_._ _Remark 12_.: In our implementation of Step 3 of this algorithm, we have chosen to only use primes \(p\) of good reduction for the curve as auxiliary primes, which is a stronger condition than being a good prime for the Jacobian \(A\). More precisely, the primes that are good for the Jacobian but bad for the curve are precisely the prime factors of the discriminant \(4f+h^{2}\) of a minimal equation for the curve that do not divide the conductor \(N_{A}\) of the Jacobian. At such a prime, the reduction of the curve consists of two elliptic curves \(E_{1}\) and \(E_{2}\) intersecting transversally at a single point. Since there are many auxiliary primes \(p<B\) to choose from, excluding bad primes for the curve is not a serious restriction, but allows us to access the characteristic polynomial of Frobenius directly by counting points on the reduction of the curve. This is not strictly necessary: one could use the characteristic polynomials of Frobenius for the elliptic curves \(E_{1}\) and \(E_{2}\), which can be computed using the genus2reduction module of SageMath. We briefly summarize the contents of this section. In Section 4.1, we first prove a purely group-theoretic criterion for a subgroup of \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) to equal the whole group. Then in Section 4.2, we explain Test 4.4 and Test 4.5, whose validity follows immediately from Lemma 2.4(3) and Proposition 4.2 respectively. The main idea of these tests is to use auxiliary good primes \(p\neq\ell\) to generate characteristic polynomials in the image of \(\rho_{A,\ell}\). If we find enough types of characteristic polynomials to rule out each proper maximal subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) (cf. Proposition 4.2), then we can conclude that \(\rho_{A,\ell}\) is surjective. In Section 4.3, we prove Theorems 1.1(2) and 1.3 that justify this algorithm. ### A group-theoretic criterion We now use the classification of maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) described in Section 2.4 to deduce a group-theoretic criterion for a subgroup \(G\) of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) to be the whole group. This is analogous to [10, Proposition 19 (i)-(ii)]. **Proposition 4.2**.: _Fix a prime \(\ell\neq 2\) and a subgroup \(G\subseteq\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) with surjective similitude character. Assume that \(G\) is not contained in one of the exceptional maximal subgroups described in Lemma 2.3(4). Then \(G=\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) if and only if there exists matrices \(X,Y\in G\) such that_ 1. _the characteristic polynomial of_ \(X\) _is irreducible; and_ 2. 
\(\operatorname{trace}Y\neq 0\) _and the characteristic polynomial of_ \(Y\) _has a linear factor with multiplicity one._ Proof.: The 'only if' direction follows from Proposition 5.1 below, where we show that a _nonzero_ proportion of elements of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) satisfy the conditions in (a) and (b). Now assume that the group \(G\) has elements \(X\) and \(Y\) as in the statement of the proposition. We have to show that \(G=\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\). By assumption, \(G\) is not a subgroup of a maximal subgroup of type (4). For each of the remaining types of maximal subgroups in Lemma 2.3, we will use one of the elements \(X\) or \(Y\) to rule out \(G\) being contained in a subgroup of that type. 1. By Lemma 2.2 (iv), every element of a subgroup of type (1) has a reducible characteristic polynomial. The same is true for elements of type (3) by Lemma 2.4 (2). This is violated by the element \(X\), so \(G\) cannot be contained in a subgroup of type (1) or type (3). 2. Recall the notation used in the description of a type (2) maximal subgroups in Lemma 2.3. By Lemma 2.41a, every element in \(G_{\ell}\setminus M_{\ell}\) has trace \(0\). By Lemma 2.2 (iii), an element with irreducible characteristic polynomial automatically has nonzero trace. Hence both \(X\) and \(Y\) have nonzero trace, and so cannot be contained in \(G_{\ell}\setminus M_{\ell}\). We now consider two cases 1. If the two lines are individually defined over \(\mathbb{F}_{\ell}\), then every element in \(M_{\ell}\) preserves a two-dimensional subspace and hence has a reducible characteristic polynomial. This is violated by the element \(X\). 2. If the two lines are permuted by \(G_{\mathbb{F}_{\ell}}\), then the action of \(M_{\ell}\) on the corresponding subspaces \(V\) and \(V^{\prime}\) are conjugate. Therefore, every \(\mathbb{F}_{\ell}\)-rational eigenvalue for the action of \(\operatorname{Frob}_{p}\) on \(V\), also appears as an eigenvalue for the action on \(V^{\prime}\), with the same multiplicity. This is violated by the element \(Y\). Hence \(G\) cannot be contained in a maximal subgroup of type (2). Since any subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) that is not contained in a proper maximal subgroup of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) must equal \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\), we are done. _Remark 13_.: [1, Corollary 2.2] gives a very similar criterion for a subgroup \(G\) of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) to contain \(\operatorname{Sp}_{4}(\mathbb{F}_{\ell})\), namely that it contains a transvection, and also an element with irreducible characteristic polynomial (and hence automatically nonzero trace). ### Surjectivity tests #### 4.2.1. Surjectivity test for \(\ell=2\) **Proposition 4.3**.: _Let \(A\) be the Jacobian of the hyperelliptic curve \(y^{2}+h(x)y=f(x)\) defined over \(\mathbb{Q}\). Then \(\rho_{A,2}\) is surjective if and only if the size of the Galois group of the splitting field of \(4f+h^{2}\) is \(720\)._ Proof.: This follows from the fact that \(\operatorname{GSp}_{4}(\mathbb{F}_{2})\cong S_{6}\) which is a group of size \(720\), and that the representation \(\rho_{A,2}\) is the permutation action of the Galois group on the six roots of \(4f+h^{2}\). #### 4.2.2. Surjectivity tests for \(\ell\neq 2\) The tests to rule out the exceptional maximal subgroups rely on the existence of the finite lists \(C_{1920}\) and \(C_{720}\) (independent of \(\ell\)), and \(C_{7,5040}\) given in Lemma 2.4(3). 
**Test 4.4** (Tests for ruling out exceptional maximal subgroups of \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) for \(\ell\neq 2\)).: Given a polynomial \(P_{p}(t)=t^{4}-a_{p}t^{3}+b_{p}t^{2}-pa_{p}t+p^{2}\) and \(\ell\geq 2\), 1. \(P_{p}(t)\) passes Test 4.4 (i) if \(\ell\equiv\pm 1\pmod{8}\) or \((a_{p}^{2}/p,b_{p}/p)\bmod\ell\) lies outside of \(C_{1920}\bmod\ell\). 2. \(P_{p}(t)\) passes Test 4.4 (ii) if \(\ell\equiv\pm 1\pmod{12}\) or \((a_{p}^{2}/p,b_{p}/p)\bmod\ell\) lies outside of \(C_{720}\bmod\ell\). 3. \(P_{p}(t)\) passes Test 4.4 (iii) if \(\ell\neq 7\) or \((a_{p}^{2}/p,b_{p}/p)\bmod\ell\) lies outside of \(C_{7,5040}\). **Test 4.5** (Tests for ruling out non-exceptional maximal subgroups for \(\ell\neq 2\)).: Given a polynomial \(P_{p}(t)=t^{4}-a_{p}t^{3}+b_{p}t^{2}-pa_{p}t+p^{2}\) and \(\ell\geq 2\), 1. \(P_{p}(t)\) passes Test 4.5 (i) if \(P_{p}(t)\) modulo \(\ell\) is irreducible. 2. \(P_{p}(t)\) passes Test 4.5 (ii) if \(P_{p}(t)\) modulo \(\ell\) has a linear factor of multiplicity \(1\) and has nonzero trace. For any one of the five tests above, say that the test succeeds if a given polynomial \(P_{p}(t)\) passes the corresponding test. _Remark 14_.: We call an auxiliary prime \(p\) a witness for a given prime \(\ell\) if the polynomial \(P_{p}(t)\) passes one of our tests for \(\ell\). The verbose output of our code prints witnesses for each of our tests for each prime \(\ell\) in PossiblyNonsurjectivePrimes but not in LikelyNonsurjectivePrimes\((B)\). ### Justification for surjectivity tests Considering Tests 4.4 and 4.5, we define \[C_{\alpha}=\{M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell}):P_{M}(t)\text{ is irreducible}\}\] \[C_{\beta}=\{M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell}):\operatorname{tr}(M)\neq 0\text{ and }P_{M}(t)\text{ has a linear factor of multiplicity }1\}\] \[C_{\gamma_{1}}=\left\{M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell}):\left(\frac{\operatorname{tr}(M)^{2}}{\operatorname{mult}(M)},\frac{\operatorname{mid}(M)}{\operatorname{mult}(M)}\right)\notin C_{\ell,1920}\text{ or }\ell\equiv\pm 1\pmod{8}\right\}\] \[C_{\gamma_{2}}=\left\{M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell}):\left(\frac{\operatorname{tr}(M)^{2}}{\operatorname{mult}(M)},\frac{\operatorname{mid}(M)}{\operatorname{mult}(M)}\right)\notin C_{\ell,720}\text{ or }\ell\equiv\pm 1\pmod{12}\right\}\] \[C_{\gamma_{3}}=\left\{M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell}):\left(\frac{\operatorname{tr}(M)^{2}}{\operatorname{mult}(M)},\frac{\operatorname{mid}(M)}{\operatorname{mult}(M)}\right)\notin C_{\ell,5040}\text{ or }\ell\neq 7\right\}\] \[C_{\gamma}=C_{\gamma_{1}}\cap C_{\gamma_{2}}\cap C_{\gamma_{3}}.\] Proof of Theorem 1.1(2) and Theorem 1.3.: Let \(B>0\). Since LikelyNonsurjectivePrimes\((B)\) is a sublist of PossiblyNonsurjectivePrimes, which contains all nonsurjective primes by Theorem 1.1(1), any prime not in PossiblyNonsurjectivePrimes is surjective. Now consider \(\ell\in\operatorname{PossiblyNonsurjectivePrimes}\) and **not** in LikelyNonsurjectivePrimes\((B)\). If \(\ell=2\), then by Proposition 4.3, \(\rho_{A,2}\) is surjective.
If \(\ell>2\), this means that we found primes \(p_{1},p_{2},p_{3},p_{4},p_{5}\leq B\), each distinct from \(\ell\) and of good reduction for \(A\), for which \(\rho_{A,\ell}(\operatorname{Frob}_{p_{1}})\in C_{\alpha}\), \(\rho_{A,\ell}(\operatorname{Frob}_{p_{2}})\in C_{\beta}\), \(\rho_{A,\ell}(\operatorname{Frob}_{p_{3}})\in C_{\gamma_{1}}\), \(\rho_{A,\ell}(\operatorname{Frob}_{p_{4}})\in C_{\gamma_{2}}\), and \(\rho_{A,\ell}(\operatorname{Frob}_{p_{5}})\in C_{\gamma_{3}}\). Note that by (1), the similitude factor \(\operatorname{mult}(\rho_{A,\ell}(\operatorname{Frob}_{p}))\) is \(p\). Therefore, by Lemma 2.4(3), it follows that \(\rho_{A,\ell}(G_{\mathbb{Q}})\) is not contained in an exceptional maximal subgroup. The surjectivity of \(\rho_{A,\ell}\) now follows from Proposition 4.2. Finally, we will show that if \(B\) is sufficiently large (as quantified by Theorem 1.3), then any prime \(\ell\) remaining in LikelyNonsurjectivePrimes\((B)\) is nonsurjective. Since the sets \(C_{\alpha}\), \(C_{\beta}\), \(C_{\gamma_{1}}\), \(C_{\gamma_{2}}\) and \(C_{\gamma_{3}}\) are nonempty by Proposition 5.1 below and closed under conjugation, it follows from Lemma 2.10 that there exist primes \(p_{1},p_{2},p_{3},p_{4},p_{5}\leq B\) as above. _Remark 15_.: If we assume both GRH and AHC, Ram Murty and Kumar Murty [14, p. 52] noted (see also [13, Theorem 2.3]) that the bound (4) can be replaced with \(p\ll\frac{(\log d_{K})}{|S|}\). Proposition 5.1, which follows, shows that the sets \(C_{\alpha}\), \(C_{\beta}\), and \(C_{\gamma}\) have size at least \(\frac{|\mathrm{GSp}_{4}(\mathbb{F}_{\ell})|}{10}\). This can be used to prove the ineffective, AHC-dependent version of Theorem 1.3 noted in the introduction, in a manner similar to the proof of Theorem 1.3. ## 5. The probability of success In this section we prove Theorem 1.4 by studying the probability that a matrix chosen uniformly at random from \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) is contained in each of \(C_{\alpha}\), \(C_{\beta}\), and \(C_{\gamma}\) defined in Section 4.3. Let \(\alpha_{\ell}\), \(\beta_{\ell}\), and \(\gamma_{\ell}\) respectively be the probabilities that a matrix chosen uniformly at random from \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) is contained in \(C_{\alpha}\), \(C_{\beta}\), or \(C_{\gamma}\). **Proposition 5.1**.: _Let \(M\) be a matrix chosen uniformly at random from \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) with \(\ell\) odd. Then_ 1. _The probability that_ \(M\in C_{\alpha}\) _is given by_ \[\alpha_{\ell}=\frac{1}{4}-\frac{1}{2(\ell^{2}+1)}.\] 2. _The probability that_ \(M\in C_{\beta}\) _is given by_ \[\beta_{\ell}=\frac{3}{8}-\frac{3}{4(\ell-1)}+\frac{1}{2(\ell-1)^{2}}.\] 3. _The probability that_ \(M\in C_{\gamma}\) _is_ \[\gamma_{\ell}\geq 1-\frac{3\ell}{\ell^{2}+1}.\] _Remark 16_.: Magma code that directly verifies the sizes of \(C_{\alpha},C_{\beta},C_{\gamma}\) (i.e. computes \(\alpha_{\ell},\beta_{\ell},\gamma_{\ell}\)) for small \(\ell\) may be found in helper_scripts/SanityCheckProbability.m in the repository. [15] characterizes all conjugacy classes of elements of \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) for \(\ell\) odd, grouping them into \(26\) different types.
For each type \(\gamma\), Shinoda further computes the number \(N(\gamma)\) of conjugacy classes of type \(\gamma\) and the size of the centralizer \(|C_{\mathrm{GSp}_{4}(\mathbb{F}_{\ell})}(\gamma)|\), which is the size of the centralizer \(|C_{\mathrm{GSp}_{4}(\mathbb{F}_{\ell})}(M)|\) of \(M\) in \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) for any \(M\) in a conjugacy class of type \(\gamma\). The size \(|C(\gamma)|\) of any conjugacy class of type \(\gamma\) can then easily be computed as \[|\mathcal{C}(\gamma)|=\frac{|\mathrm{GSp}_{4}(\mathbb{F}_{\ell})|}{|C_{\mathrm{ GSp}_{4}(\mathbb{F}_{\ell})}(\gamma)|}\] and the probability that a uniformly chosen \(M\in\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) has conjugacy type \(\gamma\) is then given by \[\frac{N(\gamma)|\mathcal{C}(\gamma)|}{|\mathrm{GSp}_{4}(\mathbb{F}_{\ell})|}= \frac{N(\gamma)}{|C_{\mathrm{GSp}_{4}(\mathbb{F}_{\ell})}(\gamma)|}. \tag{9}\] To prove Proposition 5.1, we will need to examine a handful of types of conjugacy classes of \(\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\). There is only a single conjugacy type \(\gamma\) whose characteristic polynomials are irreducible. This type is denoted \(K_{0}\) in [15] where it is shown there that \(N(K_{0})=\frac{(\ell-1)(\ell^{2}-1)}{4}\) and \(|C_{\mathrm{GSp}_{4}(\mathbb{F}_{\ell})}(K_{0})|=(\ell-1)(\ell^{2}+1)\). While there is only one way for a polynomial to be irreducible, there are several ways for a quartic polynomial to have a root of odd order. However, only some of these can occur if \(f(t)\) is the characteristic polynomial of a matrix \(M\in\mathrm{GSp}_{4}(\mathbb{F}_{\ell})\) and we only need to concern ourselves with the following three possibilities: 1. \(f(t)\) splits completely over \(\mathbb{F}_{\ell}\); 2. \(f(t)\) has two roots over \(\mathbb{F}_{\ell}\), both of which occur with multiplicity one; and 3. \(f(t)\) has two simple roots and one double root over \(\mathbb{F}_{\ell}\). Cases (a) and (b) correspond to the conjugacy types \(H_{0}\) and \(J_{0}\) in [15] respectively. In contrast, there are two types of conjugacy classes for which \(f(t)\) has two simple roots and one double root, which are denoted \(E_{0}\) and \(E_{1}\) in [15]. The number of conjugacy classes and centralizer size for each of these conjugacy types is given by Table 2, along with the associated probability that a uniform random \(M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) has conjugacy type \(\gamma\) computed using (9). Proof of Proposition 5.1.: Part (i) is simply the entry in Table 2 in the last column corresponding to the "\(K_{0}\) (Irreducible)" type. We now establish part (ii). As indicated in the discussion above Table 2, the only conjugacy classes of matrices in \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) whose characteristic polynomials have some linear factors of odd multiplicity are those of the types \(H_{0},J_{0},E_{0},E_{1}\). However, for part (ii) since we are only interested in matrices \(M\) also having non-zero trace, it is insufficient to simply sum over the rightmost entries in the bottom four rows of Table 2. From [15, Table 2], we see that the elements of \(E_{0}\) and \(E_{1}\) have trace \(\frac{c(a+1)^{2}}{a}\) for some \(c,a\in\mathbb{F}_{\ell}^{\times}\) with \(a\neq\pm 1\). In particular, it follows that elements of types \(E_{0}\) and \(E_{1}\) have nonzero traces. The elements of \(J_{0}\) have trace \(\frac{(c+a)(c+a^{\ell})}{c}\) where \(c\in\mathbb{F}_{\ell}^{\times}\) and \(a\in\mathbb{F}_{\ell^{2}}\setminus\mathbb{F}_{\ell}\). 
Therefore, the elements of \(J_{0}\) also have nonzero trace. It remains to analyze which conjugacy classes of Type \(H_{0}\) have nonzero trace. Following [15], the \(\frac{(\ell-1)(\ell-3)^{2}}{8}\) conjugacy classes of type \(H_{0}\) correspond to quadruples of distinct elements in \(a_{1},a_{2},b_{1},b_{2}\in\mathbb{F}_{\ell}^{\times}\) satisfying \(a_{1}b_{1}=a_{2}b_{2}\) modulo the action of swapping any of \(a_{1}\) with \(b_{1}\), \(a_{2}\) with \(b_{2}\), or \(a_{1},b_{1}\) with \(a_{2},b_{2}\). The eigenvalues of any matrix in the conjugacy class are \(a_{1}\), \(a_{2}\), \(b_{1}\), and \(b_{2}\). Consequently the matrix has trace zero only if either \(a_{2}=-a_{1}\) and \(b_{2}=-b_{1}\) or \(b_{1}=-a_{2}\) and \(b_{2}=-a_{1}\). This accounts for \(\frac{(\ell-1)(\ell-3)}{4}\) of the \(\frac{(\ell-1)(\ell-3)^{2}}{8}\) conjugacy classes of type \(H_{0}\), leaving \(\frac{(\ell-1)(\ell-3)(\ell-5)}{8}\) conjugacy classes with non-zero trace. As a result, the probability that a matrix \(M\in\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) chosen uniformly at random has non-zero trace and totally split characteristic polynomial is \[\frac{(\ell-1)(\ell-3)(\ell-5)}{8(\ell-1)^{3}}=\frac{1}{8}-\frac{3}{4(\ell-1) }+\frac{1}{(\ell-1)^{2}}. \tag{10}\] To obtain part (ii), we add (10) to the entries in the rightmost column of the final three rows of Table 2, getting \[\left(\frac{1}{8}-\frac{3}{4(\ell-1)}+\frac{1}{(\ell-1)^{2}} \right)+\left(\frac{1}{4}-\frac{1}{2(\ell+1)}\right)+\left(\frac{1}{2\ell( \ell^{2}-1)}-\frac{1}{\ell(\ell-1)(\ell^{2}-1)}\right)+\left(\frac{1}{2\ell} -\frac{1}{\ell(\ell-1)}\right)\\ =\frac{3}{8}-\frac{3}{4(\ell-1)}+\frac{1}{2(\ell-1)^{2}}.\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline Type \(\gamma\) in [15] & \(N(\gamma)\) & \(|C_{\operatorname{GSp}_{4}(\mathbb{F}_{\ell})}(\gamma)|\) & Associated Probability \\ \hline \(K_{0}\) (Irreducible) & \(\frac{(\ell-1)(\ell^{2}-1)}{4}\) & \((\ell^{2}+1)(\ell-1)\) & \(\frac{1}{4}-\frac{1}{2(\ell^{2}+1)}\) \\ \hline \(H_{0}\) (Split) & \(\frac{(\ell-1)(\ell-3)^{2}}{8}\) & \((\ell-1)^{3}\) & \(\frac{1}{8}-\frac{1}{2(\ell-1)}+\frac{1}{2(\ell-1)^{2}}\) \\ \hline \(J_{0}\) (Two Simple Roots) & \(\frac{(\ell-1)^{3}}{4}\) & \((\ell+1)(\ell-1)^{2}\) & \(\frac{1}{4}-\frac{1}{2(\ell+1)}\) \\ \hline \(E_{0}\) (One Double Root) & \(\frac{(\ell-1)(\ell-3)}{2}\) & \(\ell(\ell-1)^{2}(\ell^{2}-1)\) & \(\frac{1}{2\ell(\ell^{2}-1)}-\frac{1}{\ell(\ell-1)(\ell^{2}-1)}\) \\ \hline \(E_{1}\) (One Double Root) & \(\frac{(\ell-1)(\ell-3)}{2}\) & \(\ell(\ell-1)^{2}\) & \(\frac{1}{2\ell}-\frac{1}{\ell(\ell-1)}\) \\ \hline \end{tabular} \end{table} Table 2. Number of conjugacy classes and centralizer sizes for each conjugacy class type in [15]. To prove (iii), we start by noting that for any pair \((u,v)\), the cardinality of the set \[\{t^{4}-at^{3}+bt^{2}-amt+m^{2}:a,b\in\mathbb{F}_{\ell},m\in\mathbb{F}_{\ell}^{ \times}\text{ and }\big{(}\tfrac{a^{2}}{m},\tfrac{b}{m}\big{)}=(u,v)\}\] is at most \(\ell-1\). By [1, Theorem 3.5], the number of matrices in \(\operatorname{GSp}_{4}(\mathbb{F}_{\ell})\) with a given characteristic polynomial is at most \((\ell+3)^{8}\). Assuming \(\ell\neq 7\), by combining these observations, and noting that \(|C_{\ell,720}\cup C_{\ell,1920}|\leq 14\), we obtain the bound \[\gamma_{\ell}\geq 1-\frac{14(\ell-1)(\ell+3)^{8}}{|\operatorname{GSp}_{4}( \mathbb{F}_{\ell})|}.\] For \(\ell>17\), this implies the claimed bound. For \(3\leq\ell\leq 17\), we directly check the claim using Magma. 
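The elementary identities used in this computation are easy to double-check symbolically. The following short script (our own sanity check, separate from the Magma script mentioned in Remark 16) verifies equation (10) and the summation above, assuming only the entries of Table 2:

```python
from sympy import symbols, Rational, simplify

l = symbols('l', positive=True)

# Probability that a uniform M in GSp_4(F_l) has totally split characteristic
# polynomial (type H_0) and nonzero trace, i.e. equation (10).
h0_nonzero_trace = (l - 1)*(l - 3)*(l - 5) / (8*(l - 1)**3)
rhs_10 = Rational(1, 8) - 3/(4*(l - 1)) + 1/(l - 1)**2
assert simplify(h0_nonzero_trace - rhs_10) == 0

# Remaining contributions, read off the last column of Table 2 (types J_0, E_0, E_1).
j0 = Rational(1, 4) - 1/(2*(l + 1))
e0 = 1/(2*l*(l**2 - 1)) - 1/(l*(l - 1)*(l**2 - 1))
e1 = 1/(2*l) - 1/(l*(l - 1))

# The claimed value of beta_l in Proposition 5.1(ii).
beta = Rational(3, 8) - 3/(4*(l - 1)) + 1/(2*(l - 1)**2)
assert simplify(rhs_10 + j0 + e0 + e1 - beta) == 0

print("identities in the proof of Proposition 5.1(ii) check out")
```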
**Lemma 5.2**.: _Let \(C/\mathbb{Q}\) be a typical genus \(2\) curve with Jacobian \(A\) and suppose \(\ell\) is an odd prime such that \(\rho_{A,\ell}\) is surjective. For any \(\epsilon>0\), there exists an effective constant \(B_{0}\) (with \(B_{0}>\ell N_{A}\)) such that for any \(B>B_{0}\) and each \(\delta\in\{\alpha,\beta,\gamma\}\), we have_ \[\left|\frac{|\{p\text{ prime}:B\leq p\leq 2B\text{ and }\rho_{A,\ell}(\operatorname{Frob}_{p})\in C_{ \delta}\}|}{|\{p\text{ prime}:B\leq p\leq 2B\}|}-\delta_{\ell}\right|<\epsilon.\] Proof.: Let \(G=\operatorname{Gal}(\mathbb{Q}(A[\ell])/\mathbb{Q})\) and \(S\subseteq G\) be any subset that is closed under conjugation. By taking \(B\) to be sufficiently large, we have that \(B>\ell N_{A}\) and can make \[\left|\frac{|\{p\text{ prime}:B\leq p\leq 2B\text{ and }\operatorname{Frob}_{p}\in S\}|}{|\{p \text{ prime}:B\leq p\leq 2B\}|}-\frac{|S|}{|G|}\right|\] arbitrarily small by (3). Moreover, the previous statement can be made effective by using an effective version of the Chebotarev density theorem. The result then follows because each of the sets \(C_{\alpha}\), \(C_{\beta}\), and \(C_{\gamma}\) is closed under conjugation. For positive integers \(n\) and \(B>\ell N_{A}\), let \(P(B,n)\) be the probability that \(n\) primes \(p_{1},\ldots,p_{n}\) (possibly non-distinct) chosen uniformly at random in the interval \([B,2B]\) have the property that \[\rho_{A,\ell}(\operatorname{Frob}_{p_{i}})\not\in C_{\alpha}\text{ for each }i\quad\text{or}\quad\rho_{A,\ell}(\operatorname{Frob}_{p_{i}})\not\in C_{ \beta}\text{ for each }i\quad\text{or}\quad\rho_{A,\ell}(\operatorname{Frob}_{p_{i}})\not \in C_{\gamma}\text{ for each }i.\] **Corollary 5.3**.: _Suppose \(C\) and \(\ell\) are as in Lemma 5.2 and let \(n\) be a positive integer. For any \(\epsilon>0\), there exists an effective constant \(B_{0}\) (with \(B_{0}>\ell N_{A}\)) such that for all \(B>B_{0}\), we have_ \[P(B,n)<(1-\alpha_{\ell})^{n}+(1-\beta_{\ell})^{n}+(1-\gamma_{\ell})^{n}+\epsilon.\] Proof.: For \(\delta\in\{\alpha,\beta,\gamma\}\), let \(X_{\delta}\) be the event that none of the \(\rho_{A,\ell}(\operatorname{Frob}_{p_{i}})\) are contained in \(C_{\delta}\). We then have \[P(X_{\alpha}\cup X_{\beta}\cup X_{\gamma})\leq P(X_{\alpha})+P(X_{\beta})+P(X_ {\gamma})\] The result then follows by Lemma 5.2, which shows that there exists a \(B_{0}\) such that the probabilities of \(X_{\alpha}\), \(X_{\beta}\), and \(X_{\gamma}\) can be made arbitrarily close to \((1-\alpha_{\ell})^{n}\), \((1-\beta_{\ell})^{n}\), and \((1-\gamma_{\ell})^{n}\) respectively. Proof of Theorem 1.4.: The claim made by Theorem 1.4 is that \(P(B,n)<3\cdot\big{(}\frac{9}{10}\big{)}^{n}\) for \(B\) sufficiently large. By Proposition 5.1, we have \(1-\alpha_{\ell}\leq\frac{4}{5}\), \(1-\beta_{\ell}\leq\frac{7}{8}\), and \(1-\gamma_{\ell}\leq\frac{9}{10}\) for all \(\ell\) odd. The result then follows from Corollary 5.3 because \(\big{(}\frac{4}{5}\big{)}^{n}+\big{(}\frac{7}{8}\big{)}^{n}+\big{(}\frac{9}{10} \big{)}^{n}<3\cdot\big{(}\frac{9}{10}\big{)}^{n}\). ## 6. Results of computation and interesting examples We report on the results of running our algorithm on a dataset of 1,823,592 typical genus \(2\) curves with conductor bounded by \(2^{20}\) that are part of a new dataset of approximately \(5\) million curves currently being prepared for addition into the LMFDB. 
Running our algorithm on all of these curves in parallel took about \(45\) hours on MIT's Lovelace computer (see the Introduction for the hardware specification of this machine). We first show in Table 3 how many of these curves were nonsurjective at particular primes, indicating also if this can be explained by the existence of a rational torsion point of that prime order. We found 31 as the largest nonsurjective prime, which occurred for the curve \[y^{2}+(x+1)y=x^{5}+23x^{4}-48x^{3}+85x^{2}-69x+45 \tag{11}\] of conductor \(7^{2}\cdot 31^{2}\) and discriminant \(7^{2}\cdot 31^{9}\) (the prime 2 was also nonsurjective here). The Jacobian of this curve does not admit a nontrivial rational 31-torsion point, so unlike many other instances of nonsurjective primes we observed, this one cannot be explained by the presence of rational torsion. One could ask if it might be explained by the existence of a \(\mathbb{Q}\)-rational 31-isogeny (as suggested by Algorithm 3.1, since 31 is returned by Algorithm 3.6). This seems to be the case - see forthcoming work of van Bommel, Chidambaram, Costa, and Kieffer [vBCCK22] where the isogeny class of this curve (among others) is computed. We also observed (see Table 4) that the vast majority of curves had less than 3 nonsurjective primes. Instructions for obtaining the entire results file may be found in the README.md file of the repository. _Remark 17_.: It would be interesting to know if there is a uniform upper bound on the largest prime \(\ell\) that could occur as a nonsurjective prime for the Jacobian of a genus 2 curve defined over \(\mathbb{Q}\) \begin{table} \begin{tabular}{|c|c|c|c|} \hline No. of nonsurj. primes & No. of curves & Example curve & Nonsurj. primes of example \\ \hline 0 & 211,620 & 743.a.743.1 & – \\ 1 & 1,455,473 & 1923.a.1923.1 & 5 (torsion) \\ 2 & 155,186 & 976.a.999424.1 & 2, 29(torsion) \\ 3 & 1,313 & 15876.a.15876.1 & 2, 3, 5 \\ \hline \end{tabular} \end{table} Table 4. Frequency count of nonsurjective primes in the dataset, with examples from the LMFDB dataset. \begin{table} \begin{tabular}{|c|c|c|c|} \hline nonsurj. prime & No. of curves w/ torsion & No. of curves w/o torsion & Example curve \\ \hline 2 & 1,100,706 & 462,616 & 464.a.464.1 \\ 3 & 79,759 & 98,750 & 277.a.277.2 \\ 5 & 12,040 & 10,809 & 16108.b.64432.1 \\ 7 & 1,966 & 2,213 & 295.a.295.2 \\ 11 & 167 & 210 & 4288.b.548864.1 \\ 13 & 108 & 310 & 439587.d.439587.1 \\ 17 & 22 & 61 & 1996.b.510976.1 \\ 19 & 10 & 20 & 1468.6012928 \\ 23 & 2 & 8 & 1696.1736704 \\ 29 & 1 & 5 & 976.999424 \\ 31 & 0 & 1 & 47089.1295541485872879 \\ \hline \end{tabular} \end{table} Table 3. Nonsurjective primes in the dataset, and whether they are explained by torsion, with examples from the LMFDB dataset if available, else a string of the form “conductor.discrimnant”. analogous to the conjectural bound of 37 for the largest nonsurjective prime for elliptic curves defined over \(\mathbb{Q}\) (see e.g. [1, Introduction]). As the example of (11) shows, this bound - if it exists - would have to be at least 31. We conclude with a few examples that illustrate where Algorithm 3.1 fails when the abelian surface has extra (geometric) endomorphisms. **Example 6.1**.: The Jacobian \(A\) of the genus \(2\) curve \(3125.\)a.\(3125.1\) on the LMFDB given by \(y^{2}+y=x^{5}\) has \(\operatorname{End}(A_{\mathbb{Q}})=\mathbb{Z}\) but \(\operatorname{End}(A_{\overline{\mathbb{Q}}})=\mathbb{Z}[\zeta_{5}]\). 
Let \(\phi\) be the Dirichlet character of modulus \(5\) defined by the Legendre symbol \[\phi\colon(\mathbb{Z}/5\mathbb{Z})^{\times}\to\{\pm 1\},\qquad 2\mapsto-1.\] In this case, Algorithm 3.13 fails to find an auxiliary prime \(p<1000\) for which \(a_{p}\neq 0\) and \(\phi(p)=-1\). This is consistent with the endomorphism calculation, since the trace of \(\rho_{A,\ell}(\operatorname{Frob}_{p})\) is \(0\) for all primes \(p\) that do not split completely in \(\mathbb{Q}(\zeta_{5})\), and any inert prime in \(\mathbb{Q}(\sqrt{5})\) automatically does not split completely in \(\mathbb{Q}(\zeta_{5})\). **Example 6.2**.: The modular curve \(X_{1}(13)\) (169.a.169.1) has genus \(2\) and its Jacobian \(J_{1}(13)\) has CM by \(\mathbb{Z}[\zeta_{3}]\) over \(\mathbb{Q}\). As in [14, Claim 2, page 45], for any prime \(\ell\) that splits as \(\pi\overline{\pi}\) in \(\mathbb{Q}(\zeta_{3})\), the representation \(J_{1}(13)[\ell]\) splits as a direct sum \(V_{\pi}\oplus V_{\overline{\pi}}\) of two \(2\)-dimensional subrepresentations that are dual to each other. (A similar statement holds for \(J_{1}(13)[\ell]\otimes_{\mathbb{F}_{\ell}}\overline{\mathbb{F}}_{\ell}\), and so this representation is never absolutely irreducible.) As expected, Algorithm 3.6 fails to find an auxiliary prime \(p<1000\) for which \(R_{p}\) is nonzero. **Example 6.3**.: The first (ordered by conductor) curve whose Jacobian \(J\) admits real multiplication over \(\overline{\mathbb{Q}}\) is the curve 529.a.529.1; indeed, this Jacobian is isogenous to the Jacobian of the modular curve \(X_{0}(23)\). Since there is a single Galois orbit of newforms - call it \(f\) - of level \(\Gamma_{0}(23)\) and weight \(2\), we have that \(J\) is isogenous to the abelian variety \(A_{f}\) associated to \(f\), and thus we expect the integer \(M_{\text{self-dual}}\) output by Algorithm 3.10 to be zero for any auxiliary prime, which is indeed the case.
2303.03745
At Your Fingertips: Extracting Piano Fingering Instructions from Videos
Piano fingering -- knowing which finger to use to play each note in a musical piece, is a hard and important skill to master when learning to play the piano. While some sheet music is available with expert-annotated fingering information, most pieces lack this information, and people often resort to learning the fingering from demonstrations in online videos. We consider the AI task of automating the extraction of fingering information from videos. This is a non-trivial task as fingers are often occluded by other fingers, and it is often not clear from the video which of the keys were pressed, requiring the synchronization of hand position information and knowledge about the notes that were played. We show how to perform this task with high-accuracy using a combination of deep-learning modules, including a GAN-based approach for fine-tuning on out-of-domain data. We extract the fingering information with an f1 score of 97\%. We run the resulting system on 90 videos, resulting in high-quality piano fingering information of 150K notes, the largest available dataset of piano-fingering to date.
Amit Moryossef, Yanai Elazar, Yoav Goldberg
2023-03-07T09:09:13Z
http://arxiv.org/abs/2303.03745v1
# At Your Fingertips: ###### Abstract Piano fingering--knowing which finger to use to play each note in a musical piece, is a hard and important skill to master when learning to play the piano. While some sheet music is available with expert-annotated fingering information, most pieces lack this information, and people often resort to learning the fingering from demonstrations in online videos. We consider the AI task of automating the extraction of fingering information from videos. This is a non-trivial task as fingers are often occluded by other fingers, and it is often not clear from the video which of the keys were pressed, requiring the synchronization of hand position information and knowledge about the notes that were played. We show how to perform this task with high-accuracy using a combination of deep-learning modules, including a GAN-based approach for fine-tuning on out-of-domain data. We extract the fingering information with an f1 score of 97%. We run the resulting system on 90 videos, resulting in high-quality piano fingering information of 150K notes, the largest available dataset of piano-fingering to date. Music Performance Analysis, Music Performance Datasets, Piano Fingering Analysis, Sound and Music Computing. ## I Introduction Learning to play the piano is a difficult task which takes years to master. One of the challenging aspects when learning a new piece is the fingering choice, in which the player has to choose what finger to use for each note. While beginner booklets contain many fingering suggestions, advanced pieces seldom do. Automatic prediction of piano-fingering can be a useful tool for piano learners, to ease the learning process of new pieces. As manually labeling fingerings for different sheet music is an exhausting and expensive task1, previous work [2, 3, 4, 5, 1] used very few tagged pieces for evaluation, with minimal or no training data. Footnote 1: [1] reported expert labeling time of 3-12 seconds per note. In this paper, we propose an automatic, low-cost method for detecting piano-fingering from piano playing performances captured on video, which allows training of modern, data-hungry neural networks. We introduce a novel pipeline that adapts and combines several deep learning methods such as Object-Detection [6, 7, 8, 9], Pose-Estimation [10, 11, 12], and GANs [13, 14], and results in an automatically labeled piano-fingering dataset. Our method serves two purposes: (1) an automatic "transcript" method that detects piano-fingering from video and MIDI files, when these are available, (Section \(\lx@sectionsign\)IV-A), and (2) running the method on dozens of videos, it yields a large dataset for training models generalizing to new pieces (Section \(\lx@sectionsign\)IV-B). Given a video and a MIDI file, our system produces a probability distribution over the fingers for each played note. Running this process on large corpora of piano performances played by different artists yields a total of 90 automatically finger-tagged pieces (containing 155,107 notes in total) resulting in the first public large scale piano-fingering dataset, which we name "_Automatic Piano Fingering Dataset_ (APFD)". This dataset can grow over time, as more videos are uploaded to YouTube. We provide empirical evidence that APFD is valuable, by manually inspecting its labels quality and its usefulness for predictions by fine-tuning a model on a manually created dataset, which achieves state-of-the-art results on a piano-fingering task (\(\lx@sectionsign\)IV-B). 
The process of extracting piano-fingering from videos alone is difficult as it needs to detect keyboard presses, which are often subtle for the naked eye. We therefore turn to MIDI files which were simultaneously recorded with videos to obtain this information. The extraction process works as follows: We begin by creating a virtual keyboard, and map each key to match their location on the video (\(\lx@sectionsign\)III-B). Then, we identify the hands (\(\lx@sectionsign\)III-C), and the fingers given the hands' bounding boxes (\(\lx@sectionsign\)III-D) for each frame of the video. Next, we align between the MIDI file and its corresponding video (\(\lx@sectionsign\)III-F) and finally, for every played note, we assign a distribution over the fingers that played it (\(\lx@sectionsign\)III-E). Even though methods for hand detection and pose estimation were extensively studied in the computer-vision literature Fig. 1: Illustration of the overall system output. We manage to automatically detect which fingers pressed each key of the piano. 15, 16, 10, 11, 12], we find that in practice, state-of-the-art models do not generalize, and perform poorly in our scenario. We address these weaknesses by fine-tuning an object detection model (SSIII-C) on a new dataset that we manually annotated, and introduce a novel algorithm for adapting models on out-of-domain data. We release this new hands dataset and the trained model in [17], alongside the rest of the resources developed in this work, and a demonstration of the output from the entire process in [https://youtu.be/Gfs1UWQhr5Q](https://youtu.be/Gfs1UWQhr5Q). ## II Background In this section, we present a literature review on the piano-fingering task, which was studied for many years. A shortcoming of most of these works is the lack of actual data for the different methods evaluation, which we aim to remedy by using our automatic method on videos. piano-fingering was previously studied in multiple disciplines, such as music theory and computer animation [2, 3, 4, 5, 18, 1]. Early works have mainly focused on search-based algorithms, with limited amounts of data and many unrealistic constraints. A recent work [1] is the first work to produce a reasonable-sized corpus of piano fingering, with rigorous labeling by multiple professional players. Although the size of this dataset is enough for training modern statistical models, it is still considered relatively small, and more data can significantly boost performance. The fingering prediction task is formalized as follows: Given a sequence of notes, associate each note with a finger from the set \(\{1,2,3,4,5\}\times\{L,R\}\). This is subject to constraints such as the positions of each hand, anatomical plausibility of transitioning between two fingers, the hands' size, etc. Each fingering sequence has a cost of "effort" which is desired to be minimized. One naive approach to finding the best optimal fingering sequence to play all the notes with is to exhaustively evaluate all possible transitions between one note to another which is not computationally feasible. By defining a transition matrix corresponding to the probability or "effort" of transitioning from one note to another - one can calculate a cost function, which specifies the predicted sequence likelihood. Using a search algorithm on top of the transitions allows to find a globally optimal solution. But this solution is not practical, due to the exponential complexity, therefore heuristics and pruning are employed to reduce the space complexity. 
The transition matrix can be manually defined by heuristics or personal estimation [2, 3], or instead of relying on a pre-defined set of rules, a Hidden Markov Model (HMM) can be used to learn the transitions [19, 20]. In practice, [19] leave the parameter learning to future work, and instead they manually fine-tune the transition matrix. On top of the transition matrix, practitioners suggested using dynamic programming algorithms to solve the search [3]. Another option to solve the huge search space is to use a search algorithm such as Tabu search [21] or variable neighborhood search [22], to find a global plausible solution [23, 24]. These works are either limited by the defined transition rules, or by making different assumptions to facilitate the search space. Such assumptions come in the form of limiting the predictions to a single hand, limiting the evaluation pieces to contain no chords, rests or substantial lengths during which player can take their hand off the keyboard. Furthermore, all of these works have small evaluation sets, which in practice makes it hard to compare different approaches, and does not allow to use more advanced models such as neural networks. Previous works also constrained their models with manually defined rules that try to imitate human behaviour as these methods require lots of training data to achieve good performance, and fingering labeling is particularly expensive and hard to obtain. One way to automatically gather rich fingering data with full hand pose estimation is by using motion capture (MOCAP) gloves when playing the piano. [18] suggest a rule-based and data-based hybrid method, initially estimating fingering decisions using a Directed Acyclic Graph (DAG) based on rule-based comfort constraints which are smoothed using data recorded from limited playing sessions with motion capture gloves. As MOCAP requires special equipment and may affect the comfort of the player, other work, [25] tried to automatically detect piano fingering from video and MIDI files. The pianist's fingernails were laid with colorful markers, which were detected by a computer vision program. As some occlusions can occur, they used some rules to correct the detected fingering. In practice, they implemented the system with a camera capturing only 2 octaves (out of 8) and performed a very limited evaluation. The rules they used are simple (such as: restricting one finger per played note, two successive notes cannot be played with the same finger), but far from capturing real-world scenarios. Previous methods for automatically collecting data [25, 18] were costly, as apart of the equipment needed to play a piece, the data-collectors had to pay the participating pianists. In our work, we rely solely on videos from YouTube, meaning costs remain minimal with the ability to easily scale up to new videos. Recently, [1] released a relatively large dataset of manually labeled piano-fingering by one to six annotators, consisting of 150 pieces, with partially annotated scores (324 notes per piece on average) with a total of 48,726 notes matched with 100,044 tags from multiple annotators, named PIG. This is the largest annotated piano-fingering corpus to date and a valuable resource for this field. The authors propose multiple methods for modeling the task of piano-fingering, including HMMs and neural networks, and report the best performance with an HMM-based model. 
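To make the search formulation above concrete, the following sketch decodes a fingering for a single hand by dynamic programming once some transition cost has been fixed. It illustrates the general recipe rather than reproducing code from any of the cited works; the toy cost function and the MIDI-pitch inputs are placeholder assumptions.

```python
import numpy as np

def decode_fingering(notes, transition_cost):
    """Viterbi-style decoding: pick the fingering sequence (one hand,
    fingers 1..5) that minimizes the summed transition cost.

    notes           -- list of MIDI pitches, e.g. [60, 62, 64, ...]
    transition_cost -- function (pitch_a, finger_a, pitch_b, finger_b) -> float
    """
    n, fingers = len(notes), 5
    cost = np.zeros((n, fingers))           # best cost ending with finger f at note i
    back = np.zeros((n, fingers), dtype=int)

    for i in range(1, n):
        for f in range(fingers):
            cands = [cost[i - 1, g] + transition_cost(notes[i - 1], g + 1, notes[i], f + 1)
                     for g in range(fingers)]
            back[i, f] = int(np.argmin(cands))
            cost[i, f] = cands[back[i, f]]

    # Backtrack the globally optimal path.
    path = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(back[i, path[-1]])
    return [f + 1 for f in reversed(path)]

# Toy cost: penalize pitch jumps that are poorly matched by the finger change.
toy_cost = lambda pa, fa, pb, fb: abs((pb - pa) - 2 * (fb - fa))
print(decode_fingering([60, 62, 64, 65, 67], toy_cost))
```

With an HMM, negative log transition probabilities would play the role of `transition_cost`, and the same decoding applies.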
In this work, we use PIG as a gold standard for comparison and show how a baseline model pre-trained on our dataset, can improve and achieve state-of-the-art results when fine-tuned on PIG. ## III Our Approach: Extracting Fingering from Online Videos There is a genre of online videos in which people upload piano performances where both the piano and the hands are visible. On some channels, people not only include the video but also the MIDI file recorded while playing the piece. We propose an algorithm for detecting the piano-keyboard, and the fingers playing the different notes, and estimating for each note the fingers that pressed the matching piano key. This process is applied on an entire video, which makes a global inference over the notes. Running this algorithm on multiple videos, in turn, enables the creation of a large dataset of pieces and their fingering information. The final output we produce is demonstrated in Figure 1, where we colored both the fingers and the played notes based on the pose-estimation model (SSIII-D) and the predicted fingers that played them (SSIII-E). Note that the ring fingers of both hands as well as the index finger of the left hand and the middle finger of the right hand do not press any note in this particular frame, but may play a note in others. We get the information of played notes from the MIDI events. ### _Data Source_ We extract videos from youtube.com, played by different piano players on a specific channel containing both video and MIDI files. In these videos, the piano is filmed in an angle horizontal to the keyboard, from which both the keyboard and hands are displayed (as can be seen in Figure 1). **MIDI files**: Musical Instrument Digital Interface (MIDI) is a standard format for the interchange of musical information between electronic musical instruments. It consists of a sequence of events describing actions to carry out. In the setup of piano recording, it records what note was played in what time, for how long and its pressure strength (velocity). We only use videos that come along with MIDI files, as we use it as the source for the played notes and their timestamp.2 Footnote 2: A potential method for increasing the amount of data would be to use a Wurz/MIDI system, which converts audio signals to MIDI, but these systems achieve poor performances on real-world scenarios [26, 27, 28]. ### _Keyboard and Boundaries Detection_ To allow a correct fingering assignment, we first have to find the keyboard and the bounding boxes of the keys. We detect the keyboard as the largest continuous bright area in the video and identify key boundaries using standard image processing techniques, taking into account the expected number of keys their predictable location and clear boundaries. For robustness and in order to handle the interfering hands that periodically hide parts of the piano, we combine information from multiple random frames by averaging the predictions from each frame. ### _Hands Detection_ A straightforward approach for getting fingers locations in an image is to use a pose estimation model directly on the entire image. In practice, common methods for full-body pose estimation such as OpenPose [29] containing hand pose estimation [11], make assumptions about the wrist and elbow locations to automatically approximate the hands' locations. In the case of piano playing, the elbow does not appear in the videos, therefore these systems don't work. 
Instead, we turn to a two-steps approach where we start by detecting the hands, crop them, and pass the hand frames to a pose estimation model. Object Detection [6, 7, 8, 9], and specifically Hand Detection [15, 30, 11] are well studied subjects. However, we found results to be hard to replicate, due to missing code, compilation problems, private datasets and, most importantly, different lenient assumptions (e.g. no separation between left and right hands). Therefore, we created a small dataset with random frames from different videos, corresponding to 476 hands in total, evenly split between left and right3. We then fine-tuned a pre-trained object detection model (Inception v2 [32], based on Faster R-CNN [33], trained on COCO challenge [34]) on our new dataset. The fine-tuned model works well and some hand detection bounding boxes are presented in Figure 1. Footnote 3: The data was labeled by the first author using _labelImg_[31]. Having an accurate hand-detection model, we perform hand detection on every frame in the video. If more than two hands are detected, we choose the two bounding boxes with the highest probability. When two hands are detected with the same label ("left-left" or "right-right"), we discard the model's label, and instead choose the leftmost bounding box to have the label "left" and the other to have the label "right" - which is the most common position of hands on the piano. ### _Finger Pose Estimation_ Having the bounding box of each hand is not sufficient, as in order to assign fingers to notes we need the hand's pose. How can we detect fingers that pressed the different keys? We turn to pose estimation, a well-studied subject in computer vision [10, 11, 12] in order to tackle this part. **Domain mismatch**: The videos we use contain different visual effects, are usually very dark, high-contrast, and blurry due to motion blur. These are real-world conditions, which standard datasets and models rarely encounter [35, 36, 11]. Therefore off-the-shelf pose estimation models often fail in our scenario. Some failure examples are presented in Figure 2c where the first pose is estimated correctly, but the rest either have wrong finger positions, shorter, or broken fingers. **GAN-based adaptation**: Given the observation that pose-estimation works well on well-lit images, how can we adapt it to work as well for other lighting situations? We propose a new algorithm for adapting a pose-estimation model to out-of-domain inputs, described in Algorithm 1. We seek a mapping \(G:S\to T\) that transforms the source domain (the well-lit videos) into the target domain (the challenging-lit scenarios). We obtain this transformation function using a CycleGAN [14]. Then, we use a trained pose estimation model \(PE\) on the source inputs to get a silver tag \(\hat{y_{i}}\), use the transformation \(G\) to get a target-input \(\hat{x_{i}}\), and align the prediction to the adapted input, resulting in a tuple \((\hat{x_{i}},\hat{y_{i}})\). We use this procedure iteratively and collect silver data, which is then used to fine-tune the original pose-estimation model on. This algorithm results in a new model, which was fine-tuned on data from the target domain (in our case, extremely colorful images) and therefore is robust to this data. We note that this model's execution time is equivalent to the pre-trained pose-estimation model, as we use the transformation function offline, rather then using it for every prediction. 
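Concretely, the silver-data collection of Algorithm 1 can be summarized as the following loop. This is a sketch of the description above rather than the released implementation; `pose_model`, `cycle_gan_generator`, and `fine_tune` are placeholders for the actual pose estimator, the trained CycleGAN generator \(G\), and the fine-tuning routine.

```python
def adapt_pose_estimator(source_images, pose_model, cycle_gan_generator, fine_tune):
    """Domain adaptation via silver data (a sketch of Algorithm 1).

    source_images       -- iterable of well-lit frames (source domain S)
    pose_model          -- callable: image -> hand-keypoint annotations
    cycle_gan_generator -- callable G: S -> T, restyles a frame to the target domain
    fine_tune           -- callable: (model, list of (image, annotation)) -> new model
    """
    silver_data = []
    for x in source_images:
        y_hat = pose_model(x)               # silver label predicted on the well-lit frame
        x_hat = cycle_gan_generator(x)      # same frame re-rendered in the target domain
        silver_data.append((x_hat, y_hat))  # keypoints are unchanged by the style transfer
    # Fine-tune on the target-styled frames; inference cost stays that of pose_model,
    # since the generator is only used offline to build silver_data.
    return fine_tune(pose_model, silver_data)
```

In practice the loop is repeated with several CycleGAN checkpoints (see Footnote 4), which simply means calling the function once per generator and pooling the silver data.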
We also benefit from better generalization, as we keep good performance on the source domain and gain major performance on the target domain. This scenario resembles sim2real works [37, 38, 39] where one wants to transfer a model from simulations to the real world. These works learn a mapping function \(G^{\prime}:T\to S\) that transfers instances from the target domain \(T\) (the real world) into the source domain \(S\) (the simulation), where the mapping is usually achieved by employing a CycleGAN [14]. Then, models which are trained on the source domain are used on the transformation of the target domain \(G^{\prime}(x_{i})\) and manage to generalize to the target domain. We stress that we differ from these works conceptually, as our method uses the transformation from the source to the target domain, instead of the other way around. Furthermore, our method benefits from a performance boost, as at test time we only have to use the classification function, and not the transformation function. We employ Algorithm 1 on 21,746 images we automatically collected from videos, and use the convolutional-pose-machines pose estimation [10]4. Some examples can be seen in Figures 1(b) and 1(d), demonstrating the performance on different lighting scenarios. Footnote 4: In practice, instead of using the transformation by the CycleGAN once, we select 15 checkpoints of the model, and perform the augmentation with all of them. ### _Pressed Finger Estimation_ Given that we know which notes were pressed in any given frame (see §III-F below), there is uncertainty as to which finger pressed them. This uncertainty either comes from imperfect pose estimation, or from multiple fingers located on top of a single note. We wish to estimate the finger press by calculating the probability that a specific finger was used, given the hand pose and the pressed note: \(argmax_{i}\ P(x_{i}|n_{k},pose)\), where \(i\in[1,10]\) indexes the 10 fingers of both hands, \(n_{k}\in[1,88]\) corresponds to the played key, and \(pose\) corresponds to the hand pose. We model the probability of each finger being pressed as a Gaussian distribution where the mean \(\mu\) is the center of the key and the variance \(\sigma\) is its width. Each finger gets a score for how much it "fits" to play the given key, calculated by evaluating the probability density function at \(x_{i}\), the position of finger \(i\), on the key \(k\): \(g_{k}(x_{i})=\mathcal{N}(x_{i}|\mu_{k},\sigma_{k}^{2})\). Then, to derive a distribution over fingers, we normalize the scores across all fingers: \(P(x_{i}|n_{k},pose)=\frac{g_{k}(x_{i})}{\sum_{j}g_{k}(x_{j})}\). As most keyboard presses last more than one frame, we make use of multiple frames to overcome some of the errors from previous steps and to provide a more accurate prediction. To achieve this, we collect the frames that were used during a key press. We treat the first frame as the main signal point, and assign each successive frame an exponentially declining weight. Finally, we normalize the weighted sum of probabilities to achieve a probability distribution over all frames. In our dataset, we release all probabilities for each played note, along with the maximum likelihood finger estimation. We define the "confidence" score of the extraction from a single piece as the product of the highest probability for a finger for each note. ### _Video and MIDI Alignment_ We consider the MIDI and video files to be complementary, as they were recorded simultaneously.
The MIDI file is the source for key pressing info: what key was pressed, when, and for how long. The video is the source for the piano, hands, and fingers locations. These two sources are not synchronized, but as they depict the same piano performance, a perfect alignment exist (up to the video frequency resolution). We begin by extracting the audio track from the video, and treat the first audio peak as the beginning of the piece, clipping the prior part of the video and aligning it with the first event from the MIDI file. In practice, we find the starting point as the first signal point that surpasses a threshold. This heuristic achieves a reasonable alignment, but we observe some alignment mismatch of 80-200ms which affect the entire procedure performance. We tackle the misalignment by using the signal from the final system confidence (Section III-E), where every piece gets a confidence score from our system. We look for an alignment that maximizes the system confidence over the entire piece: \[align(MIDI,Vid)=argmax_{i}score(MIDI_{t_{0}},Vid_{t_{i}})\] where \(MIDI_{t_{0}}\) is the starting time of the MIDI file, and \(Vid_{t_{j}}\) is the alignment time of the video. \(Vid_{t_{0}}\) is obtained by the heuristic alignment described in the previous paragraph. We use the confidence score as a proxy of the alignment precision and search the alignment that maximizes the confidence score of the system. More specifically, given the initial offset from the audio-MIDI alignment, we take a window of 1 second in frames (usually 25) from each side and compute the score of the final system on the entire piece. We choose the offset that results in the best confidence score as the alignment offset. ### _The Resulting Dataset:_ APFD We follow the procedure described in this section, and use it to label 90 piano pieces from 42 different composers with 155,107 notes in total. On average, each piece contains 1,723 notes. ## IV Evaluation In this section, we provide two evaluations that show that (1) our system for detecting fingering from videos is accurate and (2) the dataset we achieved by repeating this process over dozens of videos is of good quality. ### _Finger Press Estimation Evaluation_ We manually annotated five random pieces from our dataset by marking the pressing finger for each played note in the video. Then, by using our system (SSIII-E), we estimate what finger was used for each key. We use the confidence score produced by the model as a threshold to use or discard the key prediction of the model, and report precision, recall and F1 scores of multiple thresholds. As discussed in Section III-D, current pose-estimation models perform poorly, and we propose a new method for domain adaptation. In Table I we report the entire system performance on the original pose-estimation [10] model (Pretrained) using multiple frames, as well as the results using fine-tuned, both by using just the first frame (FT Single Frame) and multiple frames (FT Multiple Frames), which performs best. When considering a high confidence score (\(>\)90%) both the pre-trained and fine-tuned models correctly mark all of the considered notes (which consist of between 34-36% of the data). However, when considering decreasing confidences5, our method achieves higher precision and higher recall. With no confidence threshold (i.e using all fingering predictions), the pre-trained model achieves 93% F1, while the fine-tuned one achieves 97% F1, a 57% error reduction. 
Footnote 5: For brevity we consider here only a threshold of 0.5, but we observe the same phenomenon with more thresholds. ### _Automatic Piano Fingering Prediction_ In Section IV-A we showed that our method for extracting piano-fingering from videos is accurate. By running this process on dozens of videos we created the largest dataset for piano-fingering to date. Is this dataset useful? We begin by training a standard sequence tagging model on APFD and perform evaluations on the subset we manually annotated and on PIG [1]. Then, by fine-tuning the pre-trained model on PIG, we achieve improvements in performance over the previous state-of-the-art which was trained solely on PIG, suggesting that our dataset is indeed accurate and useful. We model the piano-fingering as a sequence labeling task, where given a sequence of notes \(n_{1},n_{2},...,n_{n}\) we need to predict a sequence of fingering: \(y_{1},y_{2},...,y_{n}\), where \begin{table} \begin{tabular}{l|c c c c c|c c c} \hline Confidence & \multicolumn{3}{c|}{c \(>0.9\)} & \multicolumn{3}{c|}{c \(>0.5\)} & \multicolumn{3}{c}{c \(>0\)} \\ Metrics & p & \(r\) & \(\Omega\) & p & \(r\) & \(\Omega\) & p & \(r\) & \(\Omega\) \\ \hline Pretrained & **1.0** & 0.34 & 0.51 & 0.95 & 0.78 & 0.86 & 0.88 & **1.0** & 0.93 \\ FT Single Frame & 0.99 & **0.36** & **0.53** & 0.97 & **0.8** & **0.88** & 0.9 & **1.0** & 0.95 \\ FT Multiple Frames & **1.0** & 0.35 & 0.52 & **0.98** & **0.8** & **0.88** & **0.94** & **1.0** & **0.97** \\ \hline \end{tabular} \end{table} TABLE I: Precision, Recall, F1 scores for 6,894 notes we manually tagged across 5 different pieces, at different confidence points. The first row depict the performance of the system with the vanilla pose estimation. The second and the third rows present the results on the fine-tuned (FT) pose estimation model (§III-D) on a single and multiple frames accordingly. Fig. 2: Output of CycleGAN on 5 images, with 5 different checkpoints. The colors drawn on the hands depict their pose estimation. \(y_{i}\in\{1,2,3,4,5\}\) corresponding to 5 fingers of one hand. We employ a standard sequence tagging technique, by embedding each note and using a BiLSTM on top of it. On every contextualized note we then use a Multi-Layer-Perceptron (MLP) to predict the label. The model is trained to minimize cross-entropy loss. This is the same model used in [1], referred to as DNN (LSTM). For evaluation, we use the match rate between the prediction and the ground truth. For cases where there is a single ground truth, this is equivalent to accuracy measurement. When more than one labeling is available, we simply average the accuracies with each labeling.6 The results are summarized in Table II. Footnote 6: This matches the _general match rate_ evaluation metric in [1]. We begin by experimenting on our dataset. APFD is composed of 90 pieces, which we split into 75/10 for training and development sets respectively, and use the 5 manually annotated pieces as a test set. We note that the development set is silver data (automatically annotated) and is likely to contain mistakes. We run the model and achieve 73.2% accuracy. Next, to test how useful our data is to a real-world gold dataset we wish to inspect its usefulness with a transfer-learning approach. By first pre-training a simple model on our data and then fine-tune the model on PIG. If our data is indeed of quality we'd expect to see performance improvements on PIG, especially as this data is relatively small. 
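For concreteness, the tagger described in the previous paragraph amounts to a few lines of standard sequence-labeling code. The sketch below is a minimal PyTorch rendition of that architecture; the embedding and hidden sizes are illustrative choices, not the hyperparameters used in the experiments.

```python
import torch
import torch.nn as nn

class FingeringTagger(nn.Module):
    """Embed each note, contextualize with a BiLSTM, classify each note with an MLP."""

    def __init__(self, n_pitches=128, n_fingers=5, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_fingers),
        )

    def forward(self, notes):                  # notes: (batch, seq_len) pitch indices
        states, _ = self.lstm(self.embed(notes))
        return self.mlp(states)                # (batch, seq_len, n_fingers) logits

model = FingeringTagger()
notes = torch.randint(0, 128, (2, 30))         # two toy sequences of 30 notes
logits = model(notes)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), torch.randint(0, 5, (2 * 30,)))
loss.backward()                                # trained to minimize cross-entropy
```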
We begin by training the LSTM model solely on PIG,7 which results in 64.1% accuracy. This result is higher than the neural model of the same architecture that was reported by [1] (61.3%) but 0.4% lower than their best performing model (an HMM based). We continue by using the model trained on APFD and then fine-tune it on PIG, which leads to 66.8% accuracy, an improvement of 2.7 points over the previous SOTA. We attribute this gain in performance to our dataset, which both increases the number of training examples and allows to train bigger neural models which excel with more training examples. We also experiment in the opposite direction and fine-tune the model trained on PIG with our data, which result in 73.6% accuracy, which is better than training on our data alone, achieving 73.2% accuracy. Footnote 7: [1] didn’t use a development set, therefore in this work, we leave 1 piece from the training set and make it a development set. ## V Discussion and Future Work In this work, we present an automatic method for detecting piano-fingering from MIDI and video files of a piano performance. We employ this method on a large set of videos, and create the first large scale piano-fingering dataset, containing 90 unique pieces, with 155,107 notes in total. We show that this dataset--although noisy-is valuable, by pre-training a model on it and fine-tuning it on a gold dataset, where we achieve state-of-the-art results. In future work, we intend to improve the data collection by ameliorating the pose-estimation model to better handle the high speed movements and the proximity of the hands, which often cause errors in estimating their pose. Furthermore, we intend to design improved neural models that can take previous fingering predictions into account, in order to have a better global fingering transition.
2303.12890
Scale space radon transform-based inertia axis and object central symmetry estimation
Inertia Axes are involved in many techniques for image content measurement when involving information obtained from lines, angles, centroids... etc. We investigate, here, the estimation of the main axis of inertia of an object in the image. We identify the coincidence conditions of the Scale Space Radon Transform (SSRT) maximum and the inertia main axis. We show, that by choosing the appropriate scale parameter, it is possible to match the SSRT maximum and the main axis of inertia location and orientation of the embedded object in the image. Furthermore, an example of use case is presented where binary objects central symmetry computation is derived by means of SSRT projections and the axis of inertia orientation. To this end, some SSRT characteristics have been highlighted and exploited. The experimentations show the SSRT-based main axis of inertia computation effectiveness. Concerning the central symmetry, results are very satisfying as experimentations carried out on randomly created images dataset and existing datasets have permitted to divide successfully these images bases into centrally symmetric and non-centrally symmetric objects.
Aicha Baya Goumeidane, Djemel Ziou, Nafaa Nacereddine
2023-03-22T20:07:27Z
http://arxiv.org/abs/2303.12890v1
# Scale space radon transform-based inertia axis and object central symmetry estimation ###### Abstract Inertia Axes are involved in many techniques for image content measurement when involving information obtained from lines, angles, centroids... etc. We investigate, here, the estimation of the main axis of inertia of an object in the image. We identify the coincidence conditions of the Scale Space Radon Transform (SSRT) maximum and the inertia main axis. We show, that by choosing the appropriate scale parameter, it is possible to match the SSRT maximum and the main axis of inertia location and orientation of the embedded object in the image. Furthermore, an example of use case is presented where binary objects central symmetry computation is derived by means of SSRT projections and the axis of inertia orientation. To this end, some SSRT characteristics have been highlighted and exploited. The experimentations show the SSRT-based main axis of inertia computation effectiveness. Concerning the central symmetry, results are very satisfying as experimentations carried out on randomly created images dataset and existing datasets have permitted to divide successfully these images bases into centrally symmetric and non-centrally symmetric objects. SSRT, Radon Transform, moments, axis of inertia, central symmetry. ## 1 Introduction Computing the main axis of inertia can be very useful to characterize some aspects of the object. Indeed, it has been involved in many works focusing on symmetry evaluation like the works presented in [1, 2, 3, 4] or for pedestrian detection and for real time estimation of body posture purposes in [5] and in [6], respectively. In addition, the authors in [7] and in [8], have involved the main axis of inertia computation in medical applications for cross-sectional clinical images analysis purpose and for vessel centerline extraction in retinal image one. In industrial applications, computing axis of inertia has been used, for example, for image processing-based automatic monitoring of industrial process in [9] and for non destructive testing of wood in [10]. Other works proposed in [11, 12, 13] have made use of the axis of inertia to the aim of leaf plant description, ship characterization or finger print location. Recently, authors in [14] have proposed a transform called the Scale Space Radon Transform (SSRT), which can be viewed as a significant generalized form of the Radon transform. They have shown that this transform can be used to detect elegantly and accurately thick lines and ellipses through an embedded kernel tuned by a scale space parameter. When the scale space parameter is an appropriate one, the maximum of SSRT represents the centerlines of the linear/elliptical structures presented in the image [15, 16]. In this paper we propose to investigate the ability of the SSRT to provide, through its maximum in the SSRT space, the main axis of inertia of the processed object. Moreover, on the basis of some SSRT characteristics and of the axis of inertia orientation, as application, a method to check central symmetry of binary objects is presented. The remainder of this paper is organized as follows. In Sect.2, the SSRT and the main axis of inertia computation via geometric moments are introduced. In Sect.3, the relationship between the SSRT maximum and the geometric moments-based main axis of inertia parameters is highlighted. 
In Sect.4, computing object symmetry with respect to a point with the SSRT is presented on the basis of some SSRT properties that are demonstrated. Experimentations and results are provided in Sect.5. We finish the paper by drawing the main conclusions.

## 2 Methods and material

### Scale Space Radon Transform

The Scale Space Radon Transform (SSRT), \(\check{S_{f}}\), of an image \(f\), is a matching of a kernel and an embedded parametric shape in this image. If the parametric shape is a line parametrized by the location parameter \(\rho\) and the angle \(\theta\), and the kernel is a Gaussian one, then \(\check{S_{f}}\) is given by [14]

\[\check{S_{f}}(\rho,\theta,\sigma)=\frac{1}{\sqrt{2\pi}\sigma}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)e^{-\frac{(x\cos\theta+y\sin\theta-\rho)^{2}}{2\sigma^{2}}}dxdy \tag{1}\]

Here, \(\sigma\) is the scale space parameter. It follows that the Radon Transform (RT) \(\check{R_{f}}\) [17] is a special case of \(\check{S_{f}}\) (when \(\sigma\to 0\)) [14], as the matching between an embedded thin structure in an image and the Dirac distribution \(\delta\) of an implicit parametric shape in \(\check{R_{f}}\) is replaced, in \(\check{S_{f}}\), by a matching between an embedded parametric shape of arbitrary thickness in an image and a kernel tuned by a scale parameter. In fact, such a replacement allows the transform, through the kernel, to handle correctly embedded shapes even if they are not filiform, unlike \(\check{R_{f}}\). The role of the chosen kernel is to control the parametric shape position inside the embedded object via the scale parameter, and then the detection is reduced to maxima detection in the SSRT space. The directional Gaussian in (1), \(G_{\theta,\rho,\sigma}(x,y)=\frac{1}{\sqrt{2\pi}\sigma}e^{-(x\cos\theta+y\sin\theta-\rho)^{2}/2\sigma^{2}}\), can be viewed in Fig.1, where its cross section is the Gaussian \(g(\sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{-(x\cos\theta+y\sin\theta-\mu_{\rho})^{2}/2\sigma^{2}}\), whose mean value \(\mu_{\rho}\) belongs to the line \(\Delta\) with equation \(x\cos\theta+y\sin\theta-\rho=0\), as seen in the same figure.

Figure 1: Directional Gaussian \(G_{\theta,\rho,\sigma}\) used in SSRT computation and the unidimensional Gaussians \(g(\sigma)\).

Furthermore, the authors in [18] have shown that there is a relationship between RT and SSRT, expressed as follows

\[\check{S_{f}}(\rho,\theta,\sigma)=\big(\check{R_{f}}(\cdot,\theta)\ast g(\sigma)\big)(\rho) \tag{2}\]

where \(\ast\) denotes a one-dimensional convolution over \(\rho\) and \(g(\sigma)\) is the unidimensional Gaussian of scale \(\sigma\).

### Main axis of inertia computation via geometric moments

The main axis of inertia of the object embedded in \(f\) passes through its centroid

\[(x_{c},y_{c})=\left(\frac{\int_{\mathcal{X}}\int_{\mathcal{Y}}xf(x,y)dxdy}{\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)dxdy},\frac{\int_{\mathcal{X}}\int_{\mathcal{Y}}yf(x,y)dxdy}{\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)dxdy}\right) \tag{6}\]

and, with \(f\) normalized so that \(\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)dxdy=1\) and the geometric moments defined as \(m_{pq}=\int_{\mathcal{X}}\int_{\mathcal{Y}}x^{p}y^{q}f(x,y)dxdy\), its orientation \(\phi\) is given by

\[\tan 2\phi=\frac{2(m_{11}-m_{01}m_{10})}{m_{20}-m_{02}+m_{01}^{2}-m_{10}^{2}} \tag{7}\]

## 3 The Scale Space Radon Transform maximum and the main inertia axis

We show in the following that the SSRT maximum provides the main axis of inertia.
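Before turning to the derivation, note that (2) also gives a direct numerical recipe for \(\check{S_{f}}\) and the location of its maximum: compute a discrete Radon transform and smooth each projection along \(\rho\) with a 1-D Gaussian of scale \(\sigma\). The sketch below assumes scikit-image and SciPy are available and uses a synthetic bar image; it is only an illustration of (1)-(2), not code from [14] or [18].

```python
# Minimal sketch: evaluate the SSRT of an image via (2), i.e. a Radon transform
# followed by 1-D Gaussian smoothing of each projection along rho.
# Assumes scikit-image and SciPy; the test image and sigma are illustrative.
import numpy as np
from skimage.transform import radon
from scipy.ndimage import gaussian_filter1d

def ssrt(image, sigma, thetas=np.arange(180.0)):
    """Return the SSRT sinogram (rows: rho samples, columns: theta in degrees)."""
    sinogram = radon(image, theta=thetas)               # classical Radon transform R_f
    return gaussian_filter1d(sinogram, sigma, axis=0)   # convolution with g(sigma) over rho

# Example: locate the SSRT maximum of a synthetic thick horizontal bar.
img = np.zeros((101, 101))
img[45:56, 15:86] = 1.0                                 # 11 x 71 pixel bar
sigma = np.hypot(85 - 15, 55 - 45)                      # ~ largest distance between object points
S = ssrt(img, sigma)
rho_idx, theta_idx = np.unravel_index(np.argmax(S), S.shape)
# The maximizing line x*cos(theta) + y*sin(theta) = rho is the main axis of inertia (Sect. 3).
print("SSRT maximum at (rho index, theta in degrees):", rho_idx, theta_idx)
```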
So, to this end, we need to find the line parameters \(\hat{\theta}\) and \(\hat{\rho}\) maximizing \(\tilde{S_{f}}\) in (1). Under the continuity assumption of \(\tilde{S_{f}}\), \(\hat{\theta}\) and \(\hat{\rho}\) should be the solution of first order derivatives \(\frac{\partial\tilde{S_{f}}}{\partial\theta}=0\) and \(\frac{\partial\tilde{S_{f}}}{\partial\rho}=0\), respectively. Let begin by \(\theta\) \[\frac{\partial\tilde{S_{f}}}{\partial\theta}=\int_{\mathcal{X}}\int_{ \mathcal{Y}}f(x,y)\frac{\partial e^{-(x\cos\theta+y\sin\theta-\rho)^{2}/2\sigma ^{2}}}{\partial\theta}dxdy=0 \tag{8}\] Equation (8) is highly non linear in \(\theta\) and \(\rho\). However, the region of interest of this function is around the line \(z=x\cos\theta+y\sin\theta-\rho=0\). We propose, then, an approximation of the function \(g(z)=e^{-z^{2}/2\sigma^{2}}\) around z=0 by the Maclaurin series, \[g(z)\simeq\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{n!}\left(\frac{z^{2}}{2\sigma^{ 2}}\right)^{n}=1+\sum_{n=1}^{\infty}(-1)^{n}\frac{1}{n!}\left(\frac{z^{2}}{2 \sigma^{2}}\right)^{n} \tag{9}\] We prove in the following the convergence of the proposed approximation. The parameter \(\sigma\) being always greater than 1, as we will see later, the convergence of the serie in (9) is proven by the alternating series test, where \(g(z)\simeq\sum_{n=0}^{\infty}(-1)^{n}a_{n}\), as follows 1. As \(z\sim 0\) since the approximation is done around \(z=0\) and \(\sigma>1\), then, consequently \(\lim_{n\rightarrow+\infty}\frac{1}{n!}\left(\frac{z^{2}}{2\sigma^{2}}\right)^ {n}=0\). 2. \(a_{n+1}=\frac{1}{n+1!}(\frac{z^{2}}{2\sigma^{2}})^{n+1}=\frac{1}{n!}(\frac{z^{ 2}}{2\sigma^{2}})^{n}\frac{1}{n+1}\frac{z^{2}}{2\sigma^{2}}=\frac{z^{2}}{(n+1) 2\sigma^{2}}a_{n}<a_{n}\), \(\forall n\geq 0\). From 1 and 2, the serie in (9) converges for \(z<1\) and \(\frac{z}{\sqrt{2}\sigma}<1\). Consequently, the derivative can be approximated and rewritten as follows \[\frac{\partial g(z)}{\partial\theta}\simeq\sum_{n=1}^{N}(-1)^{n}\frac{1}{n-1!}\frac{2}{(2\sigma^{2})^{n}}z^{2n-1}\frac{\partial z}{\partial\theta}\quad \text{and}\quad\frac{\partial z}{\partial\theta}=-x\sin\theta+y\cos\theta \tag{10}\] For \(N\) equals to 1 (\(N=1\)) and using (10), (8) becomes \[\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)\left(x\cos\theta+y\sin\theta-\rho \right)\left(-x\sin\theta+y\cos\theta\right)dxdy=0 \tag{11}\] By developing we obtain \[\begin{split}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)\left[-x^ {2}\cos\theta\sin\theta+xy\cos^{2}\theta-xy\sin^{2}\theta\right.\\ \left.+y^{2}\sin\theta\cos\theta-\rho\left(-x\sin\theta+y\cos \theta\right)\right]dxdy=0\end{split} \tag{12}\] And then \[\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)(-x^{2}(\sin 2\theta)/2+y^{2}(\sin 2 \theta)/2+xy\cos 2\theta-\rho(-x\sin\theta+y\cos\theta))dxdy=0 \tag{13}\] Rewriting this equation using the geometric moments yields \[-m_{20}(\sin 2\theta)/2+m_{02}(\sin 2\theta)/2+m_{11}\cos 2\theta-\int_{\mathcal{ X}}\int_{\mathcal{Y}}f(x,y)\rho(-x\sin\theta+y\cos\theta)dxdy=0 \tag{14}\] Consider now the first order condition \(\frac{\partial F}{\partial\rho}\). 
Using again the Maclaurin development around \(z=0\) we have \[\frac{\partial g(z)}{\partial\rho}\simeq\sum_{n=1}^{N}(-1)^{n}\frac{1}{n-1!} \frac{2}{(2\sigma^{2})^{n}}z^{2n-1}\frac{\partial z}{\partial\rho}\quad\text{ where}\quad\frac{\partial z}{\partial\rho}=-1 \tag{15}\] Setting \(\frac{\partial g(z)}{\partial\rho}=0\) and taking \(N=1\) in (15), leads to the following equation \[\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)(x\cos\theta+y\sin\theta-\rho)dxdy=0 \tag{16}\] Then \(\rho=\frac{\int_{\mathcal{X}}\int_{\mathcal{Y}}xf(x,y)dxdy}{\int_{\mathcal{X}} \int_{\mathcal{Y}}f(x,y)dxdy}\cos\theta+\frac{\int_{\mathcal{X}}\int_{ \mathcal{Y}}yf(x,y)dxdy}{\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)dy}\sin\theta\) and is rewritten using (6) as: \[\rho=x_{c}\cos\theta+y_{c}\sin\theta \tag{17}\] Involving (4) and the non-centred moments expressions, (17) becomes \[\rho=m_{10}\cos\theta+m_{01}\sin\theta \tag{18}\] Replacing \(\rho\) by its expression in (14) yields \[\begin{split}-m_{20}(\sin 2\theta)/2+m_{02}(\sin 2\theta)/2+m_{11 }\cos 2\theta-\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)\left[(m_{10}\cos \theta+m_{01}\sin\theta)\right.\\ \left.(-x\sin\theta+y\cos\theta)\right]dxdy=0\end{split} \tag{19}\] Contracting (19) provides \[\frac{1}{2}\sin 2\theta\left(-m_{20}+m_{02}-m_{01}^{2}\right.\left.+m_{10}^{2} \right)+\cos 2\theta\left(m_{11}-m_{01}m_{10}\right)=0 \tag{20}\] Then, if \(m_{20}-m_{02}+m_{01}^{2}-m_{10}^{2}\neq 0\) we can write \[\tan 2\hat{\theta}=\frac{2(m_{11}-m_{01}m_{10})}{m_{20}-m_{02}+m_{01}^{2}-m_{10} ^{2}} \tag{21}\] If \(m_{20}-m_{02}+m_{01}^{2}-m_{10}^{2}=0\), then \(\theta=\pi/4\) or \(\theta=-\pi/4\) depending on the sign of the numerator. For the following and before going further, let us go back to (17) to estimate \(\hat{\rho}\) the location parameter that maximizes \(\tilde{S_{f}}\) and then derive a necessary relationship between \(\hat{\rho}\) and \(\hat{\theta}\). So, according to (17), any critical line \(\rho=x\cos\theta+y\sin\theta\) maximizing \(\tilde{S_{f}}\) passes through the point \((x_{c},y_{c})\), whatever the angle \(\theta\). When this angle is \(\hat{\theta}\), the corresponding \(\rho\) i.e. \(\hat{\rho}\) is given by \(\hat{\rho}=x_{c}\cos\hat{\theta}+y_{c}\sin\hat{\theta}\), which means that \((x_{c},y_{c})\) verifies the equation of the line \[\hat{\rho}=x\cos\hat{\theta}+y\sin\hat{\theta} \tag{22}\] Consequently, derivating the SSRT with respect to \(\theta\) and \(\rho\) provides, in the image domain, a line of which equation is represented by (22). Now, before showing the relationship between the orientation \(\phi\) in (3) and the angle \(\hat{\theta}\) in (21), and to be in accordance with our proposition which states that the SSRT maximum defines the inertia principal axis, we must prove that the computed critical point \((\hat{\theta},\hat{\rho})\) is a maximum. To this end, let us compute \(H\) the Hessian of \(\tilde{S_{f}}\). \[H=\begin{pmatrix}H_{11}&H_{12}\\ H_{21}&H_{22}\end{pmatrix}=\begin{pmatrix}\frac{\partial^{2}}{\partial\theta^{ 2}}\tilde{S_{f}}&\frac{\partial^{2}}{\partial\theta\partial\rho}\tilde{S_{f} }\\ \frac{\partial^{2}}{\partial\rho\partial\theta}\tilde{S_{f}}&\frac{\partial^{ 2}}{\partial\rho^{2}}\tilde{S_{f}}\end{pmatrix} \tag{23}\] To be a maximum, the critical point (\(\hat{\theta}\),\(\hat{\rho}\)) must verify two conditions: 1) \(H_{11}\) at (\(\hat{\theta}\),\(\hat{\rho}\)) is negative, 2) the determinant of \(H\), \(\det(H)\) at (\(\hat{\theta}\),\(\hat{\rho}\)) is positive. 
So let us write \(\tilde{S_{f}}\) as \(\tilde{S_{f}}(\rho,\theta)=\frac{1}{\sqrt{2\pi\sigma}}\int_{\mathcal{X}}\int_ {\mathcal{Y}}f(x,y)e^{-B}dxdy\), where \(B=(x\cos\theta+y\sin\theta-\rho)^{2}/2\sigma^{2}\). Hence, let us compute the second order partial derivatives at the critical point \((\hat{\theta},\hat{\rho})\). 1. Computing \(\frac{\partial^{2}\tilde{S_{f}}}{\partial\theta^{2}}\) at the critical point \[\frac{\partial^{2}\tilde{S_{f}}}{\partial\theta^{2}}=\frac{-2}{ \sqrt{2\pi}\sigma}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)e^{-B}\left[-2 \left(x\cos\theta+y\sin\theta-\rho\right)^{2}\left(-x\sin\theta+y\cos\theta \right)^{2}+\right.\] \[\left.\left(-x\sin\theta+y\cos\theta\right)^{2}-\left(x\cos \theta+y\sin\theta-\rho\right)\left(x\cos\theta+y\sin\theta\right)\right]dxdy\] At the critical point, we have \(\hat{\rho}=x\cos\hat{\theta}+y\sin\hat{\theta}\) and then B=0. Consequently, \[\frac{\partial^{2}\tilde{S_{f}}}{\partial\theta^{2}}=\frac{-2}{\sqrt{2\pi} \sigma}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)(-x\sin\hat{\theta}+y\cos \hat{\theta})^{2}dxdy<0\] (25) 2. Computing \(\frac{\partial^{2}\tilde{S_{f}}}{\partial\rho^{2}}\) at the critical point \[\frac{\partial^{2}\tilde{S_{f}}}{\partial\rho^{2}}=\frac{2}{\sqrt{2\pi}\sigma }\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)e^{-B}(2(x\cos\theta+y\sin\theta- \rho)^{2}-1)dxdy\] (26) At the critical point \[\frac{\partial^{2}\tilde{S_{f}}}{\partial\rho^{2}}=\frac{-2}{\sqrt{2\pi} \sigma}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)dxdy=\frac{-2}{\sqrt{2\pi}\sigma}\] (27) 3. Computing \(\frac{\partial^{2}\check{S_{f}}}{\partial\theta\partial\rho}\) at the critical point \[\begin{split}\frac{\partial^{2}\check{S_{f}}}{\partial\theta \partial\rho}=\frac{2}{\sqrt{2\pi\sigma}}\int_{\mathcal{X}}\int_{\mathcal{Y}}f (x,y)e^{-B}(-2(x\cos\theta+y\sin\theta-\rho)(x\sin\theta+y\cos\theta)\\ +(-x\sin\theta+y\cos\theta))dxdy\end{split}\] (28) At the critical point \[\frac{\partial^{2}\check{S_{f}}}{\partial\theta\partial\rho}=\frac{2}{\sqrt{2 \pi\sigma}}\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)(-x\sin\hat{\theta}+y\cos \hat{\theta})dxdy\] (29) As \(\check{S_{f}}\) is a \(C^{2}\) function, then the mixed partial derivatives in (23) are the same. It follows that the determinant \(det(H)\) equals to \[\begin{split} det(H)=\frac{4}{2\pi\sigma}\left[\int_{\mathcal{X}} \int_{\mathcal{Y}}f(x,y)(-x\sin\hat{\theta}+y\cos\hat{\theta})^{2}dxdy-\right. \\ \left.\left(\int_{\mathcal{X}}\int_{\mathcal{Y}}f(x,y)(-x\sin \hat{\theta}+y\cos\hat{\theta})dxdy\right)^{2}\right]\end{split}\] (30) We recall that \(0\leq f(x,y)\leq 1\) and \(\int\int f(x,y)dxdy=1\), then \(f(x,y)\) can be considered as a probability density function of two variables and \(v(x,y)=(-x\sin\hat{\theta}+y\cos\hat{\theta})\) a 2D random variable. Thus, \(det(H)\) in (30) is a variance up to a positive constant, which makes it non-negative. However, in our case this variance could not be null as \(v(x,y)\) is not constant. Then \(det(H)>0\). Consequently, \(H_{11}<0\) and \(\det(H)>0\) for the critical point \((\hat{\theta},\hat{\rho})\), which makes it a maximum. To finish, as shown in Fig.2, the relationship between the angle \(\hat{\theta}\) that maximizes \(\check{S_{f}}\) and the orientation angle \(\phi^{*}\) of the object under investigation is given by: \(\phi^{*}=\hat{\theta}-\pi/2\). Let us now compute \(\tan 2\phi^{*}\) as: \(\tan 2\phi^{*}=\tan(2\hat{\theta}-\pi)=\frac{\tan 2\hat{\theta}-\tan\pi}{1+ \tan 2\hat{\theta}\tan\pi}=\tan 2\hat{\theta}\). 
Thus \[\tan 2\phi^{*}=\frac{2(m_{11}-m_{01}m_{10})}{m_{20}-m_{02}+m_{01}^{2}-m_{10}^{2}} \tag{31}\] Figure 2: Relationship between the angle \(\hat{\theta}\) and the orientation angle \(\phi^{*}\) This means that the angle \(\phi\) computed with the geometric moments in (7) and \(\phi^{*}\) the one computed after maximizing \(\tilde{S_{f}}\), are the same. Indeed, the maximum of \(\tilde{S_{f}}\) is a line of slop \(\phi^{*}=\phi\) passing through \((x_{c},y_{c})\) as it is the case of the inertia main axis. It follows that there is an exact match of these two lines. Lastly, it is worth to note that the spatial coincidence between the line maximizing \(\tilde{S_{f}}\) and the geometric moments-based principal axis holds only when the application of the SSRT provides one maximum. For this reason, \(\sigma\) must be chosen to meet the aforementioned requirement. The easiest way to fulfil this condition, is to set \(\sigma\) to the greatest distance separating two points of the binary pattern under investigation. We can see in Fig.3 the SSRT and its maxima. We can notice, in Fig.3.a, that when the parameter \(\sigma\) is an appropriate one, there is only one SSRT maximum and one line corresponding to this maximum. When \(\sigma\) is set to a smaller value, the SSRT space in Fig.b, shows three maxima, but none of them corresponds to the axis of inertia. Moreover, the error induced by the approximation used to prove the previously mentioned spatial coincidence is evaluated in the appendix. ## 4 Example of application : Measuring central symmetry of binary objects The central symmetry is the symmetry with respect to a point and it is equivalent to a rotational symmetry of \(2\) folds [21]. So, a binary object is said to be centrally symmetric if its rotated version around its centroid by \(\pi\) is identical to it[21]. It results that objects with \(2n\) folds rotational symmetry, where \(n\in\mathbb{N}\), are centrally symmetric. Let \(f\) be a centrally symmetric binary image having its center as axes origin \(O(0,0)\). It results that if a point of coordinates \((x,y)\) is rotated by \(\pi\) around \(O(0,0)\) then, its new coordinates are \(\begin{pmatrix}\cos\pi&-\sin\pi\\ \sin\pi&\cos\pi\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}-x\\ -y\end{pmatrix}\). Applying the central symmetry definition previously stated leads to \(f(x,y)=f(-x,-y)\). Looking at the SSRT \(\tilde{S_{f}}\) of centrally symmetric objects centred in \(f\) in Fig.4, we can notice the presence of reflection symmetry, which definition can be found in[22], over the axe \(\rho=0\) in the SSRT sinograms. This symmetry is the result of the reflection symmetry of the SSRT projections, \(\tilde{S_{f}}_{\theta}(\rho)\), as seen in the same figure. In the following and for the sake of object central symmetry verification, we show some important SSRT characteristics. **Proposition 4.1**.: _The SSRT is injective, which means that for two \(L^{1}(\mathbb{R}^{2})\) measurable functions \(f\) and \(k\), \(\tilde{S_{f}}=\tilde{S_{k}},\ \implies\ f=k\)_ Proof.: Let be \(f\) and \(k\) two measurable \(L^{1}(\mathbb{R}^{2})\) functions. 
According to (2), we can write \(\tilde{S_{f}}=\tilde{S_{k}}\implies\tilde{R_{f}}\ast g=\tilde{R_{k}}\ast g\implies(\tilde{R_{f}}-\tilde{R_{k}})\ast g=0\). Since the Radon transform of an \(L^{1}\) function is an \(L^{1}\) function [24], then \((\tilde{R_{f}}-\tilde{R_{k}})\ast g=0\) implies, by taking the Fourier transform over \(\rho\), that \(\big(\mathcal{F}\{\tilde{R_{f}}\}-\mathcal{F}\{\tilde{R_{k}}\}\big)G=0\). The Gaussian spectrum \(G\) being nowhere null, \(\tilde{R_{f}}=\tilde{R_{k}}\), and the injectivity of the Radon transform then gives \(f=k\), which proves the injectivity of the SSRT.

**Proposition 4.2**.: _For a binary image \(f\) whose axes origin is taken at the object centroid, \(f(x,y)=f(-x,-y)\) if and only if, for every direction \(\theta\), the projection \(\tilde{S_{f}}_{\theta}(\rho)\) is reflectionally symmetric over the axis \(\rho=0\), i.e. \(\tilde{S_{f}}_{\theta}(\rho)=\tilde{S_{f}}_{\theta}(-\rho)\) for all \(\rho\)._

Proof.: 1/ If \(f(x,y)=f(-x,-y)\), the change of variables \((x,y)\to(-x,-y)\) in the Radon integral gives \(\tilde{R_{f}}_{\theta}(\rho)=\tilde{R_{f}}_{\theta}(-\rho)\) for every \(\theta\); the Gaussian \(g\) being even, the convolution in (2) preserves this symmetry, so \(\tilde{S_{f}}_{\theta}(\rho)=\tilde{S_{f}}_{\theta}(-\rho)\). 2/ Conversely, if \(\tilde{S_{f}}_{\theta}(\rho)=\tilde{S_{f}}_{\theta}(-\rho)\) for every \(\theta\) and \(\rho\), let \(k(x,y)=f(-x,-y)\); then \(\tilde{S_{k}}_{\theta}(\rho)=\tilde{S_{f}}_{\theta}(-\rho)=\tilde{S_{f}}_{\theta}(\rho)\) for every \(\theta\) and \(\rho\), so \(\tilde{S_{k}}=\tilde{S_{f}}\) and, by Proposition 4.1, \(k=f\).
From 1/ and 2/, \(f(x,y)=f(-x,-y)\iff\forall\theta\in[0,\pi]\), \(\forall\rho\in[-\rho_{m},+\rho_{m}]\), \(\tilde{S_{f}}_{\theta}(\rho)=\tilde{S_{f}}_{\theta}(-\rho)\). At this stage, the problem of central symmetry measurement is moved from a 2D domain (the image \(f\)) to a 1D domain (the projections \(\tilde{S_{f}}_{\theta}(\rho)\)). To begin, let us consider that the binary object is centred in the image \(f\) around its centroid \((x_{c},y_{c})=(0,0)\). To this end, we exploit Proposition 4.2, which stipulates that if, for each direction \(\theta\), the projection \(\tilde{S_{f}}_{\theta}(\rho)\), where \(\rho\in[-\rho_{max},\rho_{max}]\), is reflectionally symmetric over the axis \(\rho=0\), then \(f(x,y)=f(-x,-y)\). With the aim of evaluating the reflection symmetry of the projection \(\tilde{S_{f}}_{\theta}(\rho)\), the latter is compared to its reflected form over the axis \(\rho=0\), noted \(\tilde{S_{f}}_{\theta}^{\,rf}\), which means that \(\tilde{S_{f}}_{\theta}^{\,rf}(\rho)=\tilde{S_{f}}_{\theta}(-\rho),\ \forall\rho\). This comparison is done via a difference measure carried out on the projection and its reflected version. If this measure is null, then the two projections are the same and, consequently, \(\tilde{S_{f}}_{\theta}(\rho)\) is reflectionally symmetric. Let us call this measure \(D\), computed as the ratio of the mean value of the \(m\%\) highest values of \(|\tilde{S_{f}}_{\theta}(\rho)-\tilde{S_{f}}_{\theta}^{\,rf}(\rho)|\) to the maximum value \(M_{S_{f}}\) of \(\tilde{S_{f}}_{\theta}(\rho)\). Let us designate the set consisting of the \(m\%\) highest values of \(|\tilde{S_{f}}_{\theta}-\tilde{S_{f}}_{\theta}^{\,rf}|\) by \(d_{m}\), and its mean value by \(\bar{d_{m}}\); then \(D\) is computed as

\[D(\tilde{S_{f}}_{\theta},\tilde{S_{f}}_{\theta}^{\,rf})=\frac{\bar{d_{m}}}{M_{S_{f}}} \tag{32}\]

When the centrally symmetric object is centred in the image \(f\) and the origin of axes \((0,0)\) is the image center, then \(f(x,y)=f(-x,-y)\) and the center of orientation coincides with the object centroid \((x_{c},y_{c})=(0,0)\). This leads to reflection symmetries of the SSRT projections over the axes \(\rho_{\theta}^{(x_{c},y_{c})}=x_{c}\cos\theta+y_{c}\sin\theta=0,\ \forall\theta\). However, objects are rarely centred in the image. Fortunately, thanks to the shifting property of the SSRT, when the object is shifted in the image, each projection \(\tilde{S_{f}}_{\theta}\) undergoes a shift of an amount equal to \(\rho_{\theta}^{(x_{c},y_{c})}=x_{c}\cos\theta+y_{c}\sin\theta\) [18]. Consequently, the shapes of the SSRT projections of a non-centred centrally symmetric object do not change compared to those of the same object centred in the image. In fact, parts of the SSRT projections of the mentioned object fortunately keep a reflection symmetry over the axes \(\rho_{\theta}^{(x_{c},y_{c})}\), as we can see in Fig.5(a). Consequently, a circular shift operation on each projection of an amount equal to \(-\rho_{\theta}^{(x_{c},y_{c})}=-(x_{c}\cos\theta+y_{c}\sin\theta)\) makes all projections symmetric over the axis \(\rho=0\), as illustrated in Fig.5(b).
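A minimal sketch of this shift-and-compare test on one discrete projection is given below; the retained fraction \(m\) and the sampling grid are illustrative assumptions, and the threshold 0.03 is the value of \(\epsilon\) used later in the experiments.

```python
# Minimal sketch of the reflection-symmetry test (32) on one discrete SSRT
# projection; the fraction m of retained differences is an assumption.
import numpy as np

def reflection_asymmetry(projection, m=0.1):
    """D in (32): mean of the m% largest |S - S_reflected| values over max(S)."""
    reflected = projection[::-1]                          # reflection over rho = 0
    diff = np.sort(np.abs(projection - reflected))[::-1]  # differences, descending
    k = max(1, int(np.ceil(m * diff.size)))
    return diff[:k].mean() / projection.max()

def recenter(projection, rho, x_c, y_c, theta):
    """Circularly shift a projection so that the centroid falls on rho = 0."""
    step = rho[1] - rho[0]
    shift = -(x_c * np.cos(theta) + y_c * np.sin(theta)) / step
    return np.roll(projection, int(round(shift)))

# Toy usage: an even projection passes, a skewed one fails, and the skewed one
# passes again after the circular shift described above.
rho = np.linspace(-50, 50, 101)
even_proj = np.exp(-rho**2 / 200.0)
skew_proj = np.exp(-(rho - 12)**2 / 200.0)
print(reflection_asymmetry(even_proj) < 0.03,                               # True
      reflection_asymmetry(skew_proj) < 0.03,                               # False
      reflection_asymmetry(recenter(skew_proj, rho, 12.0, 0.0, 0.0)) < 0.03)  # True
```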
Thus, to apply the proposed reflection symmetry verification method on SSRT projections, the latter must be subjected to such translations to make them reflectionally symmetric over the axis \(\rho=0\) if the object has central symmetry. ### Sampling the projections for central symmetry measurement Central symmetry measurement requires the exploitation of a number of \(\ddot{S_{f}}\) projections. We provide in the following how we have chosen this number. **Proposition 4.3**.: _For each \(L^{1}(\mathbb{R}^{2})\) measurable function \(f\), such as \(f(x,y)=f(-x,-y)\), then the reconstructed \(\hat{f}\) with \(\ddot{S_{f}}\) fulfils \(\hat{f}(x,y)=\hat{f}(-x,-y)\), whatever the number of used projections_ Proof.: Let \(b_{R}\) be the back-projection of \(\ddot{R_{f}}\). It is given by [23], \(b_{R}(x,y)=1/\pi\int_{0}^{\pi}\ddot{R_{f}}(x\cos\theta+y\sin\theta,\theta)\ d\theta=1/\pi\int_{0}^{\pi}\ddot{R_{f}}( \rho,\theta)\ d\theta\), as \(x\cos\theta+y\sin\theta\) is nothing but \(\rho\). With a finite number of equispaced views, \(b_{R}(x,y)\) can be approximated by the summation \(b_{R}(x,y)=1/M\sum_{i}^{M}\tilde{R_{f}}(\rho,\theta_{i})\)[26], where \(M\) is the number of the used projections. Similarly to \(b_{R}\), let \(b_{S}\) be the back-projection for a finite number of projection \(M\) of \(\tilde{S_{f}}\). Then, \(b_{S}(x,y)=1/M\sum_{i}^{M}\tilde{S_{f}}(\rho,\theta_{i})\). As \(\tilde{S_{f}}(\rho,\theta_{i})\) expresses the projection, then \(\tilde{S_{f}}(\rho,\theta_{i})=\tilde{S_{f}}_{\theta_{i}}(\rho)\). Moreover, as \(f(x,y)=f(-x,-y)\iff\tilde{S_{f}}_{\theta_{i}}(\rho)=\tilde{S_{f}}_{\theta_{i} }(-\rho)\lor\rho\;\theta_{i}\), then it follows that \(\tilde{S_{f}}(x\cos\theta_{i}+y\sin\theta_{i},\theta_{i})=\tilde{S_{f}}(-x \cos\theta_{i}-y\sin\theta_{i},\theta_{i})\) whatever \(\theta_{i}\) and consequently, \(b_{S}(x,y)=b_{S}(-x,-y)\) whatever \(M\). Moreover, \(b_{S}\) can be written by introducing the relationship between \(\tilde{R_{f}}\) and \(\tilde{S_{f}}\) in (2), i.e \(b_{S}(x,y)=1/M\sum_{i}^{M}\sum_{r}g(r-\rho)\tilde{R_{f}}(\rho,\theta_{i})=\sum _{r}g(r-\rho)1/M\sum_{i}^{M}\tilde{R_{f}}(\rho,\theta_{i})\). We have, therefore, \(b_{S}(x,y)=\sum_{r}g(r-\rho)b(\rho,\theta_{i})\). Consequently, \(b_{S}=b_{R}\;\raisebox{-1.72pt}{\includegraphics[height=14.226378pt]{.}}\;g\). Furthermore, \(b_{R}=\hat{f}\;\raisebox{-1.72pt}{\includegraphics[height=14.226378pt]{.}}\;h\), where \(h=\frac{1}{\sqrt{x^{2}+y^{2}}}\)[23]. It follows that, \(b_{S}=\hat{f}\;\raisebox{-1.72pt}{\includegraphics[height=14.226378pt]{.}}\;h \;\raisebox{-1.72pt}{\includegraphics[height=14.226378pt]{.}}\;g\) and therefore, \(\hat{F}(w)=B(w)_{S}/\left(H(w)G(w)\right)\) for \(G(w)\neq 0\) and \(H(w)\neq 0\), with \(B_{S}(w)\), \(H(w)\), \(\hat{F}(w)\) and \(G(w)\) the Fourier Transform of \(b_{S}\), \(h\), \(\hat{f}\) and \(g\), respectively. In addition, \(B(w)_{S}\) and \(H(w)\) are real valued function and even in terms of \(w\) because \(h\) is centrally symmetric, by its expression and so is \(b_{S}\) whatever the projections number \(M\). Concerning \(G(w)\) it is equal to \(e^{-2\pi^{2}\sigma^{2}w^{2}}\) as shown in [18] and is, therefore, real valued function and even in terms of \(w\). Consequently, \(\hat{F}(w)\) is a real even function in terms of \(w\) because the product/quotient of two real even functions yields a real even function leading to \(\hat{f}(x,y)=\hat{f}(-x,-y)\) and this holds whatever \(M\). 
It results that using one reflectionally symmetric projection to construct \(\hat{f}\) guarantees the central symmetry of the latter, but does not guarantee, unfortunately, the central symmetry of \(f\). Indeed, if the orientation of this projection, say \(\gamma\), corresponds to the orientation of an existing reflection symmetry in \(f\), that reflection symmetry also produces a reflection symmetry in the projection \(\tilde{R_{f}}_{\gamma}\), as seen in [22], and subsequently in \(\tilde{S_{f}}_{\gamma}(\rho)\); then \(\hat{f}\) will be centrally symmetric even if \(f\) has only one reflection symmetry. To avoid this situation, we propose to sample the SSRT space into three equispaced projections, \(\tilde{S_{f}}_{\theta}\), \(\tilde{S_{f}}_{\theta+\pi/3}\) and \(\tilde{S_{f}}_{\theta+2\pi/3}\). Concerning the choice of the angle \(\theta\), it is related to the orientation of the principal axes of inertia. In fact, these axes are used to characterize the dispersion of bodies by representing the spatial distribution of their mass [27]. Furthermore, the axis of reflectional symmetry of a flat object is one of its inertia axes. So, to avoid the axis of reflection symmetry orientation, if it exists, we propose to deviate from such an orientation by taking \(\theta\) equal to \(\hat{\theta}\), computed in Sect.3, added to \(\Delta\theta\). So, if we call \(\theta_{1}=\hat{\theta}+\Delta\theta\), \(\theta_{2}=\theta_{1}+\pi/3\) and \(\theta_{3}=\theta_{1}+2\pi/3\), then the symmetry is ascertained if the measure \(D\) computed for the three orientations is below a threshold \(\epsilon\). It is worth noting that the SSRT scale parameter \(\sigma_{sym}\) used to compute symmetry is not the same as the one used to compute the axis of inertia. In fact, the latter has as its purpose to produce smooth projection curves, which makes the method less sensitive to disturbances that may be caused by non-smooth object edges or by noise. The operations related to the central symmetry measurement can be summarized in Algorithm 1, where the output is a boolean variable \(Sym\) equal to 1 if the object is centrally symmetric and to 0 if not.

Figure 5: From left to right (a): Centrally symmetric object not centred in the image, a randomly chosen direction projection with mirror symmetry of a part of the projection over the axis \(\rho_{\theta}^{(x_{c},y_{c})}\), and its SSRT sinogram. (b): The same projection (in blue) and its shifted version (orange colored) with mirror symmetry of the projection, and the resulting SSRT sinogram after subjecting all projections to shift operations.

**Algorithm 1** Central symmetry checking
**Input:** Binary image \(f\), the scale \(\sigma_{sym}\), threshold \(\epsilon\), \(\rho\) and \(\theta\) steps
**Output:** \(Sym\)
Compute \(\sigma\), \(\tilde{R_{f}}\), \(g(\sigma)\), \(\tilde{S_{f}}\) with (2) and \((\hat{\theta},\hat{\rho})=\underset{(\theta,\rho)}{\operatorname{argmax}}\ \tilde{S_{f}}\)
Compute \(g(\sigma_{sym})\), \(\tilde{S_{f}}\), consider \(\tilde{S_{f}}_{\theta_{1}}\), shift if necessary and reflect it
Compute \(D\) (32) for \(\tilde{S_{f}}_{\theta_{1}}\)
**if** \(D\leq\epsilon\) **then** consider \(\tilde{S_{f}}_{\theta_{2}}\), shift if necessary and reflect it, compute \(D\) (32) for \(\tilde{S_{f}}_{\theta_{2}}\)
**if** \(D\leq\epsilon\) **then**

## 5 Experiments

In order to evaluate the inertia axes estimation with the SSRT, we compare inertia axes obtained with the geometric moments and those computed with the SSRT maxima.
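For reference, the moment-based side of this comparison can be computed directly from the image array; the sketch below mirrors the centroid and the orientation formula (31). The coordinate convention (x along columns, y along rows) and the toy image are assumptions of the sketch.

```python
# Minimal sketch of the geometric-moments baseline: centroid and main-axis
# orientation phi as in (31), computed on an image normalized so that m00 = 1.
import numpy as np

def inertia_axis(image):
    """Return centroid (x_c, y_c) and main-axis orientation phi (radians)."""
    f = image / image.sum()                       # normalize so that m00 = 1
    y, x = np.indices(f.shape).astype(float)      # row index -> y, column index -> x
    m10, m01 = (x * f).sum(), (y * f).sum()
    m20, m02 = (x**2 * f).sum(), (y**2 * f).sum()
    m11 = (x * y * f).sum()
    phi = 0.5 * np.arctan2(2.0 * (m11 - m10 * m01),
                           m20 - m02 + m01**2 - m10**2)
    return (m10, m01), phi

# Toy usage on a horizontal bar: phi should be close to zero.
img = np.zeros((101, 101))
img[48:53, 20:81] = 1.0
(xc, yc), phi = inertia_axis(img)
print(xc, yc, np.degrees(phi))
```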
In order to guarantee the obtention of one maximum on the SSRT, the scale parameter \(\sigma\) should be tuned correctly. This condition is fulfilled by setting it to the biggest distance separating two processed object points, for binary images. For gray-scale ones, set \(\sigma\) to the diagonal length of the image has provided the desired results. We can see in Fig.6 examples of patterns extracted from the dataset in [28], on which are depicted red lines corresponding to SSRT maxima related to each image and the ones in dashed cyan representing the geometric moments-based principal inertia axes. We can see that, in each case, the two lines match. The same remarks can be done for real images in Fig.7, where the geometric moments-based axes of inertia and SSRT-based axes of inertia have been computed directly on gray scale images. We can see, in this case, that the two axes coincide. We can remark, however, that in some cases (white fox, swans, white lion) there is a slight deviation of the SSRT line from the inertia axis. This is due to the fact that SSRT is estimated on a sampled space. This makes \(\rho\) and \(\theta\) taking values only in the SSRT discrete parameters space, unlike the axis of inertia computed with the geometric moments of which slope and position are freed from such constraints Furthermore, to test our proposed central symmetry measurement method, a set of 100 000 images, composed of generated bars of randomly chosen numbers, widths, orientations and positions, are used. A sample of this dataset is depicted in Fig.8. On each image, inertia axis is computed with the SSRT maximum, followed by computing the SSRT with \(\sigma_{sym}\) and finally, three projections are selected according to the inertia axis orientation and then shifted if necessary, as seen in Set.4. We point out here, that three parameters have to be set. The first one is \(\epsilon\), the difference measure threshold that defines how the SSRT projection and its reflected version are far from each other, the second one is \(\Delta\theta\) the angle added to deviate from principal direction of inertia and finally \(\sigma_{sym}\), the scale of the SSRT for symmetry measurement, which should be a trade-off between projection smoothing and the fidelity guarantee of object representation. These three parameters are chosen to be equal to 0.03 for \(\epsilon\), as images are composed of perfect binary bars and we want the SSRT projection and its reflection form comparison to be an accurate one, to 5\({}^{\circ}\) for the angle deviation \(\Delta\theta\) which seemed to be adequate for the experiments, and to 1 for the scale \(\sigma_{sym}\). Applying our method on this collection of dataset, has divided it into a group of 730 images containing centrally symmetric objects and another one consisting of 99270 images. A sample of the first group is given in Fig.9. To evaluate the performance of our method, an evaluation procedure is performed on its outcomes by comparing them to a ground truth. The latter is obtained by dividing the dataset into two groups, on the basis of the direct application of the central symmetry definition of an object, given in Set 4. This division is performed as follows; for every image \(f\) in the dataset, another one, \(f_{r}\), is generated by rotating the object, noted \(f_{O}\) around its centroid by \(\pi\). 
If the outcome of the measure \(E_{m}=area(f_{O}^{sub})/area(f_{O})\), where \(f_{O}^{sub}\) is the object created by the operation \(|f-f_{r}|\), is less than a threshold \(t\), where \(t\) is set to 0.1, then the object in \(f\) is said to be centrally symmetric and assigned to the central symmetry group. It is worth mentioning that the direct application of the object central symmetry definition to divide the dataset and obtain the ground truth is made possible by the fact that the dataset is composed of perfect binary structures. Afterwards, the ground truth so obtained is compared to the outcomes of our method. Hence, all images found to be centrally symmetric by our method have also been approved as centrally symmetric by the reference dataset division, except two (2) of them, while 34 images detected as non-centrally symmetric by our method have been found to be centrally symmetric by the reference division. If we look at Fig.10 (b), we see that the images "erroneously" assigned by our method to the centrally symmetric group have a missing threadlike part inside their objects, which makes them not completely centrally symmetric even if their overall shape seems coarsely symmetric. However, this could simply reflect the insensitivity of the proposed method to impulse noise, as we will see later. Regarding the images "erroneously" assigned to the non-centrally symmetric group, such as the images in Fig.10 (a), we can see visually that they are not perfectly centrally symmetric. It turns out that assigning an image to a particular group is handled by \(\epsilon\) tuning. Indeed, the closer \(\epsilon\) is to zero, the more rigorous the assignment operation is. In the light of these numerical results, and if we consider the process of dividing the dataset as a binary classification, we can compute the precision rate \(A\) of our classification, \(A=\frac{TP}{TP+FP}\), with \(TP\) the number of true positives, i.e. the number of images with centrally symmetric objects that have been correctly assigned as such, and \(FP\) the number of false positives, i.e. the number of images that have been erroneously assigned to the centrally symmetric group. Then, \(A=\frac{728}{730}=0.997\), which represents a satisfying result. It is important to mention that the precision \(A\) could slightly change by increasing \(\epsilon\) to make the symmetry measurement more flexible, or by decreasing \(t\) to make the ground truth creation more severe.

Figure 6: Geometric moments-based main axis of inertia in cyan dashed line and SSRT maximum-based line in red, computed on images.

Figure 7: Geometric moments-based main axis of inertia and SSRT-based axis of inertia computed on real images.

Figure 8: A sample of the randomly generated images.

To finish with this dataset, corrupting the centrally symmetric images with impulse noise of density 0.1 has allowed us to test the robustness of the method against noise. To face noise in the images, we have increased \(\sigma_{sym}\) to 10 and observed the outcomes. Corrupting images composed of filiform bars with noise subjects the SSRT projections to important modifications, leading to a partial loss of their reflection symmetries, as shown in Fig.11, and therefore the method fails in detecting their central symmetries in such images, unlike the other ones.
It follows that the way we compare SSRT projections and their reflected versions should be adjusted to cope with the behaviour of the projections when the image is subjected to noise, and thus enhance robustness against noise for patterns of all possible shapes. We can see in Fig.12 the example of the images of Fig.9, still checked as centrally symmetric even with impulse noise. The last experiment is carried out on the dataset in [28]. This dataset is composed of 100 images containing patterns with several rotational symmetries and is displayed in Fig.13. We recall that an \(n\)-fold rotationally symmetric object is an object that looks the same after being rotated around its centroid by \(2\pi/n\), where \(n\in\mathbb{N}\) [21]. Hence, in this dataset, some of the images are \(2n\)-fold rotationally symmetric and hence centrally symmetric, while the others are \(2n+1\)-fold rotationally symmetric and therefore non-centrally symmetric. Furthermore, these images are not binary, so they must be thresholded. Consequently, here, \(\epsilon\) is increased to 0.1 to deal with irregularities that may appear in the object after thresholding. From the 100 images belonging to the mentioned dataset, the proposed method has successfully distinguished between the centrally symmetric (\(2n\)-fold rotationally symmetric) and non-centrally symmetric images (\(2n+1\)-fold rotationally symmetric). The latter are depicted in Fig.14.

Figure 10: Results of evaluation. (a): Images erroneously assigned to the non-centrally symmetric group. (b): Images erroneously assigned to the centrally symmetric group.

Figure 11: From left to right: Filiform bar, complete match between its SSRT projection \(\tilde{S_{f}}_{\theta_{1}}\) and its reflected version \(\tilde{S_{f}}_{\theta_{1}}^{\,rf}\), noisy filiform bar, mismatch between \(\tilde{S_{f}}_{\theta_{1}}\) in blue and its reflected version \(\tilde{S_{f}}_{\theta_{1}}^{\,rf}\) in red.

## 6 Conclusion

In this paper, we have proposed to investigate the ability of the Scale Space Radon Transform to provide the main axis of inertia by means of its maximum, when the corresponding scale is chosen correctly. Mathematical expressions of the parameters of the SSRT for a line, obtained by derivation, have been shown to give the same expressions as the line parameters of the main axis of inertia computed with the geometric moments. Furthermore, experimental results have shown that the axes of inertia and the SSRT maxima-based lines computed on gray-scale and binary images almost overlap, which indicates that they match. In addition, the proposed central symmetry measurement method, tested on two datasets, has shown its effectiveness by making it possible to pick out the centrally symmetric objects from the other ones. However, investigating a more effective difference measure to compare the SSRT projections with their reflected versions, and consequently increase the robustness to noise, would be worthwhile. Moreover, the application of the method in 3D will be the subject of future work, to automatically detect centrally symmetric structures in volumes.

Figure 12: Example of centrally symmetric images corrupted with salt & pepper noise and checked with the proposed method as centrally symmetric ones.

Figure 13: Images of the dataset in [28].
2305.16978
Meandering microstrip leaky-wave antenna with dual-band linear-circular polarization and suppressed open stopband
This paper proposes a dual-band frequency scanning meandering microstrip leaky-wave antenna with linear polarization in the Ku-band and circular polarization in the K-band. This is achieved by making use of two spatial harmonics for radiation. The unit cell of the periodic microstrip antenna contains three meanders with mitred corners. To ensure circular polarization, a theoretical formulation is developed taking into account the delay caused by microstrip length intervals. It defines the unit cell geometry by determining the length of the meanders to ensure that axial ratio remains below 3 dB throughout the operational band. Moreover, the meanders are used to provide better control over scanning rate (the ratio of change of angle of maximum radiation with frequency) and reduce spurious radiation of harmonics by ensuring single harmonic operation within the operational band. To guarantee continuous scanning through broadside direction, open stopband is suppressed using mitered angles. The antenna is designed on a 0.254-mm substrate making it suitable for conformal applications. The fabricated antenna shows a backward to forward beam steering range of 72 deg (-42 deg to 30 deg) in the K-band (19.4-27.5 GHz) with circular polarization and of 75 deg (-15 deg to 60 deg) in the Ku-band (11-15.5 GHz) with linear polarization.
Pratik Vadher, Giulia Sacco, Denys Nikolayev
2023-05-26T14:34:44Z
http://arxiv.org/abs/2305.16978v2
# Meandering Microstrip Leaky-Wave Antenna ###### Abstract This paper proposes a dual-band frequency scanning meandering microstrip leaky-wave antenna with linear polarization in the Ku-band and circular polarization in the K-band. This is achieved by making use of two spatial harmonics for radiation. The unit cell of the periodic microstrip antenna contains three meanders with mircted corners. To ensure circular polarization, a theoretical formulation is developed taking into account the delay caused by microstrip length intervals. It defines the unit cell geometry by determining the length of the meanders to ensure that axial ratio remains below 3 dB throughout the operational band. Moreover, the meanders are used to provide better control over scanning rate (the ratio of change of angle of maximum radiation with frequency) and reduce spurious radiation of harmonics by ensuring single harmonic operation within the operational band. To guarantee continuous scanning through broadside direction, open stopband is suppressed using mircted angles. The antenna is designed on a 0.254-mm substrate making it suitable for conformal applications. The fabricated antenna shows a backward to forward beam steering range of 72\({}^{\circ}\) (-42\({}^{\circ}\) to 30\({}^{\circ}\)) in the K-band (19.4-27.5 GHz) with circular polarization and of 75\({}^{\circ}\) (-15\({}^{\circ}\) to 60\({}^{\circ}\)) in the Ku-band (11-15.5 GHz) with linear polarization. leaky wave antenna (LWA), Ku-band, K-band, higher spatial order, scanning rate, meandering microstrip antenna ## I Introduction Periodic leaky wave antennas (LWAs) are a class of travelling-wave antennas that radiate energy at the discontinuities of the guiding medium [1]. Changes in frequency causes dispersion within the guiding medium, resulting in varying excitation phases at the discontinuities. This, in turn, alters the main beam pointing direction in the radiation pattern of the antenna with respect to frequency. One-dimensional periodic LWAs (1D LWAs) typically radiate a fan-beam in the E-plane. The direction of maximum radiation changes in the H-plane with the change in frequency [2]. Many 1D periodic LWAs have been proposed using different guiding media. LWAs proposed in [3-7] use substrate integrated waveguide (SIW) or half-mode SIW to support the travelling wave and usually employ a TE\({}_{n0}\) or in certain cases TM\({}_{11}\) mode [8] for radiation. However, to reduce fabrication complexity, lower manufacturing costs, and to enable the creation of compact antennas, microstrip-based guiding media LWAs are an appealing solution [9-13]. Meandering microstrip LWAs radiate due to the discontinuities created at the edges which results in a net magnetic current responsible for radiation [13, 14]. Since LWAs are periodic structures, infinite spatial harmonics exist in the guiding media [2, 15]. Many SIW based LWAs that use higher order Floquet modes for radiation have been proposed recently [16, 17]. Although the previous works report scanning due to higher spatial harmonics, the scanning is due to two or three spatial harmonics at the same time, which leads to more than one beam radiated by the antenna. However, it is desirable to have a single beam scanning operation over a high scanning range. This is possible when only a single spatial harmonic is responsible for radiation. Hence, better separation of different spatial harmonics is necessary. 
To be suitable for on-body conformal applications [18-21], the antenna must be flexible and bendable [22]. For such a purpose, antennas based on microstrip technology are an excellent candidate. Additionally, the mechanical properties such as tensile modulus of the dielectric substrate also play an important factor. Hence, the Rogers 3003 substrate (\(\varepsilon_{r}=3.0\)) is preferred in this work due to its low tensile modulus (823 MPa) making it flexible. Moreover, it is desirable to have circular polarization for many different applications where the alignment of the receiving and transmitting antenna may impact the overall performance of the system such as radars [23], satellite communications and on-body antenna system [21, 24, 25]. Presence of open stopbands (OSBs) also impacts the scanning of the antenna through the broadside direction [26, 27]. Several techniques have been proposed to suppress or completely mitigate OSBs [28-30]. At the OSB frequency, the input impedance matching is poor and the Bloch impedance (\(Z_{\mathrm{s}}\)) has a high imaginary value. Hence, the aim is to minimize the high imaginary value of impedance to reduce the effects of OSB. The idea of beam scanning due to higher spatial harmonic, limited to a single-band and linear polarization, has been presented in [31] with no suppression of OSB. In this paper, single-layer PCB meandering microstrip based LWA is proposed with dual-band operation without the use of vias shown in Fig 1(a). The Ku-band operation of the antenna is due to the \(n=-1\) spatial harmonic and it exhibits linear polarization, while the second band of operation at K-band is due to \(n=-2\) and it depicts circular polarization as described in Fig. 1(b). The unit cell of the periodic antenna consists of three meanders to ensure single-beam operation by providing better separation between the different spatial harmonics. The structure of the paper is as follows. Section II details the concept of higher spatial order in a microstrip-based unit cell followed by theoretical formulation for the dimensions required to have circular polarization for a unit cell with single meander. In Section III, the unit cell is modified by adding two smaller meanders to increase the separation of spatial harmonics and ameliorate performance of circular polarization over the larger frequency operation range. The smaller meanders also result in better control over scanning rate (i.e. the ratio between the change in the angle of maximum radiation and the change in frequency \(\Delta\theta/\Delta f\)). A technique to remove OSB is discussed using Bloch impedance subsequently. Section IV contains the comparison of simulations and measurements for the final design. In Section V, the conclusions of the study are presented. ## II Radiation Mechanism and Principle for Antenna Design Fig. 1(a) shows the design of the proposed microstrip-based LWA with mitred corners. As detailed in previous works on microstrip based LWAs [13], the radiation occurs due to the magnetic current at the corners of the meandering microstrip. The evolution towards the final design of the unit cell is shown in Fig. 2. Conventional microstrip-based LWA [9, 13, 32, 33, 34] operate in the radiation zone due to spatial harmonic of \(n=-1\) like the unit cell shown in Fig. 2(a). This unit cell can be modified to operate in the \(n=-2\) spatial harmonic, by increasing the pathlength at the desired operational frequency. 
By introducing mitred corners and properly choosing the length of the interconnecting microstrip lines between the corners the antennas radiates in circular polarization (see Fig. 2(b)). The unit cell operates in the radiation zone associated with \(n=-1\) and \(n=-2\) spatial harmonics. To improve the separation of radiation zone due to two harmonics (\(n=-1,-2\)) we propose the geometry in Fig. 2(c). The unit cell is modified with two additional meanders resulting in improvement in scanning range and circular polarization. The antenna has been optimized for operation in the circularly polarized K-band using the spatial harmonic of \(n=-2\), and all theoretical formulations for the unit cell dimensions have been performed to ensure optimal performance in this frequency range. ### _Higher order spatial harmonics in the unit cell based on microstrip design_ Fig. 3(c) shows the microstrip based unit cell with single meander with four radiating elements due to four mitred corners. Due to the periodicity (\(p_{0}\)) of the structure, infinite number of space harmonics exist due to Bloch-Floquet theorem [35, 36]. The phase constant of the \(n^{\rm th}\) space harmonic \(\beta_{n}\) satisfies \[\beta_{n}p_{0}=\beta_{0}p_{0}+2n\pi \tag{1}\] where \(n\) ranges from \(-\infty\) to \(+\infty\). Here \(\beta_{0}\) is the zeroth order propagation constant in the microstrip medium. Fig. 1: (a) Configuration of the proposed 10 unit cells periodic frequency scanning antenna with radiation pattern shown in the broadside direction. The meandering microstrip is etched on top layer while bottom layer is copper. (b) Beam scanning operation of the proposed LWA as a function of frequency in the Ku-band and Ka-band. The antenna radiates a fan-beam in the E-plane (X-Z plane) while in the H-plane (Y-Z plane) the antenna changes the direction of maximum radiation with change in frequency. Fig. 2: Evolution towards final design of proposed unit cell: (a) conventional unit cell operating in single band and radiating due to the \(n=-1\) spatial harmonic, (b) dual band operating unit cell radiating due to \(n=-1\) and \(n=-2\) spatial harmonics, and (c) modified unit cell with the introduction of two meanders (\(p<p_{0}\)) to improve the scanning range and the circular polarization performance. Fig. 3(d) shows the corresponding Brillouin diagram for the unit cell. The \(n^{\rm th}\) spatial harmonic inside the light cone (\(|\beta_{n}|<|k_{0}|\)), results in radiation [15]. It is to be noted that multiple spatial harmonics can exist within the light cone resulting in multiple beam radiation [16]. From equation (1), the phase difference across the unit cell for the first two higher harmonics \(n=-1\) and \(n=~{}-2\) (near the light cone) can be described in the following manner with respect to fundamental harmonic: \[\beta_{-1}p_{0}=-\pi\text{ to }\pi\to\beta_{0}p_{0}=\pi\text{ to }3\pi \tag{2a}\] \[\beta_{-2}p_{0}=-\pi\text{ to }\pi\to\beta_{0}p_{0}=3\pi\text{ to }5\pi \tag{2b}\] Likewise, the direction of maximum radiation \(\theta_{n}\) corresponding to \(n^{\rm th}\) spatial harmonic is given by [16, 36, 37] \[\theta_{n}=\sin^{-1}(\beta_{-n}p_{0}/k_{0}p_{0}) \tag{3}\] where \(k_{0}\) is the free space wave number. According to equation (3), when the phase difference across the unit cell (\(\beta_{-n}p_{0}\)) is zero, the direction of main beam of the \(n^{\rm th}\) spatial harmonic is in the broadside direction. 
Hence, at the frequencies of broadside radiation for the spatial harmonics \(n=-1\) and \(n=-2\), the phase difference across the unit cell from equation (1) is: \[\beta_{-1}p_{0|f=f_{\rm a}} =0\to\beta_{0}p_{0|f=f_{\rm a}}=2\pi \tag{4a}\] \[\beta_{-2}p_{0|f=f_{\rm b}} =0\to\beta_{0}p_{0|f=f_{\rm b}}=4\pi\,, \tag{4b}\] where \(f_{\rm a}\) and \(f_{\rm b}\) are the frequencies corresponding to broadside radiation for \(n=-1\) and \(n=-2\), respectively. Fig. 3(a-b) depicts the variation of the electric field magnitude at the two frequencies, confirming the above two equations. To summarize, the first, Ku-band, radiation is due to the spatial harmonic \(n=-1\), while the second, K-band, radiation is due to the spatial harmonic \(n=-2\). Eq. (4b) implies that at \(f=f_{\rm b}\) the total phase difference (\(\phi_{\rm p_{0}}\)) across the unit cell is equal to \(4\pi\). Therefore, the condition for broadside radiation of the second harmonic at the design frequency \(f=f_{\rm b}\) is \[4\pi=\phi_{\rm OA|f=f_{\rm b}}+ \phi_{\rm AA_{1}|f=f_{\rm b}}+\phi_{\rm A_{1}B|f=f_{\rm b}}+ \tag{5}\] \[\phi_{\rm BB_{1}|f=f_{\rm b}}+\phi_{\rm B_{1}M|f=f_{\rm b}}\] Here, \(\phi_{ij}\) is the phase difference across the line segment between the points \(i\) and \(j\), where \(i,j\in\{\rm A,A_{1},B,B_{1},O,M\}\). As indicated earlier, the reason for designing the antenna for the second harmonic is to obtain dual-band operation due to the spatial harmonics \(n=-1\) and \(n=-2\). ### _Analysis for circular polarization in the radiation zone for spatial harmonic \(n=-2\)_ To simplify the analysis of the circular polarization, the axes are rotated by the mitre angle of the corners, as shown in Fig. 3(c). Hence, there are two transverse components (\(E_{\rm X^{\prime}}\) and \(E_{\rm Y^{\prime}}\)) radiating from the mitred corners. Fig. 3: \(E_{\rm abs}\) plots at phase \(0^{\circ}\) for the frequency at which broadside radiation occurs for (a) \(n=-1\) and (b) \(n=-2\). The sinusoidal graphs above the \(E_{\rm abs}\) plots represent the electric field variation along the microstrip line throughout the unit cell. (c) Unit cell with a single large meander; the four radiating magnetic current sources are shown at points A, A\({}_{1}\), B and B\({}_{1}\). (d) Brillouin diagram for the unit cell with the single meander. (e) Unit cell design with two additional meanders, which results in 12 radiation sources. (f) Brillouin diagram for the improved unit cell with two additional meanders. The ratio of the two transverse fields, \(E_{\rm X^{\prime}}\) and \(E_{\rm Y^{\prime}}\), is [14]: \[\frac{E_{\mathrm{Y^{\prime}}}}{E_{\mathrm{X^{\prime}}}}=\mathrm{e}^{-j(\phi_{\mathrm{AA_{1}}}+\phi_{\mathrm{A_{1}}\mathrm{B}})}=\mathrm{e}^{-j\beta_{0}(l_{\mathrm{AA_{1}}}+l_{\mathrm{A_{1}}\mathrm{B}})} \tag{6}\] Additionally, \(l_{ij}\) represents the length of the line segment between the points \(i\) and \(j\) indicated earlier. To obtain accurate lengths of the line intervals, the model from [9] is chosen for the analysis. To have circular polarization, the two transverse components of the electric field (\(E_{\mathrm{X^{\prime}}}\) and \(E_{\mathrm{Y^{\prime}}}\)) need to have a phase difference equal to an odd multiple of \(\pi/2\)[2]. Consequently, the condition to obtain circular polarization for a travelling-wave meandering microstrip antenna is stated in [14] as 
\[\phi_{\mathrm{AA_{1}}}+\phi_{\mathrm{A_{1}}\mathrm{B}}=\beta_{0}(l_{\mathrm{AA_{1}}}+l_{\mathrm{A_{1}}\mathrm{B}})=(2k+1)\pi/2 \tag{7}\] where \(k=1,2,\ldots\) Since the aim is to operate in the \(n=-2\) spatial harmonic, it follows from equations (2b) and (4b) that \(k=1\) should be selected; hence \[\phi_{\mathrm{AA_{1}}}+\phi_{\mathrm{A_{1}}\mathrm{B}}=3\pi/2 \tag{8}\] Selecting \(k\geq 2\) would lead to a similar result, with equation (6) still being satisfied; however, it would increase the overall period of the unit cell, bringing undesirable harmonics into the radiation zone. Equation (8) imposes a criterion on the phase difference, and hence on the length of the microstrip interval between the two corners \(\mathrm{A}\) and \(\mathrm{B}\), for achieving circular polarization. For the initial dimensions, at the design frequency \(f=f_{\mathrm{b}}\), \(l_{\mathrm{AA_{1}}}\) is taken such that \(\phi_{\mathrm{AA_{1}}}=\pi\). This implies that \(\phi_{\mathrm{A_{1}}\mathrm{B}}=\pi/2\). From Fig. 3(c), \(l_{\mathrm{AA_{1}}}=l_{\mathrm{BB_{1}}}\), which implies \(\phi_{\mathrm{AA_{1}}}=\phi_{\mathrm{BB_{1}}}\). Also, the unit cell is considered symmetric, hence \(l_{\mathrm{OA}}=l_{\mathrm{B_{1}}\mathrm{M}}\). The phase difference across the large meander (consisting of \(\mathrm{AA_{1}}\), \(\mathrm{A_{1}B}\) and \(\mathrm{BB_{1}}\)) is defined as \(\phi_{\mathrm{LargeSeg}}\). Therefore, from equation (8), in order to have the best circular-polarization performance at the desired design frequency \(f=f_{\mathrm{b}}\), \[\begin{split}\phi_{\mathrm{LargeSeg}|f=f_{\mathrm{b}}}& =\phi_{\mathrm{AA_{1}}|f=f_{\mathrm{b}}}+\phi_{\mathrm{A_{1}} \mathrm{B}|f=f_{\mathrm{b}}}+\phi_{\mathrm{BB_{1}}|f=f_{\mathrm{b}}}\\ &=5\pi/2\end{split} \tag{9}\] Consequently, there are two design constraints on the lengths of the microstrip line intervals: 1) equation (5), to construct a unit cell that has broadside radiation at \(f=f_{\mathrm{b}}\), and 2) equation (9), to construct a unit cell with the best circular-polarization performance at \(f=f_{\mathrm{b}}\). Table I lists the dimensions of the unit cell based on the theory discussed, for \(f_{\mathrm{b}}=23\) GHz. The dimensions are given in terms of \(\lambda_{\mathrm{b}}\), the wavelength in the microstrip medium corresponding to the frequency \(f_{\mathrm{b}}\). The dielectric is chosen as Rogers 3003 (\(\varepsilon_{\mathrm{r}}=3.0\)), while the height of the substrate is \(\mathrm{h_{sub}}=0.254\) mm. The width of the microstrip \(\mathrm{t_{50}}\) is equal to 0.5 mm. It is important to note that in the microstrip medium the effective permittivity \(\varepsilon_{\mathrm{r,eff}}(f=0)\) is \(2.375\), calculated from [38]. To account for the dispersive nature of microstrip media, the model proposed by Pramanick and Bhartia in [39] is used to calculate \(\varepsilon_{\mathrm{r,eff}}(f=f_{\mathrm{b}})\). These parameters are taken into consideration when calculating \(\lambda_{\mathrm{b}}\) in the microstrip medium. Ten such unit cells with a single meander are cascaded next to each other to form an LWA, and full-wave simulations are performed in CST and ANSYS HFSS. Fig. 4 shows the radiation pattern, indicating the beam scanning with frequency and the circularly polarized nature of the antenna. As can be seen, at the lower end of the frequency range (\(f=18.6\) GHz) there is another beam, due to the \(n=-1\) spatial harmonic, which limits the scanning. The corresponding axial ratio obtained through simulation at the angle of maximum gain is shown in Fig. 5(a). 
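To make the phase-to-length step concrete, the sketch below converts the phase targets of equations (5), (8) and (9) into physical microstrip lengths at \(f_{\rm b}=23\) GHz. It is only an illustration: the dispersive effective permittivity at \(f_{\rm b}\) is treated as an assumed input rather than computed with the Pramanick-Bhartia model used in the paper, so the resulting lengths are not the Table I values.

```python
# Sketch (not the authors' code): microstrip lengths for given phase targets
# at f_b = 23 GHz. eps_eff_fb is an ASSUMED placeholder, not a paper value.
import math

c0 = 299_792_458.0      # speed of light, m/s
f_b = 23e9              # design frequency, Hz
eps_eff_fb = 2.5        # ASSUMED dispersive effective permittivity at f_b

lambda_b = c0 / (f_b * math.sqrt(eps_eff_fb))  # guided wavelength in the microstrip medium
beta0 = 2 * math.pi / lambda_b                 # phase constant at f_b

def length_for_phase(phi_rad):
    """Microstrip length that accumulates the phase phi_rad at f_b."""
    return phi_rad / beta0

# Phase targets from the text: phi_AA1 = pi and phi_A1B = pi/2 (initial choice),
# the large meander carries 5*pi/2 (eq. 9), and the whole cell carries 4*pi (eq. 5).
targets = {
    "l_AA1 (phi = pi)":       math.pi,
    "l_A1B (phi = pi/2)":     math.pi / 2,
    "large meander (5*pi/2)": 5 * math.pi / 2,
    "full cell (4*pi)":       4 * math.pi,
}
for name, phi in targets.items():
    L = length_for_phase(phi)
    print(f"{name}: {L * 1e3:5.2f} mm  ({L / lambda_b:.3f} lambda_b)")
```

The same conversion, with the dispersive permittivity actually evaluated at \(f_{\rm b}\), is what turns the phase constraints of this section into the physical dimensions of Table I.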
The antenna has good circular-polarization performance near the design frequency \(f_{\rm b}=23\) GHz. The frequency range where the axial ratio is lower than 3 dB extends from 20 GHz to 25.2 GHz, with the beam scanning from \(-26^{\circ}\) to \(+10^{\circ}\). ### _Improvement in the scanning range by reducing the period of the unit cell_ In the previous subsection, the initial dimensions were taken such that \(\phi_{\rm{AA}_{1}}=\pi\) and \(\phi_{\rm{A}_{1}B}=\pi/2\) at \(f=f_{\rm b}\). The scanning range and the separation of the harmonics can be increased by reducing the period of the unit cell. Fig. 4: Normalized RHCP and LHCP gain in the H-plane obtained from simulation for the LWA formed by connecting 10 unit cells with a single meander. The main lobe is due to the \(n=-2\) spatial harmonic. The realized gain of the main lobe varies from 8 dB to 14 dB across the frequency range. If the vertical length \(l_{\rm{AA}_{1}}\) is increased such that \(\phi_{\rm{AA}_{1}}=9\pi/8\), then, according to (8), \(\phi_{\rm{A}_{1}B}=\pi/4\) is required at the design frequency \(f_{\rm{b}}\) to maintain the circular-polarization performance. This reduces the period of the unit cell, since the horizontal interval (\(l_{\rm{A}_{1}B}\)) of the unit cell is shortened, leading to better separation of the harmonics. The rest of the dimensions remain the same as discussed in Table I. Ten such unit cells are connected in series to form an LWA and full-wave simulations are performed in CST and Ansys HFSS. However, as can be seen from Fig. 5(b), there is a negative impact on the circular-polarization performance in the operating frequency band. In both of the discussed cases with only a single meander in the unit cell (see Fig. 5(a, b)), the antenna shows good circular-polarization performance at the design frequency \(f=f_{\rm{b}}\) and in the frequency range close to it. However, the performance deteriorates quickly towards the lower and higher ends of the band, so the beam-scanning range is rather limited. It can be concluded that a unit cell with only a single large meander cannot provide a large beam-scanning range together with good circular-polarization performance. ## III Improvement in the scanning range and circular polarization by additional meanders To enhance the beam-steering range, improve the circular-polarization performance across the operational frequency band, and gain better control over the scanning rate (\(\Delta\theta/\Delta f\)), two additional smaller meanders are introduced on either side of the larger meander at equal distances. The following subsections describe the characteristics of such a unit cell and the steps to design it. ### _Constructing the unit cell with three meanders for dual-band operation_ Fig. 3(e) depicts the design of the improved unit cell. The first additional meander consists of three microstrip line intervals \(\rm{CC}_{1}\), \(\rm{C}_{1}D\) and \(\rm{DD}_{1}\), while the third additional meander consists of the line intervals \(\rm{PP}_{1}\), \(\rm{P}_{1}Q\) and \(\rm{QQ}_{1}\). The lengths of these six microstrip line intervals are all equal. The phase difference across each of these line intervals is defined as \(\phi_{\rm{sect}}\). Additionally, the phase differences due to the first and the third meander are denoted by \(\phi_{\rm{TrSeg1}}\) and \(\phi_{\rm{TrSeg3}}\), respectively. 
The unit cell is designed to be symmetric, therefore the phase difference across the line intervals is \(\phi_{\rm{D}_{1}A}=\phi_{\rm{B}_{1}P}\), \(\phi_{\rm{OC}}=\phi_{\rm{Q}_{1}M}\), and \(\phi_{\rm{TrSeg1}}=\phi_{\rm{TrSeg3}}\). Furthermore, the phase difference across the \(1^{\rm{st}}\) meander and the \(3^{\rm{rd}}\) meander can be written as \[\phi_{\rm{TrSeg1}}=\phi_{\rm{CC}_{1}}+\phi_{\rm{C}_{1}D}+\phi_{ \rm{DD}_{1}}=3\times\phi_{\rm{sect}} \tag{10a}\] \[\phi_{\rm{TrSeg3}}=\phi_{\rm{PP}_{1}}+\phi_{\rm{P}_{1}Q}+\phi_{ \rm{QQ}_{1}}=3\times\phi_{\rm{sect}} \tag{10b}\] In order to have broadside radiation at frequency of \(f=f_{\rm{b}}\), equation (4b) (for this unit cell with period \(p\), the equation is \(\beta_{0}p=4\pi\)) must be satisfied. Consequently, from the Fig. 3(e), the design equation for the improved unit cell can be written as: \[\begin{split} 4\pi=&\phi_{\rm{OC}|f=f_{\rm{b}}}+\phi_{ \rm{TrSeg1}|f=f_{\rm{b}}}+\phi_{\rm{D}_{1}\rm{A}|f=f_{\rm{b}}}+\\ &\phi_{\rm{LargeSeg}|f=f_{\rm{b}}}+\phi_{\rm{B}_{1}\rm{P}|f=f_{ \rm{b}}}+\\ &\phi_{\rm{TrSeg3}|f=f_{\rm{b}}}+\phi_{\rm{Q}_{1}\rm{M}|f=f_{ \rm{b}}}\end{split} \tag{11}\] Here, \(\phi_{ij}\) represents the phase difference between the points \(i\) and \(j\) where \(i,j\in\{\rm{C},\rm{C}_{1},\rm{D},\rm{D}_{1},\rm{A},\rm{A}_{1},\rm{B},\rm{B}_{1 },\rm{P},\rm{P}_{1},\rm{Q}_{1},\rm{O},\rm{M}\}\). To maintain circular polarization, it is necessary that phase difference across larger meander \(\phi_{\rm{LargeSeg}|f=f_{\rm{b}}}\) equals \(5\pi/2\), as stated in equation (9). Therefore, the following design constraint can be derived from equation (11): \[6\times\phi_{\rm{sect}|f=f_{\rm{b}}}+2\times\phi_{\rm{D}_{1}\rm{A}|f=f_{\rm{b }}}+2\times\phi_{\rm{OC}|f=f_{\rm{b}}}=3\pi/2 \tag{12}\] ### _Circular polarization analysis for the improved unit cell_ It was proved in Section II that a single meander unit with four radiating elements can produce circular polarization at the desired design frequency. Hence, for the improved unit cell with 12 radiating currents (see Fig. 3(e)), circular polarization at \(f=f_{\rm{b}}\) can be achieved if the effect due to radiating currents on the first additional meander (at \(\rm{C}\), \(\rm{C}_{1}\), \(\rm{D}\) and \(\rm{D}_{1}\)) and the third additional meander (at \(\rm{P}\), \(\rm{P}_{1}\), \(\rm{Q}\) and \(\rm{Q}_{1}\)) is nullified. This is possible if the direction of current at \(\rm{C}\) is opposite to the current at \(\rm{P}\) resulting in opposing fields at these corners. The magnetic currents at the corners \(\rm{C}\) and \(\rm{P}\) are oriented along the \(\rm{Y}^{\prime}\) axis as shown in Fig. 1(b), but depending on the phase shift introduced by the interconnecting microstrip lines, they can have the same or opposite directions. Hence, the length of the microstrip line intervals in between these two corners can be optimized in such a way that the magnetic currents are in opposite direction at design frequency \(f=f_{\rm{b}}\) Fig. 5: Axial ratio values in the main beam direction versus frequency obtained by full-wave simulation of 10 unit cells connected in series for 3 cases: (a) Single meander with \(\phi_{\rm{AA}_{1}}=\pi\). (b) Single meander with \(\phi_{\rm{AA}_{1}}=9\pi/8\) to reduce the period of the unit cell. (c) Two additional meanders on either side of large meander. 
The direction of the magnetic current at \(\mathrm{C}\) will be opposite to the magnetic current at \(\mathrm{P}\) if the following constraint is met at \(f=f_{\mathrm{b}}\) (since \(\mathrm{e}^{-j3\pi}=-1\)): \[\phi_{\mathrm{OP}}-\phi_{\mathrm{OC}} =3\pi \tag{13a}\] \[\phi_{\mathrm{TrSeg1}}+\phi_{\mathrm{D_{1}A}}+\phi_{\mathrm{LargeSeg}}+\phi_{\mathrm{B_{1}P}} =3\pi \tag{13b}\] From equation (9), it is known that to maintain circular polarization at \(f=f_{\mathrm{b}}\), \(\phi_{\mathrm{LargeSeg}}=5\pi/2\); hence the following design equation is obtained \[3\times\phi_{\mathrm{sect}|f=f_{\mathrm{b}}}+2\times\phi_{\mathrm{D_{1}A}|f=f_{\mathrm{b}}}=\pi/2 \tag{14}\] Equations (9), (12) and (14) are the design constraints on the lengths of the line intervals of the proposed unit cell needed to maintain broadside radiation and circular polarization at \(f=f_{\mathrm{b}}\). From equation (9), it is clear that the dimensions of the larger meander are fixed to maintain circular polarization. The other dimensions are governed by equations (12) and (14). This results in 3 variables (namely \(\phi_{\mathrm{sect}|f=f_{\mathrm{b}}}\), \(\phi_{\mathrm{D_{1}A}|f=f_{\mathrm{b}}}\) and \(\phi_{\mathrm{OC}|f=f_{\mathrm{b}}}\)) and 2 equations. In the current analysis, \(\phi_{\mathrm{sect}|f=f_{\mathrm{b}}}\) is considered an independent variable, making the other two variables (\(\phi_{\mathrm{D_{1}A}|f=f_{\mathrm{b}}}\) and \(\phi_{\mathrm{OC}|f=f_{\mathrm{b}}}\)) dependent. The initial design is chosen such that \(\phi_{\mathrm{sect}|f=f_{\mathrm{b}}}=10^{\circ}\). The other two dimensions can hence be calculated from equations (12) and (14), as illustrated in the short sketch given below. The initial lengths of the microstrip sections following this theory are shown in Table II for \(f_{\mathrm{b}}=23\,\mathrm{GHz}\). Fig. 6 shows the full-wave simulation of the unit cell at \(f=f_{\mathrm{b}}\) designed according to the equations above. As can be seen, the direction of the field at each corner of the \(1^{\mathrm{st}}\) smaller meander is opposite to that at the corresponding corner of the \(3^{\mathrm{rd}}\) smaller meander. Ten such unit cells are sequentially connected, as shown in Fig. 1(a), to form an LWA, which is then simulated in CST. Fig. 5(c) shows the unit cell designed by the theory formulated above (in the inset), together with the simulated axial ratio in the main fan-beam direction at different frequencies over the scanning range of the LWA. A notable improvement in the axial-ratio performance over a larger frequency band can be observed following the inclusion of the two smaller meanders. ### _Performance improvement due to the two additional meanders_ Due to the two additional smaller meanders, the electrical length of the unit cell is increased. This results in the following improvements for frequency scanning: * enhanced scanning range, * better control over the scanning rate, by reducing or increasing the size of the small meanders (\(\phi_{\rm{sect}}\)) and thus the path length within the unit cell, * better separation of the spatial harmonics, resulting in lower sidelobes due to undesired spatial harmonics, * improvement of the circular-polarization performance across the band. 
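The following sketch (not the authors' code) solves the two linear constraints, equations (12) and (14), for \(\phi_{\rm D_{1}A}\) and \(\phi_{\rm OC}\) once \(\phi_{\rm sect}\) is fixed, and checks the result against the broadside condition with \(\phi_{\rm LargeSeg}=5\pi/2\).

```python
# Sketch: solve equations (12) and (14) for phi_D1A and phi_OC given phi_sect,
# as described in the text (phi_sect = 10 deg for the initial design).
import math

def solve_three_meander_phases(phi_sect_deg=10.0):
    phi_sect = math.radians(phi_sect_deg)
    # Eq. (14): 3*phi_sect + 2*phi_D1A = pi/2
    phi_D1A = (math.pi / 2 - 3 * phi_sect) / 2
    # Eq. (12): 6*phi_sect + 2*phi_D1A + 2*phi_OC = 3*pi/2
    phi_OC = (3 * math.pi / 2 - 6 * phi_sect - 2 * phi_D1A) / 2
    return phi_D1A, phi_OC

phi_sect_deg = 10.0
phi_D1A, phi_OC = solve_three_meander_phases(phi_sect_deg)
print(f"phi_D1A = {math.degrees(phi_D1A):.1f} deg, phi_OC = {math.degrees(phi_OC):.1f} deg")

# Consistency check against the broadside condition (11):
# 2*phi_OC + 2*(3*phi_sect) + 2*phi_D1A + phi_LargeSeg must equal 4*pi,
# with phi_LargeSeg = 5*pi/2 from equation (9).
total = 2 * phi_OC + 2 * 3 * math.radians(phi_sect_deg) + 2 * phi_D1A + 5 * math.pi / 2
print(f"total phase across the cell = {math.degrees(total):.1f} deg (expect 720)")
```

For \(\phi_{\rm sect}=10^{\circ}\) this yields \(\phi_{\rm D_{1}A}=30^{\circ}\) and \(\phi_{\rm OC}=75^{\circ}\), and the total phase across the cell is \(720^{\circ}=4\pi\); converting these phases to lengths follows the same step illustrated after Section II.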
By comparing the Brillouin diagrams in Fig. 3(d) and Fig. 3(f) for the unit-cell geometries containing one meander and three meanders, respectively, the first three conclusions can easily be drawn. With the help of the two additional meanders, the rate of dispersion with frequency is controlled within the unit cell. The impact of the proposed geometry on the circular polarization can be observed from Fig. 5. The acceptable range for frequency scanning (the frequency range where the axial ratio is below 3 dB) has increased from 20-25.2 GHz to 19.4-27.5 GHz, increasing the beam steering from (\(-26^{\circ}\) to \(+10^{\circ}\)) to (\(-42^{\circ}\) to \(+30^{\circ}\)). The beam steering (\(-42^{\circ}\) to \(+30^{\circ}\)) is discussed in Section IV. ### _Mitigation of OSB_ Periodic structures such as LWAs exhibit a frequency band near the broadside frequency (\(f_{\mathrm{b}}\) in the present case) in which the travelling wave does not propagate, known as the OSB [2, 15]. To remove the OSB, the Bloch impedance (\(Z_{\mathrm{s}}\)) has to be analysed. Fig. 6: Full-wave simulation of the unit cell constructed using the theoretical formulation. At \(f=f_{\mathrm{b}}\), the electric field at each of the four mitred corners of TrSeg1 cancels with that at the corresponding mitred corner of TrSeg3. The Bloch impedance can be calculated from the S-parameters extracted from a driven-mode full-wave simulation, in terms of the circuit parameters \(\mathrm{A}\), \(\mathrm{B}\), \(\mathrm{C}\) and \(\mathrm{D}\), as shown in [38]: \[Z_{\mathrm{s}}=\frac{-2\times\mathrm{B}}{(\mathrm{A}-\mathrm{D}-\sqrt{(\mathrm{A}+\mathrm{D})^{2}-4})} \tag{15}\] Fig. 7 shows the Bloch impedance extracted for the K-band operation range. When the OSB is present, there is an abrupt increase in the imaginary part of \(Z_{\mathrm{s}}\) at the broadside frequency, \(f_{\mathrm{b}}=23\,\mathrm{GHz}\), which results in poor transmission and high return loss. This leads to reduced gain in the broadside region. To mitigate the problem of the OSB, a technique similar to [13] is employed. For a meandering microstrip with mitred corners, the effect of bending the microstrip line (the corners) can be modelled as a capacitance [40-42]. This is clearly evident from Fig. 7 when the OSB is present. Hence, the angle of the mitred corners is changed, which introduces an additional inductance [41, 42] that reduces the imaginary impedance at \(f=f_{\mathrm{b}}\), as shown in Fig. 8. Fig. 7 shows that the imaginary impedance goes to zero, which mitigates the OSB in the broadside direction. ## IV Dual-Band Leaky-Wave Antenna A 10-unit-cell dual-band LWA, having the layout of Fig. 1(a) and the optimised dimensions listed in Table II, was fabricated. The antenna is based on Rogers 3003 (\(\varepsilon_{r}=3.0\) and \(\tan\delta=0.001\)) with a substrate thickness of 0.254 mm. ### _Feed design_ A Bulgin end-launch connector (2.4 mm) with an output impedance \(Z_{0}=50~{}\Omega\) is used to feed the antenna. The \(Z_{\mathrm{s}}\) of the antenna is around \(60~{}\Omega\) throughout the operating frequency range, as shown in Fig. 7. Hence, a matching section is added to match the impedance, as depicted in Fig. 9. ### _Circularly polarized radiation in K-band_ In this band, the antenna is right-hand circularly polarized and the radiation is due to the \(n=-2\) spatial harmonic. Fig. 8: OSB mitigation by changing the angle of the mitred corner. The values of \(\delta_{0}\) and \(\delta_{1}\) are 0.15 mm and 0.06 mm, respectively. Fig. 7: The Bloch impedance of the LWA when the OSB is present and when the OSB is mitigated by the introduction of additional capacitance. Here the impedance is normalised to \(Z_{0}=50\Omega\). Fig. 9: Feed design to match the output impedance (\(Z_{0}\)) of the \(50\Omega\) connector to the input impedance of the antenna at around \(60\Omega\). The values for the matching section are \(\mathrm{T}_{\mathrm{ms}}=0.57\,\mathrm{mm}\) and \(\mathrm{L}_{\mathrm{ms}}=1.634\,\mathrm{mm}\). The width of the \(50\Omega\) transmission line is \(\mathrm{t}_{\mathrm{Tx}}=0.64\,\mathrm{mm}\). 
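As a side note on the OSB-mitigation step above, the sketch below shows how the Bloch impedance of equation (15) can be obtained from the two-port S-parameters of a single unit cell. It is only an illustration: the S-to-ABCD conversion used is the standard textbook one (e.g. Pozar), and the S-parameter values are placeholders rather than data from this antenna.

```python
# Sketch (not the authors' code): Bloch impedance of eq. (15) from unit-cell
# S-parameters. The numerical S-parameters below are illustrative placeholders.
import cmath

def s_to_abcd(s11, s12, s21, s22, z0=50.0):
    """Convert two-port S-parameters (reference impedance z0) to ABCD parameters."""
    a = ((1 + s11) * (1 - s22) + s12 * s21) / (2 * s21)
    b = z0 * ((1 + s11) * (1 + s22) - s12 * s21) / (2 * s21)
    c = ((1 - s11) * (1 - s22) - s12 * s21) / (2 * s21 * z0)
    d = ((1 - s11) * (1 + s22) + s12 * s21) / (2 * s21)
    return a, b, c, d

def bloch_impedance(s11, s12, s21, s22, z0=50.0):
    """Equation (15): Z_s = -2B / (A - D - sqrt((A + D)^2 - 4))."""
    a, b, _, d = s_to_abcd(s11, s12, s21, s22, z0)
    return -2 * b / (a - d - cmath.sqrt((a + d) ** 2 - 4))

# Placeholder unit-cell S-parameters at a single frequency point:
zb = bloch_impedance(0.10 + 0.05j, 0.95 * cmath.exp(-1j * 2.0),
                     0.95 * cmath.exp(-1j * 2.0), 0.10 + 0.05j)
print(f"Z_Bloch = {zb.real:.1f} {zb.imag:+.1f}j ohm  (|Z|/50 = {abs(zb) / 50:.3f})")
```

Sweeping this extraction over frequency reproduces curves of the kind shown in Fig. 7, and a spike in the imaginary part near \(f_{\rm b}\) signals the OSB that the mitre-angle adjustment is meant to remove.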
The measurements are performed at the millimeter-wave test facility CAMILL at the Institut d'Electronique et de Telecommunications de Rennes (IETR). The simulated and measured radiation patterns in the azimuth plane are shown in Fig. 11(a-d) for the 19.4 GHz to 27.5 GHz band. The measured radiation pattern at \(f=19.4\,\mathrm{GHz}\) shows higher side lobes than the simulations. This is due to reflection from the connector caused by the strongly tilted beam in the H-plane. The 3-dB beamwidth of the antenna is about \(10^{\circ}\) throughout the operational range. The length of the antenna remains compact at 7.67\(\lambda_{0}\). The efficiency of the antenna in this band varies from 0.5 to 0.75. The measured and simulated axial ratios are compared in Fig. 11(e). The realized gain of the reconfigurable fan-beam radiation pattern in the H-plane is also shown in Fig. 11(e). The measured S-parameters show that the reflection coefficient remains below \(-10\,\mathrm{dB}\) over the operational range. ### _Linearly polarized radiation in Ku-band_ The antenna behaves as a linearly polarized antenna in the Ku-band, from 11 GHz to 15.5 GHz, and the radiation occurs due to the spatial harmonic of \(n=-1\). The simulated and measured realized gains are shown in Fig. 12(a)-(d). The antenna scanning range is \(-15^{\circ}\) to \(60^{\circ}\). The 3-dB beamwidth of the antenna is less than \(17^{\circ}\) throughout the operational range. Figure 11: Measurements in the K-band, where the antenna is circularly polarized. The measured and simulated normalized radiation patterns of the fabricated antenna in the H-plane for (a) \(f=19.4\,\mathrm{GHz}\), (b) \(f=21\,\mathrm{GHz}\), (c) \(f=23\,\mathrm{GHz}\), and (d) \(f=26.8\,\mathrm{GHz}\). The antenna scans from \(-42^{\circ}\) to \(30^{\circ}\) in the K-band frequency range. (e) The axial ratio of the fabricated antenna remains below 3 dB throughout the operational frequency range. The measured realized gain of the antenna in the H-plane ranges from 8 dB to 13.8 dB. (f) Measured transmission and return-loss parameters. (g) Measurement setup of the antenna. Figure 12: Measurements in the Ku-band, where the antenna exhibits linear polarization. The measured normalized radiation pattern closely follows the radiation pattern predicted by the simulation for the frequencies (a) \(f=11\,\mathrm{GHz}\), (b) \(f=12\,\mathrm{GHz}\), (c) \(f=14\,\mathrm{GHz}\), and (d) \(f=15.5\,\mathrm{GHz}\). (e) Measured return loss. (f) Realized gain in the Ku-band with different numbers of unit cells, obtained through full-wave simulations. (g) Experimental setup for the measurement in the Ku-band. (h) The additional connectors required to connect the antenna to the ports of the VNA. 
The measured reflection coefficient is below \(-10\) dB in the operating frequency range, as shown in Fig. 12(e). The measurements are performed using the MVG StarLab measurement system at IETR, as shown in Fig. 12(g). The antenna is fixed on the platform using tape to avoid displacement. To avoid elevated side-lobe levels at highly tilted angles, absorbers are added at either end. It has to be noted that the antenna has been designed to operate in the K-band and Ku-band, with special attention to the performance in the K-band. As a consequence, the gain in the Ku-band is limited to 5 dB. However, the realized gain can be improved by adding unit cells to the LWA, as shown in Fig. 12(f). It is also important to notice that the measured realized gain of the antenna in the Ku-band is lower by 1-2 dB than in the simulations, as shown in Fig. 12(f). This is the effect of the connectors required to properly connect the antenna to the ports of the vector network analyzer (VNA), as depicted in Fig. 12(h). ### _Literature comparison_ Table III shows a comparison with antennas operating in similar frequency ranges, along with their fabrication technology. Circularly polarized microstrip-based LWAs operating in the ranges 9.8 GHz to 15 GHz and 20 GHz to 29 GHz are reported in [10, 29]. Both designs require via-holes, thus increasing the fabrication complexity compared to the proposed design. Refs. [31, 45] report microstrip-based LWAs operating in a similar mm-wave range; however, they exhibit linear polarization. Composite right/left-handed (CRLH) structures, such as the ones reported in [3, 5, 46], are extremely sensitive to the dimensions of the slot, making them difficult to fabricate with precision at high frequencies. Meanwhile, the proposed antenna achieves large beam-scanning angles with simple meandering microstrips, while maintaining circular polarization. The design can easily be scaled to lower or higher frequencies without increasing the fabrication complexity. ## V Conclusion This paper presents a novel approach to designing a compact, single-layer, circularly polarized LWA. The approach is based on utilizing the time delay of the microstrip line intervals to achieve circular polarization. The frequency-scanning LWA operates as a circularly polarized antenna in the K-band and as a linearly polarized antenna in the Ku-band. This is made possible by the use of the spatial harmonics \(n=-2\) and \(n=-1\), respectively. To improve the band separation and reduce the side lobes due to unwanted harmonics in the radiation zone, additional meanders have been introduced. The axial ratio remains below 3 dB over a large frequency range (19.4 GHz to 27.5 GHz) in the intended band for circular polarization. This ensures a large scanning range of \(72^{\circ}\) (\(-42^{\circ}\) to \(30^{\circ}\)). A novel technique to remove the OSB is also explained in this work, in order to have continuous scanning through the broadside direction. The proposed antenna does not make use of metallized vias, which significantly reduces the fabrication complexity while maintaining a compact size (\(7.67\lambda_{0}\)). The antenna is manufactured on a flexible Rogers 3003 substrate, making it suitable for flexible and conformal applications at mm-wave frequencies.
2307.14916
Contribution of hadronic light-by-light scattering to the hyperfine structure of muonium
The contribution of hadronic scattering of light-by-light to the hyperfine structure of muonium is calculated using experimental data on the transition form factors of two photons into a hadron. The amplitudes of interaction between a muon and an electron with horizontal and vertical exchange are constructed. The contributions due to the exchange of pseudoscalar, axial vector, scalar and tensor mesons are taken into account.
V. I. Korobov, A. V. Eskin, A. P. Martynenko, F. A. Martynenko
2023-07-27T14:58:14Z
http://arxiv.org/abs/2307.14916v2
# Contribution of hadronic light-by-light scattering to the hyperfine structure of muonium ###### Abstract The contribution of hadronic scattering of light-by-light to the hyperfine structure of muonium is calculated using experimental data on the transition form factors of two photons into a hadron. The amplitudes of interaction between a muon and an electron with horizontal and vertical exchange are constructed. The contributions due to the exchange of pseudoscalar, axial vector, scalar and tensor mesons are taken into account. Muonium hyperfine splitting, one meson exchange interaction, quantum electrodynamics pacs: 36.10.Dr, 12.20.Ds, 14.40.Aq, 12.40.Vv ## I Introduction Exotic atoms such as muonium, positronium, the positronium ion and muonic hydrogen play a very important role in modern physics. Precise study of their energy levels and decay widths is a direction of fundamental research within which one can look for manifestations of new particle interactions. Although such systems are short-lived by the standards of conventional systems, their creation and experimental study allow one to look into a field of research that is inaccessible when working with stable atoms and molecules. We can say that the study of exotic systems, along with collider physics, is a tool for understanding reality beyond the Standard Model. Electromagnetic two-particle bound states make it possible to test one of the most successful theories of particle interaction: quantum electrodynamics. Theoretical calculations of the energy levels of the simplest bound states in quantum electrodynamics have reached a very high accuracy [1; 2; 3; 4]. Since the accuracy of the experimental study of energy levels has steadily increased in recent decades, it has become necessary to study not only high-order electromagnetic contributions but also the contributions of the weak and strong interactions in such systems. For example, the contribution of hadronic vacuum polarization has already reached the level of experimental verification for the anomalous magnetic moment (AMM) of the muon, the hyperfine splitting in muonium, and the Lamb shift and hyperfine structure of muonic hydrogen. The most acute situation with the calculation of hadronic contributions has developed for the muon AMM [5; 6; 7]. But for the other two problems the hadronic contributions also become significant, given the increasing precision of the experiments. The study of the fine and hyperfine structure (HFS) of muonium has been central to tests of quantum electrodynamics for decades, since in this purely leptonic system there are no nuclear-structure effects, which have always been the main theoretical uncertainty [1; 2; 3]. In recent years, new and more accurate experimental studies related to muonium have begun. The Mu-MASS (MuoniuM lAser SpectroScopy) collaboration aims to measure the \(1S-2S\) transition in muonium with a final uncertainty of 10 kHz, providing a 1000-fold improvement in accuracy [8]. A new measurement of the n=2 Lamb shift in muonium represents an order-of-magnitude improvement upon the previous best measurement [9]. The MuSEUM (Muonium Spectroscopy Experiment Using Microwave) collaboration performed a new precision measurement of the muonium ground-state hyperfine structure at J-PARC using a high-intensity pulsed muon beam [10]. 
The accuracy of the experimental result in [10] is 4 kHz, which is still below the accuracy of the previous experiment of 1999 [11]. Experiments with muonium can be used for a more precise determination of the mass ratio \(m_{\mu}/m_{e}\), for testing the Standard Model with greater accuracy and, possibly, for revealing the source of previously unaccounted-for interactions between the particles forming the bound state in QED. According to [12], theory predicts \(\nu_{HFS}=4463302872(515)~{}Hz\), \(\delta=1.3\times 10^{-7}\), where most of the uncertainty (511 Hz) is dominated by the measurement of the ratio \(m_{\mu}/m_{e}\) (120 ppb). Therefore, from a comparison of the theoretical and new experimental results for the muonium HFS, one can obtain a more accurate value of the mass ratio \(m_{\mu}/m_{e}\). The MuSEUM collaboration aims to measure the ground-state hyperfine splitting of muonium with an accuracy of 1 ppb [13]. Such a high experimental accuracy of measuring the hyperfine structure of muonium, at the level of 1 Hz, requires corresponding theoretical calculations of various corrections of high order in the fine structure constant. Such calculations have been carried out over the years by various groups. In this paper, we study only one of the contributions to the hyperfine structure, connected with the effect of light-by-light scattering, which leads to the production of various mesons in the intermediate state. In the quark model, such processes are determined by the production of a light quark-antiquark pair in the \(\gamma^{*}\gamma^{*}\) interaction, which can then form a light meson. The corresponding interaction amplitudes are shown in Fig. 1. They can be divided into two parts, which we call vertical and horizontal exchanges. In our previous work [14], we investigated the contribution connected with the horizontal exchange of pseudoscalar mesons. A more complete study of these processes was carried out in Ref. [15], in which, along with horizontal exchanges, the contribution of vertical exchanges, including axial vector mesons, was also investigated. In our recent papers, we calculated the hadronic light-by-light contributions to the fine and hyperfine structure of muonic hydrogen [16; 17; 18; 19; 20] (see also Refs. [21; 22; 23; 24]) and showed that such processes must be taken into account when obtaining the total value of a specific energy interval, given the ever-increasing accuracy of the experiments carried out by the CREMA collaboration [25; 26; 27; 28] and of other planned experiments [29; 30; 31]. The purpose of this work is to calculate all possible meson contributions (pseudoscalar, scalar, axial vector and tensor) to the hyperfine splitting (HFS) in muonium and to estimate the possible total contribution from such interactions. The factor determining the order of the contribution, \(m_{e}^{3}\alpha^{7}/(\Lambda^{2}h)\sim 0.04\) Hz, where \(\Lambda\) is a typical hadronic mass near 1 GeV, is not very large owing to recoil effects and the nature of the hadronic interaction itself. Nevertheless, the study of such contributions to the hyperfine structure is of interest in connection with the increasing accuracy of the measurements. Thus, for example, in the case of muonic hydrogen, the hadronic effects of light-by-light scattering turn out to be rather significant both in the Lamb shift and in the hyperfine splitting [16; 17; 18; 19; 20]. 
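As a quick numerical check of the order-of-magnitude estimate quoted above, the short sketch below evaluates \(m_{e}^{3}\alpha^{7}/(\Lambda^{2}h)\) for \(\Lambda=1\) GeV. It is only an arithmetic illustration, not part of the calculation that follows.

```python
# Sketch (not from the paper): order-of-magnitude check of the scale
# m_e^3 * alpha^7 / (Lambda^2 * h) quoted in the Introduction, with Lambda = 1 GeV.
# Natural units (hbar = c = 1); the energy in GeV is converted to Hz via E = h*nu.
m_e = 0.000510998950    # electron mass, GeV
alpha = 1.0 / 137.035999
Lambda = 1.0            # typical hadronic mass scale, GeV
GeV_to_Hz = 1.0 / 4.135667696e-24   # 1 GeV divided by Planck's constant, in Hz

scale = m_e**3 * alpha**7 / Lambda**2   # energy scale in GeV
print(f"scale = {scale:.2e} GeV = {scale * GeV_to_Hz:.3f} Hz")
# prints roughly 0.04 Hz, matching the estimate in the text
```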
## II Contribution of axial vector mesons We begin the discussion of the contributions of axial vector mesons with the vertical exchange amplitude in Fig. 1(c). The diagram contains a vertex for the transition of two virtual photons into an axial vector meson, for which the following parametrization is used [16; 32]: \[T^{\mu\nu}(k_{1},k_{2})=4\pi i\alpha\varepsilon_{\mu\nu\alpha\beta}(k_{1}^{\alpha}k_{2}^{2}-k_{2}^{\alpha}k_{1}^{2})\varepsilon_{A}^{\beta}A(t^{2},k_{1}^{2},k_{2}^{2}), \tag{1}\] where \(k_{1}=k\), \(k_{2}=t-k\) are the four-momenta of the virtual photons, \(t=p_{1}-q_{1}=(0,\mathbf{t})\) is the four-momentum of the meson, \(p_{1}\), \(p_{2}\) are the four-momenta of the electron and the muon in the initial state, \(q_{1}\), \(q_{2}\) are the four-momenta of the electron and the muon in the final state, and \(M_{A}\) is the mass of the axial vector meson. Note that the decay of an axial vector meson into two real photons is forbidden by the Landau-Yang theorem, but the process with one virtual photon can take place. To pick out the electron-muon states with a definite spin, we use projection operators constructed from the wave functions of the particles in their rest frame: \[\hat{\Pi}_{S=0}=[u(0)\bar{v}(0)]_{S=0}=\frac{(1+\gamma^{0})}{2\sqrt{2}}\gamma_{5},\quad\hat{\Pi}_{S=1}=[u(0)\bar{v}(0)]_{S=1}=\frac{(1+\gamma^{0})}{2\sqrt{2}}\hat{\varepsilon}. \tag{2}\] As a result, the general expression for the interaction amplitude in Fig. 1(c) can be transformed into the following trace: \[i\mathcal{M}^{c}=\frac{\alpha^{2}(Z\alpha)^{2}}{16m_{1}^{2}m_{2}^{2}}\int\frac{d^{4}k}{\pi^{2}}\frac{A(t^{2},k_{1}^{2},k_{2}^{2})}{(k^{2})^{2}}\int\frac{d^{4}r}{\pi^{2}}\frac{A(t^{2},r_{1}^{2},r_{2}^{2})}{(r^{2})^{2}}\frac{\varepsilon_{\mu\nu\alpha\beta}(k_{1}^{\alpha}k_{2}^{2}-k_{2}^{\alpha}k_{1}^{2})}{(k^{2}-2k_{0}m_{1})}\times \tag{3}\] \[\frac{\varepsilon_{\sigma\lambda\rho\omega}(r_{1}^{\rho}r_{2}^{2}-r_{2}^{\rho}r_{1}^{2})}{(r^{2}-2r_{0}m_{2})}D^{\beta\omega}(t)Tr\Big{[}(\hat{q}_{1}+m_{1})\gamma^{\nu}(\hat{p}_{1}-\hat{k}+m_{1})\gamma^{\mu}(\hat{p}_{1}+m_{1})\hat{\Pi}_{S=1,0}(\hat{p}_{2}-m_{2})\times\] \[\gamma^{\sigma}(\hat{r}_{1}-\hat{p}_{2}+m_{2})\gamma^{\lambda}(\hat{q}_{2}-m_{2})\hat{\Pi}_{S=1,0}\Big{]},\] where \(m_{1}\), \(m_{2}\) are the masses of the electron and the muon, respectively, \(k_{1}=k\), \(k_{2}=t-k\) are the four-momenta of the virtual photons in one loop, and \(r_{1}=r\), \(r_{2}=t-r\) are the four-momenta of the virtual photons in the other loop. \(D^{\beta\omega}(t)\) is the propagator of the axial-vector meson. Figure 1: Hadronic light-by-light scattering amplitudes with horizontal and vertical exchanges. The wavy lines correspond to virtual photons. The bold dot denotes the form factor of the transition of two photons into a meson. After taking the trace in leading order in \(\alpha\) and a number of simplifications, the numerator of the amplitude relevant for the hyperfine splitting can be represented as: \[N^{c}_{AV}=\frac{1}{3}k^{2}r^{2}\mathbf{k}^{2}\mathbf{r}^{2}, \tag{4}\] where the index (c) denotes the contribution of the amplitude in Fig. 1(c). For the purpose of further integration over the loop momenta, we pass to Euclidean space: \[k^{2}\rightarrow-k^{2},\quad r^{2}\rightarrow-r^{2},\quad k^{2}_{0}\to -k^{2}_{0}=-k^{2}\cos^{2}\psi_{1},\quad r^{2}_{0}\rightarrow-r^{2}_{0}=-r^{2}\cos^{2}\psi_{2}. 
\tag{5}\] As a result of all transformations, two integrals over k and r are factorized, and the contribution to the interaction operator in momentum space can be represented as follows: \[\Delta V^{c}=-\frac{64}{9}\frac{\alpha^{2}(Z\alpha)^{2}}{\mathbf{t}^{2}+M_{A} ^{2}}\int\frac{d^{4}k}{\pi^{2}}A(t^{2},k^{2},k^{2})\frac{(2k^{2}+k^{2}_{0})}{k ^{2}(k^{2}-2m_{1}k_{0})}\int\frac{d^{4}r}{\pi^{2}}A(t^{2},r^{2},r^{2})\frac{(2 r^{2}+r^{2}_{0})}{r^{2}(r^{2}-2m_{2}r_{0})} \tag{6}\] To calculate each of the integrals, it is necessary to know the form of the transition form factor \(A(t^{2},k^{2},k^{2})\) of \(1^{++}\) meson to two photons, which is one of the main structural elements of the formula (6). At present we have only few experimental data on it [33; 34; 35]. The L3 Collaboration studied the reaction \(e^{+}e^{-}\to e^{+}e^{-}\gamma^{*}\gamma^{*}\to e^{+}e^{-}f_{1}(1285) \to e^{+}e^{-}\eta\pi^{+}\pi^{-}\) in [33] and measured the \(f_{1}(1285)\) transition form factor for the case when one of the photons is real and another one is virtual. In [34] the production of \(f_{1}(1420)\) was investigated by the L3 Collaboration in the reaction \(\gamma^{*}\gamma^{*}\to K^{0}_{S}K^{\pm}\pi^{\mp}\). By using these data, we can parameterize the transition form factor for the case of two photons with equal virtualities as follows: \[A(M_{A}^{2},k^{2},k^{2})=A(M_{A}^{2},0,0)F_{AV}(k^{2}),\quad F_{AV}(k^{2})= \frac{\Lambda_{A}^{4}}{(\Lambda_{A}^{2}-k^{2})^{2}}. \tag{7}\] The effects of off-shellness for exchange by massive \(f_{1}\) mesons might be important. This effect was investigated in [36; 37], and in [38] a simple parametrization was proposed. The simplest way to take it into account is the introduction of the exponential suppression factor [38]: \[\frac{A(t^{2},0,0)}{A(M_{A}^{2},0,0)}\approx e^{(t^{2}-M_{A}^{2})/M_{A}^{2}}, \tag{8}\] which gives the factor \(\sim e^{-1}\) for \(t^{2}\approx 0\). The values of the form factors in (7) for the case of \(f_{1}(1285)\) and \(f_{1}(1420)\) can be fixed from L3 data [16]: \[A_{f_{1}(1285)\gamma^{*}\gamma^{*}}\left(M_{f_{1}(1285)}^{2},0,0 \right) = (0.266\pm 0.043)\ \text{GeV}^{-2},\] \[A_{f_{1}(1260)\gamma^{*}\gamma^{*}}\left(M_{f_{1}(1260)}^{2},0,0 \right) = (0.160\pm 0.120)\ \text{GeV}^{-2},\] \[A_{f_{1}(1420)\gamma^{*}\gamma^{*}}\left(M_{f_{1}(1420)}^{2},0,0 \right) = (0.193\pm 0.041)\ \text{GeV}^{-2}. \tag{9}\] Using the dipole parameterization from (7) we can calculate sequentially analytically the integrals over all variables in the Euclidean space: \[I_{e}=\int d^{4}k\frac{(2k^{2}+k^{2}_{0})}{k^{2}(k^{2}-2k_{0}m_{1})}\frac{ \Lambda^{4}}{(k^{2}-\Lambda^{2})^{2}}=-\int\limits_{0}^{\infty}dk^{2}L_{e}(k^{ 2})A(0,k^{2},k^{2})= \tag{10}\] \[-\frac{\pi^{2}\Lambda_{A}^{2}}{4(1-a_{e}^{2})^{5/2}}\left[3\sqrt{1-a_{e}^{2}}-a_{e }^{2}(5-2a_{e}^{2})\ln\frac{1+\sqrt{1-a_{e}^{2}}}{a_{e}}\right],\] \[L_{e}(k^{2})=\frac{\pi^{2}}{8m_{1}^{4}}\left[k^{2}(k^{2}-6m_{1}^{2})-(k^{2}-8m_{ 1}^{2})\sqrt{k^{2}(k^{2}+4m_{1}^{2})}\right],\quad a_{e}=\frac{2m_{1}}{\Lambda}. \tag{11}\] The integral for the muon loop \(I_{\mu}\) is obtained by replacing \(m_{1}\to m_{2}\). Thus, the final contribution to the muonium HFS can be represented by the following analytical formula: \[\Delta E_{c}^{hfs}(1S)=-\frac{64\alpha^{2}(Z\alpha)^{5}\mu^{3}A(0,0,0)^{2}}{9 \pi M_{A}^{2}\left(1+\frac{2W}{M_{A}}\right)^{2}}I_{e}I_{\mu}. \tag{12}\] For numerical estimates of this contribution, we take three axial vector mesons with masses 1285 MeV, 1260 MeV and 1420 MeV. 
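For orientation, the following sketch (not the authors' code) evaluates the closed-form loop integrals of equations (10)-(11) for the electron and the muon, using the \(f_{1}(1285)\) cutoff \(\Lambda_{A}=1.04\) GeV quoted in Table 1. These are only the loop factors entering equation (12); the remaining quantities (the coupling \(A\), the off-shell factor (8), the meson mass and the reduced mass) are needed to obtain the Table 1 entries.

```python
# Sketch (not from the paper's code): closed-form loop integrals I_e and I_mu
# of equations (10)-(11), a = 2m/Lambda, for the f_1(1285) cutoff.
import math

def loop_integral(m_lepton, Lam):
    """I(a) = -pi^2 Lam^2 / (4 (1-a^2)^{5/2}) * [3 sqrt(1-a^2) - a^2 (5-2a^2) ln((1+sqrt(1-a^2))/a)]."""
    a = 2.0 * m_lepton / Lam
    s = math.sqrt(1.0 - a * a)
    bracket = 3.0 * s - a * a * (5.0 - 2.0 * a * a) * math.log((1.0 + s) / a)
    return -math.pi**2 * Lam**2 / (4.0 * (1.0 - a * a) ** 2.5) * bracket

m_e, m_mu, Lam_A = 0.000511, 0.105658, 1.040   # GeV
I_e = loop_integral(m_e, Lam_A)
I_mu = loop_integral(m_mu, Lam_A)
print(f"I_e  = {I_e:.3f} GeV^2")
print(f"I_mu = {I_mu:.3f} GeV^2")
```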
The total numerical value of the contribution is presented in Table 1. We write out in the Table 1 the numerical values of the individual \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Meson & Mass & \(I^{G}(J^{PC})\) & \(\Lambda\) & \(A(M^{2},0,0)\) & \(\Delta E^{hfs}(1S)\) \\ & in MeV & & in MeV & & in Hz \\ \hline \(f_{1}\) & 1281.9 & \(0^{+}(1^{++})\) & 1040 & 0.266 & -0.00028 \\ & & & & \(GeV^{-2}\) & -0.00311 \\ \hline \(a_{1}\) & 1260 & \(1^{-}(1^{++})\) & 1040 & 0.160 & -0.00011 \\ & & & & \(GeV^{-2}\) & -0.00115 \\ \hline \(f_{1}\) & 1426.3 & \(0^{+}(1^{++})\) & 926 & 0.193 & -0.00007 \\ & & & & \(GeV^{-2}\) & -0.00096 \\ \hline \(\sigma\) & 550 & \(0^{+}(0^{++})\) & 2000 & -0.596 & 0 \\ & & & & \(GeV^{-1}\) & 0.02701 \\ \hline \(f_{0}\) & 980 & \(0^{+}(0^{++})\) & 2000 & -0.085 & 0 \\ & & & & \(GeV^{-1}\) & 0.00023 \\ \hline \(a_{0}\) & 980 & \(1^{-}(0^{++})\) & 2000 & -0.086 & 0 \\ & & & & \(GeV^{-1}\) & 0.00023 \\ \hline \(f_{0}\) & 1370 & \(0^{+}(0^{++})\) & 2000 & -0.036 & 0 \\ & & & & \(GeV^{-1}\) & 0.00002 \\ \hline \(\pi^{0}\) & 134.9768 & \(1^{-}(0^{-+})\) & 770 & 0.025 & 0 \\ & & & & \(GeV^{-1}\) & -0.00650 \\ \hline \(\eta\) & 547.862 & \(0^{+}(0^{-+})\) & 774 & 0.024 & 0 \\ & & & & \(GeV^{-1}\) & -0.00170 \\ \hline \(\eta^{\prime}\) & 957.78 & \(0^{+}(0^{-+})\) & 859 & 0.031 & 0 \\ & & & & \(GeV^{-1}\) & -0.00165 \\ \hline \(f_{2}\) & 1275.4 & \(0^{+}(2^{++})\) & 2000 & 0.498 & 0 \\ & & & & & 0.00006 \\ \hline Total contribution & \multicolumn{4}{c|}{0.012 Hz} \\ \hline \end{tabular} \end{table} Table 1: Hadronic light-by-light contribution to muonium HFS. The top line in each cell corresponds to a vertical exchange, and the bottom line corresponds to a horizontal exchange. contributions to the nearest five digits after the decimal point, bearing in mind that the smallest contributions are of this order. Let us further consider horizontal exchanges with axial vector mesons shown in Fig.1(a,b). In this case, the use of projection operators (2) also makes it possible to reduce the product of various factors in the numerator to a common trace, which can be calculated for the sum of the amplitudes in Fig.1(a,b) as \[N_{AV}^{(a+b)}=(k_{1}^{2}k_{2}^{4}+k_{2}^{2}k_{1}^{4})(2\cos\Omega+\cos\psi_{1} \cos\psi_{2})-k_{1}^{3}k_{2}^{3}(1+3\cos^{2}\Omega+\cos^{2}\psi_{1}+\cos^{2} \psi_{2}), \tag{13}\] \[\cos\Omega=\cos\psi_{1}\cos\psi_{2}+\sin\psi_{1}\sin\psi_{2}\cos\theta. \tag{14}\] To immediately take the sum of the amplitudes in Fig.1(a,b), we multiply the direct amplitude by the factor \((k_{2}^{2}+2k_{2}^{0}m_{2})\), and the cross amplitude by the factor \((k_{2}^{2}-2k_{2}^{0}m_{2})\). In addition, we have passed to the Euclidean space of variables \(k_{1}\) and \(k_{2}\). After all transformations, the contribution of horizontal exchanges to the HFS of the spectrum will be determined by the following integral expression: \[\Delta E_{AV,(a+b)}^{hfs}=\frac{16\alpha^{2}(Z\alpha)^{5}\mu^{3}\Lambda^{2}}{ 3\pi}\int_{0}^{\infty}dk_{1}\int\frac{d\Omega_{1}}{\pi^{2}}\int_{0}^{\infty}dk _{2}\int\frac{d\Omega_{2}}{\pi^{2}}\frac{A(M_{A}^{2},k_{1}^{2},k_{2}^{2})}{(k_ {1}^{2}+a_{e}^{2}\cos^{2}\psi_{1})}\times \tag{15}\] \[\frac{A(M_{A}^{2},k_{1}^{2},k_{2}^{2})}{(k_{2}^{2}+a_{\mu}^{2}\cos^{2}\psi_{2} )}\frac{N_{AV}^{(a+b)}}{(k_{1}^{2}+k_{2}^{2}+2k_{1}k_{2}\cos\Omega+\frac{M_{A} ^{2}}{\Lambda^{2}})},\] where \(d\Omega_{1}=2\pi\sin^{2}\psi_{1}\sin\theta d\theta d\psi_{1}\), \(d\Omega_{2}=4\pi\sin^{2}\psi_{2}d\psi_{2}\). 
Further, the calculation of these integrals is carried out numerically, and the results are presented in Table 1. ## III Contribution of scalar mesons Recent results on the properties of light scalar mesons [39] show that they are being intensively studied, including decays into two photons. But the accuracy of measuring the decay width \(\Gamma_{S\gamma\gamma}\) is currently not high. Let us consider the contribution of scalar mesons to the interaction amplitudes and HFS, using the methods formulated in the previous section for constructing hadronic light-by-light scattering amplitudes. The general parametrization of scalar meson \(\rightarrow\gamma^{*}+\gamma^{*}\) vertex function takes the form [40; 41; 42; 43]: \[T_{S}^{\mu\nu}(t,k_{1},k_{2})=4\pi\alpha\bigg{\{}A(t^{2},k_{1}^{2},k_{2}^{2}) (g^{\mu\nu}(k_{1}\cdot k_{2})-k_{1}^{\nu}k_{2}^{\mu})+ \tag{16}\] \[B(t^{2},k_{1}^{2},k_{2}^{2})(k_{2}^{\mu}k_{1}^{2}-k_{1}^{\mu}(k_{1}\cdot k_{2} ))(k_{1}^{\nu}k_{2}^{2}-k_{2}^{\nu}(k_{1}\cdot k_{2}))\bigg{\}},\] where \(A(t^{2},k_{1}^{2},k_{2}^{2})\), \(B(t^{2},k_{1}^{2},k_{2}^{2})\) are two scalar functions on three variables, \(k_{1,2}\) are four momenta of virtual photons, t is the four-momentum of scalar meson. The first term in (16) represents transverse photons interaction, and the second term represents longitudinal photons interaction. In the leading order, the contribution of the structure function \(A(t^{2},k_{1}^{2},k_{2}^{2})\) is decisive. \(t\) is the four momentum of scalar meson which is equal to \((k_{1}+k_{2})\) for the horizontal exchanges. The numerator of the sum of the horizontal exchange amplitudes is equal to \[N_{S}^{(a+b)}=k_{1}^{2}k_{2}^{2}\cos\Omega(\cos\Omega\cos\psi_{1}\cos\psi_{2}- 1-\cos^{2}\psi_{1}-\cos^{2}\psi_{2}-\cos^{2}\Omega). \tag{17}\] The total contribution of scalar mesons to the hyperfine structure is similar to expression (15) and in euclidean space has the following integral form: \[\Delta E^{hfs}_{S,(a+b)}=\frac{16\alpha^{2}(Z\alpha)^{5}\mu^{3}}{3 \pi}\int_{0}^{\infty}dk_{1}\int\frac{d\Omega_{1}}{\pi^{2}}\int_{0}^{\infty}dk_{2 }\int\frac{d\Omega_{2}}{\pi^{2}}\frac{A(M_{S}^{2},k_{1}^{2},k_{2}^{2})}{(k_{1} ^{2}+a_{e}^{2}\cos^{2}\psi_{1})}\times \tag{18}\] \[\frac{A(M_{S}^{2},k_{1}^{2},k_{2}^{2})}{(k_{2}^{2}+a_{\mu}^{2} \cos^{2}\psi_{2})}\frac{N_{S}^{(a+b)}}{(k_{1}^{2}+k_{2}^{2}+2k_{1}k_{2}\cos \Omega+\frac{M_{S}^{2}}{\Lambda^{2}})},\] where the parameterization of function \(A(M_{S}^{2},k_{1}^{2},k_{2}^{2})\) for scalar meson chosen in the same form as for axial-vector mesons (monopole form for variables \(k_{1}^{2}\) and \(k_{2}^{2}\)): \[A(M_{s}^{2},k_{1}^{2},k_{2}^{2})=A_{S}\frac{\Lambda^{4}}{(k_{1}^{2}-\Lambda^{ 2})(k_{2}^{2}-\Lambda^{2})}. \tag{19}\] where the \(S\gamma\gamma\) coupling \(A_{S}\) is related with the \(S\rightarrow\gamma\gamma\) partial width [43; 18; 44]: \[A_{S}=\sqrt{\frac{4\Gamma_{S\gamma\gamma}}{\pi\alpha^{2}M_{S}^{3}}}. \tag{20}\] The vertical exchange amplitudes for scalar mesons are also constructed. The structure of the interaction vertices is such that the vertical exchanges are suppressed in comparison with the horizontal ones by the degree of momentum \(|\mathbf{t}|\sim\mu\alpha\) and therefore give a contribution of a higher order in \(\alpha\), which we omit below. ## IV Contribution of pseudoscalar mesons The transition vertex of two virtual photons into pseudoscalar meson is determined only by one structure function. 
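Before detailing the pseudoscalar vertex, a minimal sketch of the scalar coupling relation (20) is given below. The two-photon width used in it is an assumed, illustrative placeholder (the paper takes its widths from the data compiled in [39], which carry sizable uncertainties), so the output should not be read as a Table 1 entry.

```python
# Sketch (not from the paper): equation (20), A_S = sqrt(4*Gamma / (pi*alpha^2*M_S^3)).
import math

alpha = 1.0 / 137.035999

def A_S(width_gev, mass_gev):
    """S -> gamma gamma coupling of eq. (20), in GeV^-1."""
    return math.sqrt(4.0 * width_gev / (math.pi * alpha**2 * mass_gev**3))

# ASSUMED two-photon width for f_0(980): about 0.31 keV = 0.31e-6 GeV.
print(f"A_S(f_0(980)) ~ {A_S(0.31e-6, 0.980):.3f} GeV^-1")
# Of the same order of magnitude as the f_0(980) coupling quoted in Table 1.
```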
The effective interaction vertex of the \(\pi^{0}\) meson (or other pseudoscalar mesons \(\eta\), \(\eta^{\prime}\)) and virtual photons can be expressed in terms of the transition form factor \(F_{\pi^{0}\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2})\) in the form: \[V^{\mu\nu}(k_{1},k_{2})=i\varepsilon^{\mu\nu\alpha\beta}k_{1\alpha}k_{2\beta} A(t^{2},0,0)F_{\pi^{0}\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2}),\quad A(M_{P}^{ 2},0,0)=\frac{\alpha}{\pi F_{P}}, \tag{21}\] where the pseudoscalar meson decay constants are \(F_{\pi}=0.0924\) GeV, \(F_{\eta}=0.0975\) GeV, \(F_{\eta^{\prime}}=0.0744\) GeV. The pseudoscalar decay constants are related to the two photon partial width of the resonance by the equation: \[F_{P}^{2}=\frac{\alpha^{2}}{64\pi^{3}}\frac{M_{P}^{3}}{\Gamma(P\to \gamma\gamma)}. \tag{22}\] The transition form factor \(F_{\pi^{0}\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2})\) is normalized by the condition: \(F_{\pi^{0}\gamma^{*}\gamma^{*}}(0,0)=1\) and parametrization (19) is used for it. The form factors of the transition of pseudoscalar mesons into two photons have been studied experimentally by various collaborations [46; 47; 48]. Fitting the experimental data using function (9) gave the following values of the cutoff parameter: \(\Lambda_{\pi}=0.770\) GeV, \(\Lambda_{\eta}=0.774\) GeV, \(\Lambda_{\eta^{\prime}}=0.859\) GeV. From the theoretical viewpoint \(F_{\eta}\), \(F_{\eta^{\prime}}\) entering in (22) should be considered as effective decay constants due to \(\eta-\eta^{\prime}\) mixing [49]. The general formula that determines the contribution to the ground state HFS from the horizontal exchange amplitudes can be represented in integral form in Euclidean space: \[\Delta E^{hfs}_{PS}=-\frac{\alpha^{2}(Z\alpha)^{5}\mu^{3}}{3\pi F_{\pi}^{2}}\int \frac{d^{4}k_{1}}{k_{1}\pi^{4}}\int\frac{d^{4}k_{2}}{k_{2}\pi^{4}}[F_{\pi^{0} \gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2})]^{2}\times \tag{23}\] \[\frac{N^{(a+b)}_{PS}}{(k_{1}^{2}+a_{e}^{2}\cos\psi_{1}^{2})(k_{2}^{2}+a_{\mu}^ {2}\cos\psi_{2}^{2})((k_{1}+k_{2})^{2}+\frac{M_{PS}^{2}}{\Lambda^{2}})},\] where the function in the numerator \[N^{(a+b)}_{PS}=\left(\cos\Omega+\cos^{2}\Omega-\cos^{3}\Omega+\cos\psi_{1}\cos \psi_{2}-\cos\Omega\cos^{2}\psi_{1}-\cos\Omega\cos^{2}\psi_{2}\right) \tag{24}\] is obtained by calculating the trace, summing over the Lorentz indices in an expression like \[\varepsilon^{\mu\nu\alpha\beta}k_{1\alpha}k_{2\beta}\varepsilon^{\lambda \sigma\rho\omega}k_{1\rho}k_{2\omega}Tr[\gamma^{\lambda}(p_{1}-k_{1}+m_{1}) \gamma^{\mu}\hat{\Pi}\gamma^{\nu}(-p_{2}+k_{2}+m_{2})\gamma^{\sigma}\hat{\Pi}^ {+}]. \tag{25}\] The index \((a+b)\) denotes the contribution of the diagrams \((a)\) and \((b)\) in Fig. 1. Subsequently, integrals in (23) are calculated numerically, as in the case of scalar mesons with horizontal exchanges. Turning to the vertical exchange amplitudes, it should be noted that they contain additional powers of the momentum t. So, for example, the numerator of the amplitude in Fig. 1(c) is equal to \[N^{c}_{PS}=t^{2}k^{2}r^{2}-k^{2}(rt)^{2}-r^{2}(kt)^{2}+(kr)(kt)(rt)-k_{0}r_{0}( kt)(rt). \tag{26}\] As a result, it turns out that vertical interaction contribution to the hyperfine splitting is of order \(\alpha^{2}(Z\alpha)^{7}\). Therefore, this contribution can be neglected. ## V Contribution of tensor mesons The lowest tensor resonance is the spin 2 \(f_{2}(1270)\) dominating in \(\gamma\gamma\rightarrow\pi^{+}\pi^{-},\pi^{0}\pi^{0}\) production. 
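Returning briefly to the pseudoscalar couplings, a short numerical check of equation (22) is given below. The two-photon width of the neutral pion used here (about 7.8 eV) is an assumed, PDG-like input rather than a number quoted in the paper; with it, the formula reproduces the stated value \(F_{\pi}=0.0924\) GeV to good accuracy.

```python
# Sketch (not from the paper): equation (22), F_P^2 = alpha^2/(64 pi^3) * M_P^3 / Gamma(P -> gg).
import math

alpha = 1.0 / 137.035999
M_pi = 0.1349768          # GeV
Gamma_pi_gg = 7.8e-9      # GeV (ASSUMED width of ~7.8 eV)

F_pi = math.sqrt(alpha**2 / (64 * math.pi**3) * M_pi**3 / Gamma_pi_gg)
print(f"F_pi = {F_pi:.4f} GeV")   # ~0.092 GeV, consistent with F_pi = 0.0924 GeV in the text
```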
The \(f_{2}\) parameters extracted are \(M_{f_{2}}=1275.4\) MeV, \(\Gamma_{f_{2}}=185.8\) MeV and \(\Gamma_{f_{2}\gamma\gamma}/\Gamma_{f_{2}}=(1.42\pm 0.24)\times 10^{-5}\). For tensor mesons consisting of light quarks, the experimental analysis of the decay angular distributions for the \(\gamma\gamma\) cross sections to \(\pi^{+}\pi^{-}\), \(\pi^{0}\pi^{0}\) and \(K^{+}K^{-}\) has shown that the \(J=2\) mesons are produced mainly in a state with helicity \(\Lambda=2\)[45]. We will further assume that the hadronic light-by-light scattering amplitude for tensor mesons is dominated by helicity \(\Lambda=2\) exchange. Then the amplitude of the process \(\gamma^{*}+\gamma^{*}\to T\) (see Fig. 1) can be parametrised as follows [40]: \[T^{T}_{\mu\nu\alpha\beta}(k_{1},k_{2})=4\pi\alpha\frac{k_{1}k_{2}}{M}{\cal M}_{\mu\nu\alpha\beta}(k_{1},k_{2}){\cal F}_{T\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2}), \tag{27}\] where \({\cal F}_{T\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2})\) is the transition form factor and \(k_{1}\), \(k_{2}\) are the four-momenta of the virtual photons, \[{\cal M}_{\mu\nu\alpha\beta}(k_{1},k_{2})=\bigg{\{}R_{\mu\alpha}(k_{1},k_{2})R_{\nu\beta}(k_{1},k_{2})+\frac{1}{8(k_{1}+k_{2})^{2}\left[(k_{1}k_{2})^{2}-k_{1}^{2}k_{2}^{2}\right]}R_{\mu\nu}(k_{1},k_{2})\times \tag{28}\] \[\left[(k_{1}+k_{2})^{2}(k_{1}-k_{2})_{\alpha}-(k_{1}^{2}-k_{2}^{2})(k_{1}+k_{2})_{\alpha}\right]\times\left[(k_{1}+k_{2})^{2}(k_{1}-k_{2})_{\beta}-(k_{1}^{2}-k_{2}^{2})(k_{1}+k_{2})_{\beta}\right]\bigg{\}},\] \[R_{\mu\nu}(k_{1},k_{2})=-g_{\mu\nu}+\frac{1}{X}\left[(k_{1}k_{2})(k_{1}^{\mu}k_{2}^{\nu}+k_{2}^{\mu}k_{1}^{\nu})-k_{1}^{2}k_{2}^{\mu}k_{2}^{\nu}-k_{2}^{2}k_{1}^{\mu}k_{1}^{\nu}\right],\ X=(k_{1}k_{2})^{2}-k_{1}^{2}k_{2}^{2}.\] Then the electron-muon direct interaction amplitude via horizontal tensor meson exchange can be presented as follows: \[i{\cal M}_{T}=\frac{\alpha^{2}(Z\alpha)^{2}}{16m_{1}^{2}m_{2}^{2}}\int\frac{d^{4}k_{1}}{\pi^{2}(k_{1}^{2})^{2}}\int\frac{d^{4}k_{2}}{\pi^{2}(k_{2}^{2})^{2}}\frac{(k_{1}k_{2})^{2}}{M_{T}^{2}}{\cal F}_{T\gamma^{*}\gamma^{*}}^{2}(k_{1}^{2},k_{2}^{2}){\cal M}_{\mu\nu\alpha\beta}(k_{1},k_{2}){\cal M}_{\sigma\lambda\rho\omega}(k_{1},k_{2})\times \tag{29}\] \[D_{T}^{\alpha\beta\rho\omega}(k_{1}+k_{2})Tr\Big{[}\hat{\Pi}(\hat{q}_{1}+m_{1})\gamma^{\sigma}S_{e}(p_{1}-k_{1})\gamma^{\mu}(\hat{p}_{1}+m_{1})\hat{\Pi}(\hat{p}_{2}-m_{2})\gamma^{\nu}S_{\mu}(-p_{2}+k_{2})\gamma^{\lambda}(\hat{q}_{2}-m_{2})\Big{]},\] where \(S_{e}(p_{1}-k_{1})\) and \(S_{\mu}(-p_{2}+k_{2})\) are the propagators of the electron and the muon. The massive spin-2 propagator has the form: \[D_{T}^{\mu\nu\alpha\beta}(k)=\frac{f^{\mu\nu\alpha\beta}}{k^{2}-M_{T}^{2}+i0}, \tag{30}\] \[f^{\mu\nu\alpha\beta}=\frac{1}{2}\left(g^{\mu\alpha}g^{\nu\beta}+g^{\mu\beta}g^{\nu\alpha}-g^{\mu\nu}g^{\alpha\beta}\right)+\frac{1}{2}\left(g^{\mu\alpha}\frac{k^{\nu}k^{\beta}}{M^{2}}+g^{\nu\beta}\frac{k^{\mu}k^{\alpha}}{M^{2}}+g^{\mu\beta}\frac{k^{\nu}k^{\alpha}}{M^{2}}+g^{\nu\alpha}\frac{k^{\mu}k^{\beta}}{M^{2}}\right)+ \tag{31}\] \[+\frac{2}{3}\left(\frac{1}{2}g^{\mu\nu}+\frac{k^{\mu}k^{\nu}}{M^{2}}\right)\left(\frac{1}{2}g^{\alpha\beta}+\frac{k^{\alpha}k^{\beta}}{M^{2}}\right).\] The crossed amplitude in Fig. 1(b) has a similar structure. 
After further simplification of the numerator of expression (29) in the FORM package, it takes the following form: \[N_{T}^{(a,b)}=k_{1}^{0}k_{2}^{0}-k_{1}k_{2}+\frac{1}{k_{1}^{2}k_{2}^{2}-(k_{1}k_{2})^{2}}[k_{1}^{2}(k_{2}^{0})^{2}(k_{1}k_{2})+k_{2}^{2}(k_{1}^{0})^{2}(k_{1}k_{2})-2k_{1}^{2}k_{2}^{2}k_{1}^{0}k_{2}^{0}]. \tag{32}\] For the form factor of the transition of a tensor meson into two virtual photons, we use the monopole parametrization with respect to each photon momentum squared: \[{\cal F}_{T\gamma^{*}\gamma^{*}}(k_{1}^{2},k_{2}^{2})=\frac{A_{T\gamma^{*}\gamma^{*}}(M_{T}^{2},0,0)}{(k_{1}^{2}-\Lambda_{T}^{2})(k_{2}^{2}-\Lambda_{T}^{2})}, \tag{33}\] and the value of \(A_{T\gamma^{*}\gamma^{*}}(M_{T}^{2},0,0)\) is determined using the width \(\Gamma_{T\ \gamma\gamma}\) of the decay of the tensor meson into two photons: \[A_{T\gamma^{*}\gamma^{*}}(M_{T}^{2},0,0)=\frac{2\sqrt{5\Gamma_{T\ \gamma\gamma}}}{\alpha\sqrt{\pi M_{T}}}. \tag{34}\] The numerical value of the decay width of the tensor meson \(f_{2}(1275)\) is taken from [39]. After passing to Euclidean space and a number of simplifications, we can represent the total contribution of Fig. 1(a+b) to the hyperfine structure of muonium in the integral form: \[\Delta E_{T}^{hfs}=\frac{128\pi\alpha^{2}(Z\alpha)^{5}\mu^{3}}{3M_{T}^{2}}\int_{0}^{\infty}k_{1}^{2}dk_{1}\int_{0}^{\pi}\frac{\sin^{2}\psi_{1}}{\pi^{3}}d\psi_{1}\int_{0}^{\infty}k_{2}^{2}dk_{2}\int_{0}^{\pi}\frac{\sin^{2}\psi_{2}}{\pi^{3}}d\psi_{2}\int_{0}^{\pi}\sin\theta d\theta \tag{35}\] \[\frac{A_{T\gamma^{*}\gamma^{*}}^{2}(M_{T}^{2},0,0)N_{T}^{(a,b)}}{(k_{1}^{2}+1)^{2}(k_{2}^{2}+1)^{2}(k_{1}^{2}+a_{e}^{2}\cos^{2}\psi_{1})(k_{2}^{2}+a_{\mu}^{2}\cos^{2}\psi_{2})[(k_{1}+k_{2})^{2}+\frac{M_{T}^{2}}{\Lambda^{2}}]}.\] The results of the numerical calculation of (35) are presented in Table 1 only for one tensor meson, since the contributions of the other tensor mesons are negligible due to their small widths \(\Gamma_{T\ \gamma\gamma}\).
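For the single tensor entry in Table 1, equation (34) can be checked directly from the \(f_{2}(1275)\) numbers quoted above; the short sketch below does this.

```python
# Sketch (not from the paper's code): equation (34) for f_2(1275), using the
# quoted total width 185.8 MeV and branching ratio 1.42e-5 into two photons.
import math

alpha = 1.0 / 137.035999
M_T = 1.2754                         # GeV
Gamma_total = 0.1858                 # GeV
Gamma_gg = 1.42e-5 * Gamma_total     # GeV, i.e. about 2.6 keV

A_T = 2.0 * math.sqrt(5.0 * Gamma_gg) / (alpha * math.sqrt(math.pi * M_T))
print(f"A_T(M_T^2,0,0) = {A_T:.3f}")   # ~0.50, consistent with the 0.498 entry in Table 1
```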
For all mesons, the parameter \(A(M^{2},0,0)\), which is also presented in Table 1, plays an important role in the numerical evaluation of the contribution. The numerical value of this parameter is related to the width of the meson decay into two photons, which is taken from various experiments. An analysis of the available experimental data on the decay widths into two photons shows that the accuracy of their measurement is not high [39]. Therefore, the results presented in Table 1 should rather be regarded as estimates of contributions of this type. For mesons whose \(\Gamma_{\gamma\gamma}\) value has not yet been fixed [39], average values were taken from the available data. For the pseudovector and pseudoscalar mesons, which give the main contributions in Table 1, the value of \(A(M^{2},0,0)\) is fixed rather reliably, so that the error of their contributions does not exceed 30 percent. Nevertheless, there is a significant scatter in the experimental data for the two-photon width of the \(\sigma\) meson, \(\Gamma_{\sigma\gamma\gamma}\); in our calculations we used the value 4.5 keV. Since the contribution of this meson turns out to be the dominant one, we estimate the total error of the calculation in Table 1 at 50 percent. It should be noted that the contributions of the pseudoscalar mesons obtained here refine our earlier results owing to a more accurate numerical integration. The calculation formula (23) is rearranged, in comparison with [14], so that the contributions of both the direct and the crossed horizontal amplitudes are taken into account at once. Numerically, the contributions of the \(\pi\), \(\eta\), \(\eta^{\prime}\) mesons are among the most significant. A close numerical value is given by the horizontal-exchange amplitude with the \(\sigma\) meson. As for the axial vector mesons, they contribute through both types of exchange (horizontal and vertical). The difference between our results for the vertical exchanges of pseudovector mesons and those of [15] is, in our opinion, that we take into account an additional reducing factor (8), whose square leads to a decrease of our contribution, compared to [15], by an order of magnitude. The total contribution of all mesons to the hyperfine splitting turns out to be positive: although the contributions of the axial vector and pseudoscalar mesons are negative, the positive contribution of the \(\sigma\) meson exceeds them in magnitude. The resulting value of 0.012 Hz can be regarded as an estimate of this small hadronic effect. ###### Acknowledgements. The authors are grateful to A.E. Radzhabov and A.S. Zhevlakov for useful discussions. This work is supported by the Russian Science Foundation (grant No. RSF 23-22-00143).
2303.17730
On Pentagon Equation, Tetrahedron Equation, Evolution and Integrals of Motion
There is a sub-class of the solutions to Quantum Tetrahedron Equation related to the algebraical Pentagon Equation. The Quantum Tetrahedron Equation defines an evolution operator in wholly discrete three dimensional space-time. In this paper we establish the Liouville integrability of one particular quantum evolution model/classical integrable model on cubic lattice. The key feature of the model is that it has two independent quantum/classical spectral curves. In particular, on the classical level its Hamiltonian equations of motion decouple into two independent Hirota equations.
Sergey Sergeev
2023-03-30T22:10:32Z
http://arxiv.org/abs/2303.17730v1
# On Pentagon Equation, Tetrahedron Equation, Evolution and Integrals of Motion. ###### Abstract There is a sub-class of the solutions to Quantum Tetrahedron Equation related to the algebraical Pentagon Equation. The Quantum Tetrahedron Equation defines an evolution operator in wholly discrete three dimensional space-time. In this paper we establish the Liouville integrability of one particular quantum evolution model/classical integrable model on cubic lattice. The key feature of the model is that it has two independent quantum/classical spectral curves. In particular, on the classical level its Hamiltonian equations of motion decouple into two independent Hirota equations. ## 1 Introduction We discuss in this paper a specific three-dimensional exactly integrable model related to the algebraic Pentagon equation. In particular, we consider the solution of the Tetrahedron equation \[R_{123}R_{145}R_{246}R_{356}\ =\ R_{356}R_{246}R_{145}R_{123} \tag{1.1}\] of the form \[R_{123}\ =\ S_{13}P_{23}S_{13}^{-1}\, \tag{1.2}\] where \(P_{23}\) is the permutation operator, while \(S_{13}\) is the subject of the algebraic Pentagon equation \[S_{12}S_{23}\ =\ S_{23}S_{13}S_{12}\, \tag{1.3}\] and an auxiliary 10-term relation (see [1] for details), \[S_{12}S_{13}^{-1}S_{14}S_{24}^{-1}S_{34}\;=\;S_{24}^{-1}S_{34}S_{14}^{-1}S_{12}S_{13}^{-1}\;. \tag{1.4}\] The \(R\)-matrix of the Tetrahedron Equation can be used for the construction of an evolution operator; its kernel is given by \[U^{\{\alpha^{\prime},\beta^{\prime},\gamma^{\prime}\}}_{\{\,\alpha,\,\beta,\,\gamma\}}\;=\;\prod_{i,j}R^{\alpha^{\prime}_{i+1,j},\;\;\beta^{\prime}_{i,j},\;\;\gamma^{\prime}_{i,j+1}}_{\;\alpha_{i,j},\;\;\beta_{i,j},\;\;\;\gamma_{i,j}}\;. \tag{1.5}\] The periodical boundary conditions imply \(i,j\in\mathbb{Z}_{N}\). The evolution operator commutes with a layer-to-layer transfer matrix constructed with the help of \(R\)-matrices; usually this justifies the integrability in the case when the \(R\)-matrix is endowed with a set of spectral parameters. In this paper we consider the case of the constant Pentagon and Tetrahedron Equations related to the Weyl algebra of observables and the quantum dilogarithm. We establish the complete integrability of the model by producing explicitly the complete set of the integrals of motion. The remarkable feature of the model is that the complete set is produced by two independent determinants (generating functions) rather than by one. On the classical level this means that the model has two independent spectral curves. ## 2 Linear problems and Tetrahedral map We commence with the formulation of the auxiliary linear problems intertwined by the \(R\)-matrix (1.2), which allow us to derive the quantum spectral curves for the quantum evolution operator (1.5). A remarkable feature of the \(R\)-matrix (1.2) is that there are two completely different approaches to its linear problems. One is based on the linear problem providing the algebraic Pentagon equation (1.3). This approach is rather geometrical, but it does not provide a way to produce a set of integrals of motion. We will give details in the appendix. The auxiliary linear problem used in this section can be formulated as follows. The picture below gives the combinatorics of a Tetrahedral map: (2.1) In this picture \(a,b,\cdots,h\) denote the cells of two two-dimensional kagome lattices1. The vertices stand for local Weyl algebras. 
The left and right graphs of (2.1) present two systems of linear equations: Footnote 1: Kagome lattice is a lattice formed by three sets of parallel lines. Kagome lattice also can be seen as a dissection of a cubic lattice by an auxiliary plane in general position. R. J. Baxter suggested the term “kagome lattice” inspired by the Japanese handcraft \[\left\{\begin{array}{l}\ell_{1}\ \stackrel{{ def}}{{=}}\ \phi_{f}-\mathbf{v}_{1}\phi_{a}+\mathbf{u}_{1}\phi_{b}\ =\ 0\\ \\ \ell_{2}\ \stackrel{{ def}}{{=}}\ \phi_{a}-\mathbf{v}_{2}\phi_{e}+ \mathbf{u}_{2}\phi_{g}\ =\ 0\\ \\ \ell_{3}\ \stackrel{{ def}}{{=}}\ \phi_{f}-\mathbf{v}_{3}\phi_{d}+ \mathbf{u}_{3}\phi_{a}\ =\ 0\end{array}\right.\quad\rightarrow\quad\left\{ \begin{array}{l}\ell^{\prime}_{1}\ \stackrel{{ def}}{{=}}\ \phi_{d}-\mathbf{v}^{\prime}_{1}\phi_{e}+\mathbf{u}^{\prime}_{1}\phi_{h}\ =\ 0\\ \\ \ell^{\prime}_{2}\ \stackrel{{ def}}{{=}}\ \phi_{f}-\mathbf{v}^{\prime}_{2}\phi_{d}+\mathbf{u}^{\prime}_{2}\phi_{b}\ =\ 0\\ \\ \ell^{\prime}_{3}\ \stackrel{{ def}}{{=}}\ \phi_{b}-\mathbf{v}^{\prime}_{3}\phi_{h}+\mathbf{u}^{\prime}_{3}\phi_{g}\ =\ 0\end{array}\right. \tag{2.2}\] Here \(\mathbf{u},\mathbf{v}\) stand for a local Weyl algebra, \[\mathbf{\mathfrak{w}}_{j}\ =\ (\mathbf{u}_{j},\ \mathbf{v}_{j}\ :\ \mathbf{u}_{j}\mathbf{v}_{j}\ =\ q^{2}\mathbf{v}_{j}\mathbf{u}_{j}) \tag{2.3}\] (the term "local" means that the Weyl elements assigned to different vertices on the same graph commute). Auxiliary elements \(\phi_{a},\cdots,\phi_{h}\) belong to a formal right module of all Weyl algebras. The intertwining means the linear equivalence of the left and right systems of (2.2). In particular, \(\mathbf{v}_{1}^{-1}\ell_{1}+\mathbf{u}_{3}^{-1}\ell_{3}\) and \(\ell^{\prime}_{2}\) have the same auxiliary variables and therefore they must be proportional, \[(\mathbf{v}_{1}^{-1}+\mathbf{u}_{3}^{-1})^{-1}(\mathbf{v}_{1}^{-1}\ell_{1}+\mathbf{u}_{3}^{-1}\ell_{3})\ \equiv\ \ell^{\prime}_{2}\;. \tag{2.4}\] The equivalence of two linear systems (2.2) provides \[\begin{array}{l}\mathbf{u}^{\prime}_{1}\ =\ \mathbf{v}_{3}^{-1}F\;, \mathbf{u}^{\prime}_{2}\ =\ \mathbf{u}_{3}(\mathbf{v}_{1}+\mathbf{u}_{3})^{-1} \mathbf{u}_{1}\;,\ \ \mathbf{u}_{3}\ =\ \mathbf{u}_{1}^{-1}(\mathbf{v}_{1}+\mathbf{u}_{3})\mathbf{u}_{2}\;,\\ \\ \mathbf{v}^{\prime}_{1}\ =\ \mathbf{v}_{3}^{-1}(\mathbf{v}_{1}+ \mathbf{u}_{3})\mathbf{v}_{2}\;,\ \ \mathbf{v}^{\prime}_{2}\ =\ \mathbf{v}_{1}(\mathbf{v}_{1}+\mathbf{u}_{3})^{-1}\mathbf{v}_{3}\;,\ \ \ \mathbf{v}^{\prime}_{3}\ =\ \mathbf{u}_{1}^{-1}F\;,\end{array} \tag{2.5}\] where \(F\) remains undefined. There are several possible choices of \(F\) providing the integrability [2]. 
In this paper we fix \(F\) by an extra requirement: the same \(\mathbf{\mathfrak{w}}_{j},\mathbf{\mathfrak{w}}^{\prime}_{j}\) must satisfy the second independent linear problem: \[\left\{\begin{array}{l}\tilde{\ell}_{1}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{g}-\mathbf{v}_{1}\tilde{\phi}_{b}+\mathbf{u}_{1} \tilde{\phi}_{a}\ =\ 0\\ \\ \tilde{\ell}_{2}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{c}-\mathbf{v}_{2}\tilde{\phi}_{g}+\mathbf{u}_{2}\tilde{\phi}_{e}\ =\ 0\\ \\ \tilde{\ell}_{3}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{e}-\mathbf{v}_{3}\tilde{\phi}_{a}+\mathbf{u}_{3}\tilde{\phi}_{d}\ =\ 0\end{array}\right.\quad\rightarrow\quad\left\{\begin{array}{l}\tilde{\ell}^{ \prime}_{1}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{c}-\mathbf{v}^{\prime}_{1}\tilde{\phi}_{h}+\mathbf{u}^{\prime}_{1}\tilde{\phi}_{e}\ =\ 0\\ \\ \tilde{\ell}^{\prime}_{2}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{h}-\mathbf{v}^{\prime}_{2}\tilde{\phi}_{b}+\mathbf{u}^{\prime}_{2} \tilde{\phi}_{d}\ =\ 0\\ \\ \tilde{\ell}^{\prime}_{3}\ \stackrel{{ def}}{{=}}\ \tilde{\phi}_{c}-\mathbf{v}^{\prime}_{3}\tilde{\phi}_{g}+\mathbf{u}^{\prime}_{3} \tilde{\phi}_{h}\ =\ 0\end{array}\right. \tag{2.6}\] Linear problems (2.2) and (2.6) are compatible, so that the map \(\mathfrak{w}_{j}\to\mathfrak{w}^{\prime}_{j}\) becomes unique: \[\begin{split}\boldsymbol{u}^{\prime}_{1}\;=\;\boldsymbol{u}_{2}+ \boldsymbol{u}_{1}\boldsymbol{v}_{2}\boldsymbol{v}_{3}^{-1}\;,\;\;\;\;\; \boldsymbol{u}^{\prime}_{2}\;=\;\boldsymbol{u}_{3}(\boldsymbol{v}_{1}+ \boldsymbol{u}_{3})^{-1}\boldsymbol{u}_{1}\;,\;\;\;\boldsymbol{u}^{\prime}_{3} \;=\;\boldsymbol{u}_{1}^{-1}(\boldsymbol{v}_{1}+\boldsymbol{u}_{3})\boldsymbol{ u}_{2}\;,\\ \boldsymbol{v}^{\prime}_{1}\;=\;\boldsymbol{v}_{3}^{-1}(\boldsymbol {v}_{1}+\boldsymbol{u}_{3})\boldsymbol{v}_{2}\;,\;\;\boldsymbol{v}^{\prime}_{2 }\;=\;\boldsymbol{v}_{1}(\boldsymbol{v}_{1}+\boldsymbol{u}_{3})^{-1} \boldsymbol{v}_{3}\;,\;\;\;\boldsymbol{v}^{\prime}_{3}\;=\;\boldsymbol{v}_{2} +\boldsymbol{u}_{1}^{-1}\boldsymbol{u}_{2}\boldsymbol{v}_{3}\;.\end{split} \tag{2.7}\] This set of formulas defines the map \(\mathfrak{R}_{123}\) on a space of functions of \(\boldsymbol{u}_{j},\boldsymbol{v}_{j}\), \[(\mathfrak{R}_{123}\circ f)(\mathfrak{w}_{1},\mathfrak{w}_{2},\mathfrak{w}_{3 })\;=\;f(\mathfrak{w}^{\prime}_{1},\mathfrak{w}^{\prime}_{2},\mathfrak{w}^{ \prime}_{3})\;. \tag{2.8}\] **Statement 2.1**.: _Properties of \(\mathfrak{R}\) are:_ * _The map_ \(\mathfrak{R}\) _is the automorphism of_ \(\mathfrak{w}_{1}\otimes\mathfrak{w}_{2}\otimes\mathfrak{w}_{3}\) _(proof by direct verification)._ * _The map_ \(\mathfrak{R}_{123}\) _satisfies the functional Tetrahedron equation (proof by direct verification)._ * _In all cases of the self-dual representation of Weyl algebra (e.g. the cyclic_ \(\mathbb{Z}_{N}\) _representation, modular double,_ \(\mathbb{Z}\otimes\mathbb{T}\)_, etc.), when the quantum dilogarithm is well defined, the quantum operator_ \(R_{123}\)_, such that_ \[(\mathfrak{R}_{123}\circ f)\;=\;R_{123}\cdot f\cdot R_{123}^{-1}\;,\] (2.9) _is uniquely defined. Operator_ \(R_{123}\) _has the structure (_1.2_) and satisfy the quantum Tetrahedron equation_ _[_5_]__._ ## 3 Evolution map and the integrals of motion The evolution map appears as an extension of the "Yang-Baxter-type map" (2.1) to the whole kagome lattice: (3.1) For the whole kagome lattice, \(c_{i,j}\) label the hexagonal cells, while \(a_{i,j}\) and \(b_{i,j}\) label two types of triangular cells. 
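A quick way to see where the integrals of motion of the next section come from is to look at the map (2.7) in the classical limit, with \(\mathbf{u}_{j},\mathbf{v}_{j}\) replaced by commuting positive numbers. A single application of the map then conserves \(u_{2}u_{3}\), \(v_{1}v_{2}\) and \(u_{1}/v_{3}\) at a vertex, which is what makes the lattice products \(\prod_{j}u_{2,ij}u_{3,ij}\), \(\prod_{i}v_{1,ij}v_{2,ij}\) and \(\prod u_{1,ij}^{-1}v_{3,ij}\) appearing below telescope into invariants of the evolution. A minimal numerical sketch of this check (not from the paper):

```python
import random

random.seed(0)

def local_map(u1, v1, u2, v2, u3, v3):
    """Classical (commuting) limit of the map (2.7)."""
    s = v1 + u3
    return (u2 + u1 * v2 / v3,  v2 * s / v3,        # u1', v1'
            u3 * u1 / s,        v1 * v3 / s,        # u2', v2'
            u2 * s / u1,        v2 + u2 * v3 / u1)  # u3', v3'

u1, v1, u2, v2, u3, v3 = (random.uniform(0.5, 2.0) for _ in range(6))
U1, V1, U2, V2, U3, V3 = local_map(u1, v1, u2, v2, u3, v3)

# three quantities conserved by a single application of the map
assert abs(U2 * U3 - u2 * u3) < 1e-12
assert abs(V1 * V2 - v1 * v2) < 1e-12
assert abs(U1 / V3 - u1 / v3) < 1e-12
print("u2*u3, v1*v2 and u1/v3 are unchanged by the map")
```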
Notations for the cells here preserve the structure of the kagome lattice, however some cells must coincide according to (2.1): \[a^{\prime}_{i,j}\;=\;c_{i,j}\;,\quad c^{\prime}_{i,j}\;=\;b_{i-1,j-1}\;, \tag{3.2}\] while the cells \(a_{i,j}\) on the left disappear and cells \(b^{\prime}_{i,j}\) on the right appear. Explicit form of the evolution map follows from (2.7): \[\begin{array}{l}\mathbf{u}^{\prime}_{1,i+1,j}\;=\;\left(\mathbf{u}_{2}+\mathbf{u}_{1}\mathbf{v}_{2}\mathbf{v}_{3}^{-1}\right)_{i,j}\;,\;\;\;\;\;\;\mathbf{v}^{\prime }_{1,i+1,j}\;=\;\left(\mathbf{v}_{3}^{-1}(\mathbf{v}_{1}+ \mathbf{u}_{3})\mathbf{v}_{2}\right)_{i,j}\;,\\ \mathbf{u}^{\prime}_{2,i,j}\;=\;\left(\mathbf{u}_{3}(\mathbf{v}_{1}+\mathbf{u}_{3})^{-1}\mathbf{u}_{1}\right)_ {i,j}\;,\;\;\;\;\mathbf{v}^{\prime}_{2,i,j}\;=\;\left(\mathbf{ v}_{1}(\mathbf{v}_{1}+\mathbf{u}_{3})^{-1}\mathbf{v}_{3} \right)_{i,j}\;,\\ \mathbf{u}^{\prime}_{3,i,j+1}\;=\;\left(\mathbf{u}_{1}^{-1}( \mathbf{v}_{1}+\mathbf{u}_{3})\mathbf{u}_{2}\right) _{i,j}\;,\;\;\mathbf{v}^{\prime}_{3,i,j+1}\;=\;\left(\mathbf{ v}_{2}+\mathbf{u}_{1}^{-1}\mathbf{u}_{2}\mathbf{v}_{3} \right)_{i,j}\;.\end{array} \tag{3.3}\] The evolution map \(({\cal U}\circ f)(\mathbf{v}_{\alpha,i,j})=f(\mathbf{v}^{ \prime}_{\alpha,i,j})\), being realised for a particular representation of the Weyl algebra, \(({\cal U}\circ f)\;=\;U\cdot f\cdot U^{-1}\), is given by (1.5). Generating functions for the integrals of motion preserved by the evolution map are quantum spectral polynomials. To obtain the first spectral polynomial, consider the whole set of auxiliary linear relations for the whole kagome lattice. The type (2.2) gives \[\left\{\begin{array}{l}\ell_{1,i,j}\;=\;\phi_{c_{i,j}}-\mathbf{v}_{1,ij}\phi_{a_{i,j}}+\mathbf{u}_{1,ij}\phi_{b_{i-1,j}} \;,\\ \ell_{2,ij}\;=\;\phi_{a_{i,j}}-\mathbf{v}_{2,ij}\phi_{c_{i+1,j}}+ \mathbf{v}_{2,ij}\phi_{c_{i,j+1}}\;,\\ \ell_{3,ij}\;=\;\phi_{c_{i,j}}-\mathbf{v}_{3,ij}\phi_{b_{i,j-1}}+ \mathbf{u}_{3,ij}\phi_{a_{i,j}}\;.\end{array}\right. \tag{3.4}\] Consider the periodic boundary conditions and introduce the quasi-periods, \[\phi_{c_{i+N,j}}\;=\;\lambda\phi_{c_{i,j}}\;,\;\;\;\phi_{c_{i,j+N}}\;=\;\mu \phi_{c_{i,j}}\;,\;\;\;\;\mbox{etc.} \tag{3.5}\] The system of linear relations (3.4) can be written in the matrix form, \[\ell_{A}\;=\;\sum_{B}L^{(1)}_{A,B}\phi_{B} \tag{3.6}\] where \(A\) takes \(3N^{2}\) values \(A\in\{(1,ij),(2,ij),(3,ij)\}\), and \(B\) takes \(3N^{2}\) values \(B\in\{a_{ij},b_{ij},c_{ij}\}\). Define now \[J^{(1)}(\lambda,\mu)\;=\;\left(\prod_{i,j}\mathbf{u}_{1,ij}^{-1} \right)\cdot\det L^{(1)} \tag{3.7}\] Operator \(J^{(1)}(\lambda,\mu)\) is a polynomial of \(\lambda,\mu\) with the following Newton polygon (this is a particular case of \(N=3\)): (3.8) so that \[J^{(1)}(\lambda,\mu)\;=\;\sum_{a,b}\lambda^{a}\mu^{b}J^{(1)}_{a,b} \tag{3.9}\] The extra multiplier in (3.7) makes \[J^{(1)}_{-N,0}\;=\;1\;, \tag{3.10}\] this coefficient corresponds to the empty dot in (3.8). In the similar way one can consider the linear problem (2.6), \[\left\{\begin{array}{l}\tilde{\ell}_{1,i,j}\;=\;\tilde{\phi}_{c_{i,j+1}}- \mathbf{v}_{1,ij}\tilde{\phi}_{b_{i-1,j}}+\mathbf{u}_{1,ij} \tilde{\phi}_{a_{i,j}}\;,\\ \tilde{\ell}_{2,ij}\;=\;\tilde{\phi}_{b_{i,j}}-\mathbf{v}_{2,ij}\tilde {\phi}_{c_{i,j+1}}+\mathbf{v}_{2,ij}\tilde{\phi}_{c_{i+1,j}}\;,\\ \tilde{\ell}_{3,ij}\;=\;\tilde{\phi}_{c_{i+1,j}}-\mathbf{v}_{3,ij} \tilde{\phi}_{a_{i,j}}+\mathbf{u}_{3,ij}\tilde{\phi}_{b_{i,j-1}}\;, \end{array}\right. 
\tag{3.11}\] but now with the opposite quasimomenta, \[\tilde{\phi}_{c_{i+N,j}}\;=\;\lambda^{-1}\tilde{\phi}_{c_{i,j}}\;,\quad\tilde{\phi}_{c_{i,j+N}}\;=\;\mu^{-1}\tilde{\phi}_{c_{i,j}}\;,\quad\mbox{etc.} \tag{3.12}\] In the same way we can define its determinant, \[\tilde{\ell}_{A}\;=\;\sum_{B}L^{(2)}_{A,B}\tilde{\phi}_{B}\;,\quad J^{(2)}(\lambda,\mu)\;=\;\left(\prod_{i,j}\mathbf{u}_{1,ij}^{-1}\right)\cdot\det L^{(2)}\;. \tag{3.13}\] The polynomial \(J^{(2)}(\lambda,\mu)\) has the same Newton polygon (3.8). **Statement 3.1**.: _Polynomials \(J^{(k)}(\lambda,\mu)\), \(k=1,2\), have the following properties:_ * _They generate the integrals of motion,_ \({\cal U}\circ J^{(k)}(\lambda,\mu)\;=\;J^{(k)}(\lambda,\mu)\)_._ * _Polynomials_ \(J^{(1)}(\lambda,\mu)\) _and_ \(J^{(2)}(\lambda,\mu)\) _have identical perimeters._ * _Elements_ \(J^{(1,2)}_{a,b}\) _from the inside of the Newton polygon are algebraically independent._ * _Elements_ \(J^{(1,2)}_{a,b}\) _satisfy the algebra_ \[J^{(k)}_{a,b}J^{(k^{\prime})}_{a^{\prime},b^{\prime}}\;=\;q^{2(a^{\prime}+N)b-2(a+N)b^{\prime}}J^{(k^{\prime})}_{a^{\prime},b^{\prime}}J^{(k)}_{a,b}\;.\] (3.14) * _Thus, the set_ \(J^{(1,2)}_{a,b}\) _gives_ \(3N^{2}+1\) _invariants, among them_ \(3N^{2}\) _independent commuting ones._ _Thus, the Liouville integrability of the evolution is established._ **Comments on the Statement.** The general approach to the quantum curve method, including the proof of (3.14), can be found in [3, 4]. However, in this paper two different quantum curves appear, so that all items of the statement above, especially the statements about completeness, have been verified directly for \(N\leq 4\). The common perimeters of \(J^{(1,2)}(\lambda,\mu)\) are worth mentioning. The leftmost side of (3.8) is given by \[J^{(1,2)}(\lambda,\mu)\;=\;\lambda^{-N}\prod_{i}\left(1-\mu\prod_{j}\mathbf{u}_{2,ij}\mathbf{u}_{3,ij}\right)+\lambda^{-N+1}\cdots\, \tag{3.15}\] the bottom side is given by \[J^{(1,2)}(\lambda,\mu)\;=\;\mu^{-N}\left(\prod_{i,j}\mathbf{u}_{1,ij}^{-1}\mathbf{v}_{3,ij}\right)\,\prod_{j}\left(1-\lambda\prod_{i}\mathbf{v}_{1,ij}\mathbf{v}_{2,ij}\right)+\mu^{-N+1}\cdots\, \tag{3.16}\] and the bottom-left side is given by \[J^{(1,2)}(\lambda,\mu)\;=\;\lambda^{-N}\prod_{k}\left(1+\frac{\lambda}{\mu}\prod_{i+j=k}\mathbf{u}_{1,ij}^{-1}\mathbf{v}_{3,ij}\right)+\cdots \tag{3.17}\] One can extract very simple invariants, \[\mathbf{U}_{i}\;=\;\prod_{j}\mathbf{u}_{2,ij}\mathbf{u}_{3,ij}\,\quad\mathbf{V}_{j}\;=\;\prod_{i}\mathbf{v}_{1,ij}\mathbf{v}_{2,ij}\, \tag{3.18}\] so that \[I^{(k)}_{a,b}\;=\;\mathbf{U}_{0}^{-b}\mathbf{V}_{0}^{-a-N}J^{(k)}_{a,b}\, \tag{3.19}\] is the set of integrals of motion in involution. The non-commutative pair \(\mathbf{U},\mathbf{V}\) has the meaning of the center of mass of the system. ## 4 Classical model It is interesting to demonstrate the existence of two independent spectral curves on the classical level. To do this, we first rewrite the evolution (3.3) in the form of equations of motion on the cubic lattice. Let \(\mathbf{n}=(n_{1},n_{2},n_{3})\) stand for a vertex of the cubic lattice. Let \[\mathbf{e}_{1}=(1,0,0)\;,\quad\mathbf{e}_{2}=(0,1,0)\;,\quad\mathbf{e}_{3}=(0,0,1) \tag{4.1}\] be unit vectors. 
Then the local map (2.7) can be written as \[\begin{array}{ll}u_{1,\mathbf{n}+\mathbf{e}_{1}}\;=\;\dfrac{u_{2,\mathbf{n}}v_{3,\mathbf{n}}+u_{1,\mathbf{n}}v_{2,\mathbf{n}}}{v_{3,\mathbf{n}}}\;,&v_{1,\mathbf{n}+\mathbf{e}_{1}}\;=\;\dfrac{v_{2,\mathbf{n}}(v_{1,\mathbf{n}}+u_{3,\mathbf{n}})}{v_{3,\mathbf{n}}}\;,\\[2mm] u_{2,\mathbf{n}+\mathbf{e}_{2}}\;=\;\dfrac{u_{1,\mathbf{n}}u_{3,\mathbf{n}}}{v_{1,\mathbf{n}}+u_{3,\mathbf{n}}}\;,&v_{2,\mathbf{n}+\mathbf{e}_{2}}\;=\;\dfrac{v_{1,\mathbf{n}}v_{3,\mathbf{n}}}{v_{1,\mathbf{n}}+u_{3,\mathbf{n}}}\;,\\[2mm] u_{3,\mathbf{n}+\mathbf{e}_{3}}\;=\;\dfrac{u_{2,\mathbf{n}}(v_{1,\mathbf{n}}+u_{3,\mathbf{n}})}{u_{1,\mathbf{n}}}\;,&v_{3,\mathbf{n}+\mathbf{e}_{3}}\;=\;\dfrac{u_{2,\mathbf{n}}v_{3,\mathbf{n}}+u_{1,\mathbf{n}}v_{2,\mathbf{n}}}{u_{1,\mathbf{n}}}\;.\end{array} \tag{4.2}\] The evolution map (3.3) is the map \(t\to t+1\) of the space-like two-dimensional surface \[\mathbf{n}\;=\;(i,t-i-j,j)\;,\quad t\ \mbox{fixed}. \tag{4.3}\] The Legendre transform bringing the Hamiltonian equations of motion (4.2) to the corresponding Lagrangian equations is not quite evident. It is \[\begin{array}{ll}u_{1,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{3}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}+\mathbf{e}_{3}}}{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}}}\;,&v_{1,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{2}+\mathbf{e}_{3}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{3}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}}}{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}}}\;,\\[2mm] u_{2,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{3}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{3}}}{\tau^{(2)}_{\mathbf{n}}}\;,&v_{2,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{3}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{3}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{1}}}{\tau^{(2)}_{\mathbf{n}}}\;,\\[2mm] u_{3,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}}}{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}}}\;,&v_{3,\mathbf{n}}\;=\;\dfrac{\tau^{(1)}_{\mathbf{n}}}{\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}}}\,\dfrac{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}}}{\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}}}\;.\end{array} \tag{4.4}\] As a result, the Lagrangian equations of motion decouple into two independent Hirota equations: \[\begin{array}{l}\tau^{(1)}_{\mathbf{n}}\,\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}+\mathbf{e}_{3}}\;=\;\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}}\,\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{2}+\mathbf{e}_{3}}+\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{3}}\,\tau^{(1)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}}\;,\\[2mm] \tau^{(2)}_{\mathbf{n}}\,\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}+\mathbf{e}_{3}}\;=\;\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{1}}\,\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{2}+\mathbf{e}_{3}}+\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{3}}\,\tau^{(2)}_{\mathbf{n}+\mathbf{e}_{1}+\mathbf{e}_{2}}\;.\end{array} \tag{4.5}\] The algebraic curve \(J^{(k)}(\lambda,\mu)=0\) becomes the spectral curve for the \(\tau^{(k)}\)-equation, \(k=1,2\). **Acknowledgement.** I would like to thank Vladimir Bazhanov and Vladimir Mangazeev for valuable discussions. Also I acknowledge the support of the Australian Research Council grant DP190103144.
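As a closing cross-check of Section 4, the substitution (4.4) can be verified numerically: choose random positive \(\tau^{(1)},\tau^{(2)}\) on the corners of a unit cell, enforce both Hirota equations (4.5) by solving for the value at the far corner, and confirm that all six equations of motion (4.2) then hold. A small sketch of that verification (not from the paper; plain Python):

```python
import random

random.seed(7)
E1, E2, E3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def add(n, *shifts):
    return tuple(sum(c) for c in zip(n, *shifts))

# random positive tau^(1), tau^(2) on the corners of the unit cell ...
corners = [(i, j, l) for i in (0, 1) for j in (0, 1) for l in (0, 1)]
tau = {k: {c: random.uniform(0.5, 2.0) for c in corners} for k in (1, 2)}
# ... then impose both Hirota equations (4.5) by fixing tau at (1,1,1)
for k in (1, 2):
    t = tau[k]
    t[(1, 1, 1)] = (t[(1, 0, 0)] * t[(0, 1, 1)] + t[(0, 0, 1)] * t[(1, 1, 0)]) / t[(0, 0, 0)]

t1, t2 = tau[1], tau[2]
# the substitution (4.4)
u1 = lambda n: t1[n] / t1[add(n, E3)] * t2[add(n, E2, E3)] / t2[add(n, E2)]
v1 = lambda n: t1[add(n, E2, E3)] / t1[add(n, E3)] * t2[n] / t2[add(n, E2)]
u2 = lambda n: t1[add(n, E1)] / t1[add(n, E1, E3)] * t2[add(n, E3)] / t2[n]
v2 = lambda n: t1[add(n, E3)] / t1[add(n, E1, E3)] * t2[add(n, E1)] / t2[n]
u3 = lambda n: t1[add(n, E1, E2)] / t1[add(n, E1)] * t2[n] / t2[add(n, E2)]
v3 = lambda n: t1[n] / t1[add(n, E1)] * t2[add(n, E1, E2)] / t2[add(n, E2)]

n = (0, 0, 0)
s = v1(n) + u3(n)
# the six equations of motion (4.2), evaluated on the tau-substitution
checks = [
    u1(add(n, E1)) - (u2(n) * v3(n) + u1(n) * v2(n)) / v3(n),
    v1(add(n, E1)) - v2(n) * s / v3(n),
    u2(add(n, E2)) - u1(n) * u3(n) / s,
    v2(add(n, E2)) - v1(n) * v3(n) / s,
    u3(add(n, E3)) - u2(n) * s / u1(n),
    v3(add(n, E3)) - (u2(n) * v3(n) + u1(n) * v2(n)) / u1(n),
]
assert all(abs(c) < 1e-12 for c in checks)
print("(4.4) + Hirota (4.5)  =>  equations of motion (4.2) hold")
```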
2302.00165
A Brief Overview of Software-Defined Networking
The Internet is the driving force of the new digital world, which has created a revolution. With the concept of the Internet of Things (IoT), almost everything is being connected to the internet. However, with the traditional IP network system, it is computationally very complex and costly to manage and configure the network, where the data plane and the control plane are tightly coupled. In order to simplify the network management tasks, software-defined networking (SDN) has been proposed as a promising paradigm shift towards an externalized and logically centralized network control plane. SDN decouples the control plane and the data plane and provides programmability to configure the network. To address the overwhelming advancement of this new technology, a holistic overview of SDN is provided in this paper by describing different layers and their functionalities in SDN. The paper presents a simple but effective overview of SDN, which will pave the way for the readers to understand this new technology and contribute to this field.
Alexander Nunez, Joseph Ayoka, Md Zahidul Islam, Pablo Ruiz
2023-02-01T01:01:45Z
http://arxiv.org/abs/2302.00165v1
# Software Defined Networking: An Overview ###### Abstract The Internet is the driving force of the new digital world, which has created a revolution. With the concept of the Internet of Things (IoT), almost everything is being connected to the internet. However, with the traditional IP network system, it is computationally very complex and costly to manage and configure the network, where the data plane and the control plane are tightly coupled. In order to simplify the network management tasks, software-defined networking (SDN) has been proposed as a promising paradigm shift towards an externalized and logically centralized network control plane. SDN decouples the control plane and the data plane and provides programmability to configure the network. To address the overwhelming advancement of this new technology, a holistic overview of SDN is provided in this paper by describing different layers and their functionalities in SDN. The paper presents a simple but effective overview of SDN, which will pave the way for the readers to understand this new technology and contribute to this field. Software defined networking, Control plane, Data plane, Controller, and Internet. ## I Introduction In a traditional IP network, the network devices communicate with each other through various defined protocols to negotiate the exact network behavior based on the configuration of each device. In this architecture, there is no significant process of automation, and the network administrator is required to manually configure a large number of devices. Though some software tools facilitate the work, it still requires a significant amount of time and human resources. An example of a simple software tool is the simple network management protocol (SNMP), which is widely used for statistics and alarm collection but not for configuration management due to numerous limitations as listed in [1]. There is another protocol, NETCONF, to automate the configuration of network devices through a standard application programming interface (API). However, before using the NETCONF, all of the functionalities and logic must be implemented on the network devices [1]. Another problem with traditional networking is that the network devices are sold as closed components at a very high price by the manufacturers with their own operating systems, specific hardware, and features. As there are many equipment vendors with their own configuration languages to meet the network protocols, there can be several interoperability issues for the integration of devices from different vendors. This closed component also hinders the participation of diverse research communities to bring innovation to computer networks. Furthermore, the traditional network is not adaptable when it comes to adding new services to an existing network. A new device must be integrated and configured to perform the new service before it can be added [2]. In contrast to the traditional networking, software-defined networking (SDN) has introduced a paradigm shift in computer networking by decoupling the control plane from the data plane. The switches and the routers become pieces of hardware in the data plane to redirect the traffic from one interface to another, whereas the controller in the control plane communicates with the switches to control the switches based on user-defined protocols. As all the switches can be managed from the central controllers, it is no longer required to access individual switches and configure them. 
The network controller communicates with the switches using standard protocols like OpenFlow, so there is no need to use vendor-specific programs to control the forwarding behavior of the switches, which facilitates the interoperability between devices from different vendors. [2]. SDN also provides an easy interface for implementing new services in the network. Any service in the network can be implemented by writing the logic in a high-level programming language and having the controller translate this logic into low-level forwarding rules for the switches. SDN shifts the focus from hardware to software and provides the opportunity for the diverse research communities to develop their own ideas and implement them. The details of SDN structure and functionalities are described in this paper. The definition of SDN is described in Section II; the data plane and the control plane of SDN are discussed in Sections III and IV, respectively. Sections V and VI summarize the management plane and challenges of SDN, respectively, and finally, Section VII concludes the paper. ## II Software Defined Networking ### _What is SDN_ A Software Defined Network (SDN) is a networking approach that vastly differs from the internet's traditional network structure. Software-defined networking is a networking method that uses software-based controllers and application programming interfaces (APIs) to communicate with underlying hardware infrastructure (switches and routers) and to direct traffic on a network. With the use of a centralized software-based SDN controller, the network can be controlled and automated via software. Software-defined networking allows for vertical network integration, separation of the network's control logic from the underlying routers and switches, centralized logical network control, and the ability to program the network [3]. To further understand what SDN is, it is important to know its architecture. The SDN architecture has 3 planes: the management plane, the control plane (SDN controller operates here), and the data plane (switches, forwarding devices) [3]. SDN provides centralized network control by separating the control plane and the data plane from both being on routers. Instead of having the control plane governed by protocols on routers and switches, the control plane is now managed by the SDN controller. The controller is connected to all the underlying switches and routers found in the data plane. This allows the controller to control the network devices by dispatching different commands and instructions to them. Software-defined networking can be thought of as separating the "brains" from the power in a network. The controller would be the "brains" in this example because it has the management capabilities and instructions to control the power (routers and switches). ### _How does SDN Work_ Now that it is clear what software-defined networking is, it is important to learn an overview of how it works. As stated before, SDN detaches the control plane and the data plane from being on a single device like a router. The detached control plane is now known as the SDN controller. Having a main controller gives network administrators the ability to control the entire network via one controller instead of on a device by device basis. An SDN network allows for dynamic, efficient, and centralized network configuration. Dynamic network configuration is possible through the use of SDN applications. 
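To make the listed services concrete, the sketch below shows the kind of in-memory state a controller could keep to back them: a switch-to-switch topology, a device inventory, per-flow counters, and a host-location table. All class and field names here are illustrative assumptions, not the API of any real SDN controller.

```python
from dataclasses import dataclass, field

@dataclass
class ControllerState:
    """Toy backing store for the four services above (illustrative names only)."""
    topology: dict = field(default_factory=dict)    # switch id -> set of neighbouring switch ids
    inventory: dict = field(default_factory=dict)   # switch id -> basic device info (e.g. OpenFlow version)
    flow_stats: dict = field(default_factory=dict)  # (switch id, match) -> {"packets": int, "bytes": int}
    hosts: dict = field(default_factory=dict)       # (MAC, IP) -> (switch id, port) where the host was last seen

    def link_discovered(self, sw_a, sw_b):          # topology service
        self.topology.setdefault(sw_a, set()).add(sw_b)
        self.topology.setdefault(sw_b, set()).add(sw_a)

    def host_seen(self, mac, ip, switch, port):     # host tracking service
        self.hosts[(mac, ip)] = (switch, port)

state = ControllerState()
state.inventory["s1"] = {"openflow": "1.3", "ports": 48}
state.link_discovered("s1", "s2")
state.host_seen("00:00:00:00:00:01", "10.0.0.1", switch="s1", port=3)
print(state.topology, state.hosts)
```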
SDN applications are used to configure the network and are created with specified network requirements and desired network behavior. Applications are in the management plane and they are deployed by the SDN controller to the switches and routers in the data plane. The applications allow SDN controllers to autonomously control the network and its behavior. In order for this to be possible there needs to be a way for the SDN controller to communicate with every device on the network. The 3 SDN planes use dedicated interfaces to interact with each other. There are two interfaces named the northbound and southbound interfaces. The northbound interface connects the SDN controller/control plane to the management plane which is where applications are deployed and handled. While the southbound interface connects the controller to the underlying forwarding devices. The southbound interface allows for control of the devices, forwarding operations, and other services [3]. Communication through these interfaces is possible with the use of an Application Programming Interface (API). ### _Motivation behind SDN_ Why should software defined networks be implemented? An example can be if there is a demanding application that a traditional network cannot handle or deploy without manual and costly configurations. Also, if there is a need for centralized control of a network to quickly adjust to bottlenecks and application requirements. Centralized control over a network is crucial in the cloud because it's necessary to move data between distributed systems and SDN allows data to move easily between distributed locations. All these advantages are possible with software defined networking, that is otherwise difficult to configure onto a traditional network. Some drawbacks traditional networks have that SDN helps address include: moving from distributed network architecture to centralized network architecture, from a non-programmable network to a programmable and configurable network, and from static/manual configuration of devices to autonomous configuration. SDN also has special services that supply useful network data and statistics. SDN provides the following services [3]: * Topology service (Determines how forwarding devices are connected to each other and assembles a topology of the network) * Inventory service (used to record all SDN devices and basic information about them (Ex: The version of OpenFlow/APIs used and its capabilities). * Statistics service (flow tables and flow counters) * Host tracking (determines where IP and MAC addresses of hosts are located on the network) ## III Data Plane of SDN ### _Overview of the Data Plane_ The function of the data plane in a SDN controlled network does not change fundamentally compared to a traditional network with a distributed control plane [4]. The main purpose of the data plane is to forward arriving datagrams from their respective input port to the correct output port as determined by the forwarding or flow tables. The critical performance metric for the data plane is data throughput, or in other words how many datagrams or bytes can it forward per unit time. To maximize the performance of the data plane in this metric it is critical that it be implemented using custom hardware at each network router or switch. This specialized hardware commonly includes ternary content addressable memories (TCAMs) which are very fast content addressable tables which can retrieve the address of a field in one clock cycle. 
A good example is the Cisco Catalyst network switch product line which includes TCAMs with approximately one million forwarding table entries [5]. ### _Forwarding Logic_ In a traditional distributed network, the data plane relies on forwarding tables which are populated by the control plane to tell it what output port each datagram should be forwarded to. Forwarding tables are relatively simple containing only two columns, the first of which contains a destination address range and the second of which contains the corresponding link interface or output port. The destination address range fields in the forwarding table are matched to the address obtained from each datagram's destination address header field. In the case where there are one or more ranges which match the destination address the longest will be selected as it represents a match to a more specific, less broad subnet. Fig. 1: Data Plane Representation [4]. On the other hand, SDN data planes use an improved, more complex version of forwarding tables referred to as flow tables. These new tables allow the data plane to perform more complex decisions based on any of the data contained within a datagram. Flow tables contain one more column compared to forwarding tables bringing the total up to three columns. The first column fields contain the rules for matching incoming packets to specific actions. These rules can be applied to any part of the incoming datagram including its data payload unlike the rules in forwarding tables which can only be applied to the destination address field in each datagram's header. The fields in the second column of a flow table contain the actions to be performed on an incoming datagram that matched with the corresponding rule field. These actions could be virtually anything within the capabilities of the data plane hardware, from forwarding a datagram to an output port to modifying fields within the datagram and even outright dropping the packet. The fields in the third and final column of a flow table are used to store performance metrics on their corresponding rule and action fields. For example, they are commonly used to count the number of datagrams that triggered the corresponding rule, or the number of bytes contained within said packages. Together the three fields within flow tables offer vastly improved functionality over a simple forwarding table. However, this functionality comes at the cost of increased performance overheads and increased complexity within the network's core. These downsides would have made flow tables impractical in the past but the continuous increases in compute performance and efficiency have made it both viable and economical [4]. ### _OpenFlow Protocol_ Since in SDN the control plane and data plane are not physically on the same device a new software interface had to be developed as a means of communication. There are a variety of proprietary interfaces of this kind however the most common is an open-source protocol named OpenFlow which is maintained by the Open Network Foundation [7]. The OpenFlow protocol lays out the guidelines for communication between a "dumb" network device which only contains the data plane and the centralized control plane. It uses the TCP networking protocol to establish a connection with network devices and transmits updated flow table entries to the device from the centralized control plane. 
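The contrast between a destination-based forwarding table (longest-prefix match on the destination address only) and an SDN flow table (match over arbitrary fields, an action, and counters) can be sketched in a few lines of Python. This is a toy illustration of the data structures described above, not an OpenFlow implementation:

```python
import ipaddress

# Traditional forwarding table: destination prefix -> output port, longest prefix wins.
forwarding_table = [
    (ipaddress.ip_network("10.0.0.0/8"), 1),
    (ipaddress.ip_network("10.1.0.0/16"), 2),
]

def forward(dst_ip):
    matches = [(net, port) for net, port in forwarding_table
               if ipaddress.ip_address(dst_ip) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]   # most specific prefix

# SDN flow table entry: match rule over any header fields, an action, and counters.
flow_table = [
    {"match": {"tcp_dst": 80},        "action": "forward:3", "packets": 0, "bytes": 0},
    {"match": {"ip_src": "10.1.2.3"}, "action": "drop",      "packets": 0, "bytes": 0},
]

def handle(packet):
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            entry["packets"] += 1
            entry["bytes"] += packet["size"]
            return entry["action"]
    return "send_to_controller"   # table miss: ask the control plane for a rule

print(forward("10.1.2.3"))                                        # -> 2, the /16 wins
print(handle({"ip_src": "10.1.2.3", "tcp_dst": 22, "size": 64}))  # -> drop
```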
OpenFlow is strictly a protocol for communication between the control plane and the data plane; it is not used for communication between data plane devices [7]. ## IV Control Plane of SDN ### _Overview of the Control Plane_ The control plane is separated from the data plane and communicates with and manages the data plane using various protocols like OpenFlow. It is responsible for creating the routing tables of the switches and managing the traffic in the switches. The control plane consists of two components, which are the SDN controllers and a set of network-control applications. The controller is the core component in the SDN infrastructure and is responsible for managing and controlling the controlled switches. The basic structure of an SDN controller and its functionality is described below. _1. Controller Core:_ The controller core interacts with the controlled switches, maintains the global view of the underlying networks, and creates an abstract view for the application layer. Fig. 4: Overview of SDN controller [8]. Fig. 3: Global and Abstract view of Network [2]. Fig. 2: Flow Table Entry Representation [6]. For example, as shown in Fig. 3, several switches are necessary in order to communicate between Device A and Device B, which is marked as the global view. The controller obtains this global view of the network and creates the abstract view of the network. The abstract view of the network is used by the user to write a control program. There can be several modules for doing specific tasks in the controller, as shown in Fig. 4. The link discovery module regularly interacts with the switches to build the topology of the network, which is then maintained in the topology manager. Other modules, such as the decision-making module, use the network topology from the topology manager and find optimal paths between nodes in the network. The flow modules directly interact with the flow tables of the data plane to notify them of the updated tables based on the rules. In addition, the controller may have a storage and queue manager for storing various information and queue reports. _2. Interfaces:_ The core controller is surrounded by different APIs for interacting with the different layers. The upper-layer applications use the northbound API to interact with the core, while the core uses the southbound API to interact with the controlled switches. The east-westbound APIs can be used by the controllers to interact among themselves in the case of multiple distributed controllers. The most widely used SBI is the OpenFlow protocol; however, recent controllers support other protocols along with OpenFlow. In the case of the NBI, controllers support a number of protocols, but the most popular one is the REST API. ### _Different Types of SDN Controllers_ The most well-known SDN controllers are NOX, POX, Beacon, Floodlight, Ryu, OpenDayLight (ODL), and ONOS [10]. The different controllers have different and diverse features. In Fig. 5, qualitative features of the controllers such as first release date, running platform, supported APIs, programming language, network category, recent updates, etc., are shown. Among the controllers, the ODL and ONOS controllers are the most recent distributed open-sourced controllers. These controllers have their own unique architectures based on their goals; however, the overview of their architecture can be generalized as shown in Fig. 4. A QoS-based evaluation of ODL and ONOS is performed in [11]. 
For evaluating their performance, three different topologies, namely single, linear, and tree topologies, are constructed with 8 hosts, 3 OpenFlow switches, and 1 controller using Mininet [11]. The data transfer delay between two hosts is measured for transferring a large number of fixed-size packets with a certain inter-departure time. The time delay and packet drops experienced by the ODL and ONOS controllers are summarized in Table I. It is observed that the ONOS controller outperforms the ODL controller in all three topologies. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Topology & Controller & Max delay (ms) & Avg. delay (ms) & Packet dropped (\%) \\ \hline Single & ODL & 41.64 & 1.14 & 0.25 \\ \cline{2-4} & ONOS & 2.38 & 0.014 & 0 \\ \hline Linear & ODL & 33.23 & 0.87 & 0.09 \\ \cline{2-4} & ONOS & 4.03 & 0.02 & 0.02 \\ \hline Tree & ODL & 49.96 & 1.94 & 0.63 \\ \cline{2-4} & ONOS & 1.21 & 0.01 & 0 \\ \hline \end{tabular} \end{table} Table I: Comparison between ODL and ONOS controllers [11]. Figure 5: Features of different SDN controllers [9]. ### _Challenges in the Control Layer_ _1. Controller Placement Problem:_ Due to the decoupling of the data and control planes in SDN, the controlled switches are memoryless and query the controller for an appropriate flow table for every new flow they encounter. The controller processes the queries, sets up the required flow table, and then sends the flow tables to the switches. Therefore, in order to process the flow tables with minimum latency, the relative location between controllers and switches needs to be optimized. Randomly placed controllers increase the latency in the network, which in turn creates a wide range of network issues related to resiliency, energy efficiency, load balancing, quality-of-service (QoS), and so on. Due to the importance of the optimal controller placement (OCP) problem, it has been extensively studied in the literature. The OCP problem can be addressed in several categories, such as how many controllers to place in an SDN network and where to place them to achieve particular objectives. Some important objectives are network responsiveness, fault tolerance, resilience, QoS, load balancing, latency, and so on. A variety of methodologies based on graph theory, clustering algorithms, game theory, greedy algorithms, approximation algorithms, exhaustive search, etc., have been described to attain the objectives in OCP [10]. The OCP problem is still an open problem in SDN, and both industry and the research community are searching for an appropriate solution to meet the future requirements of SDN. _2. Centralized vs. Distributed Controller:_ The placement of SDN controllers can be categorized into two categories, namely centralized and distributed controllers [12]. With a centralized controller, only a single controller manages the whole network. This architecture works very well in an enterprise network in terms of performance and user experience. However, as user traffic increases in large networks such as data centers, various issues related to security, scalability, performance, and availability arise. To alleviate these issues of the centralized controller, distributed controllers are used, where multiple controllers manage and obtain the global view of the network. The distributed controllers are divided into flat and hierarchical architectures. In the flat architecture, the whole network is split into small networks and a single controller manages each small network. 
On the other hand, in hierarchical architecture, multiple local controllers are used to manage local network and they provide information to the upper layer of controllers. In [12], the authors compare the performances of the two categories by using load balancing application. The results show that distributed controller is better in terms of response time and number of transactions. It also demonstrates that the distributed controllers are better for scalability and reduces the bottleneck in the network. ## V Management Plane of SDN The Management plane is often referenced on its own and differs from the other planes in SDN. It is responsible for the Management and control of other planes. The management plane will be described with the help of Fig. 6, which shows how the management plane works and operates. The plane starts with the basic 5 management functional areas which are the Fault Services, Configuration Services, Accounting Services, Performance Services, and Security Services. They will be explained below. Fault Services: The aim of Fault Services is to detect, separate, and correct failures which may be present in logical or physical Software Defined Networking resources. Configuration Services: This modifies and updates the state of SDN resources. Accounting Services: Accounting services does the job of constantly tracking and allocating usage of Software Defined Network resources. Performance Services: It collects and reports information about the operation of SDN resources to the Network Admin. Security Services: Security services is present in order to control and monitor access to SDN resources. Programming Services: This service is present in order to assist in updating any programmable software of SDN resources. As seen in Fig. 6, there are various interfaces in the management plane, they include the User Interface, Repository Interface, Adapter Interface, and the set of Management Interfaces. They are explained below. User Interface: The user interface allows the network admins to interact with Management Services. Repository Interface: This interface connects the Manager with the Data Repository and instance data. Adapter Interface and Management Interface: These interfaces are responsible for the transportation of requested messages /details from the Manager or Network Admin to a particular Agent in a given plane. This is being channeled through the needed Adapter at a given time. The other Management interfaces are shown is Fig. 6 as, NetApp MI, NOS MI, NetSlicer MI, and NetDev MI. These Management Interfaces handle data formats and protocols implemented as (_e.g._ NCP, SNMP, SSH) etc. With these services at the hand of the Network Admin, SDN resources can be managed and controlled in order to achieve smooth performance. ## VI Challenges in SDN Due to the centralized nature of SDN, its implementation can have beneficial effects on network security [14] By continually collecting data and network statistics from all the network switches within a network a centralized controller has a comprehensive view of the network performance and state. This unique position could allow it to detect network threats before they have a chance to compromise the network's performance [15]. 
For example, if a network threat such as a Distributed Denial of Service (DDoS) attack is detected, the control plane could algorithmically find similarities between the malicious datagrams and then proceed to modify the flow tables of the affected routers, using the OpenFlow protocol, to identify and drop the targeted packets without affecting the rest of the network traffic [15]. Another novel SDN security strategy is Moving Target Defense (MTD). MTD algorithms aim to periodically change the location of key network features to make it more difficult for attackers to discern the composition of the network and the addresses of its critical components [16]. This strategy would be very hard and complex to implement in a traditional network but is relatively straightforward in an SDN due to the flexibility provided by having a central controller and having the rest of the network devices be interchangeable "dumb" switches. Fig. 6: Overview of Management Plane [13]. ## VII Conclusion In conclusion, implementing an SDN network mainly consists of separating the control plane from the data plane and then combining every router's control plane into a central entity that can define the overall network behavior as well as each individual switch's flow tables. Such a centralized control plane is defined primarily in software, and as such, it can interact directly with applications through its northbound interface, or API, which makes the network much more flexible and adaptable to each specific use case. Additionally, since the control plane is now separate from the data plane, it uses a southbound software interface, commonly OpenFlow, to communicate with and populate the flow tables of each individual network switch. The above-mentioned features enable SDNs to quickly adapt to changing network requirements and performance. Although SDN offers a multitude of improvements over traditional networking, it also comes with its own unique challenges. Due to the network's reliance on a centralized controller, it is at risk of collapsing if the controller is compromised. For this reason, backup, physically distributed controllers must be maintained to ensure network resiliency. Additionally, the controllers must be scalable to control and manage the ever-increasing number of network-enabled devices and their traffic. Finally, the new open interfaces of an SDN may introduce new forms of network attacks that need to be addressed in the SDN framework and communication protocols. Overall, SDNs are a recent innovation and, as such, constitute a relatively small portion of the networks currently in use. However, the rise of distributed applications with more decentralized and complex traffic patterns, such as cloud computing and cloud services, which require more flexible networks, almost guarantees that SDNs will become more widespread going forward.
2303.08610
Blind Estimation of Audio Processing Graph
Musicians and audio engineers sculpt and transform their sounds by connecting multiple processors, forming an audio processing graph. However, most deep-learning methods overlook this real-world practice and assume fixed graph settings. To bridge this gap, we develop a system that reconstructs the entire graph from a given reference audio. We first generate a realistic graph-reference pair dataset and train a simple blind estimation system composed of a convolutional reference encoder and a transformer-based graph decoder. We apply our model to singing voice effects and drum mixing estimation tasks. Evaluation results show that our method can reconstruct complex signal routings, including multi-band processing and sidechaining.
Sungho Lee, Jaehyun Park, Seungryeol Paik, Kyogu Lee
2023-03-15T13:31:56Z
http://arxiv.org/abs/2303.08610v2
# Blind Estimation of Audio Processing Graph ###### Abstract Musicians and audio engineers sculpt and transform their sounds by connecting multiple processors, forming an _audio processing graph_. However, most deep-learning methods overlook this real-world practice and assume fixed graph settings. To bridge this gap, we develop a system that reconstructs the entire graph from a given reference audio. We first generate a realistic graph-reference pair dataset and train a simple blind estimation system composed of a convolutional reference encoder and a transformer-based graph decoder. We apply our model to singing voice effects and drum mixing estimation tasks. Evaluation results show that our method can reconstruct complex signal routings, including multi-band processing and sidechaining. Sungho Lee\({}^{1}\), Jaehyun Park\({}^{1}\), Seungyeol Paik\({}^{1}\), and Kyogu Lee\({}^{1,2,3}\)+\({}^{1}\)Department of Intelligence and Information, Seoul National University \({}^{2}\)Interdisciplinary Program in Artificial Intelligence, Seoul National University \({}^{3}\)Artificial Intelligence Institute, Seoul National University {sh-lee,lotussoh,paik402,kglee}@snu.ac.kr Graph representation learning, audio processing graph, blind estimation, audio effect. Footnote †: Acknowledgement. This work was supported by Culture, Sports, and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022 (No.R2022020066).** ## 1 Introduction A dry audio signal is rarely delivered to the end listeners "as is"--it goes through various processing steps to achieve desired auditory effects. Recent deep learning approaches have improved, replaced, and automated some parts of such processing pipelines with neural networks. Some focused on estimating the parameters of known conventional DSP systems [1, 2, 3, 4], while others mimicked, augmented, or integrated the existing processors with neural networks [5, 6, 7]. These prior works consider one or a few types of processors with fixed signal routings. However, domain practitioners, e.g., musicians and audio engineers, typically connect multiple small audio processors in a flexible way to achieve the desired processing. If we consider these processors as nodes and the connections as edges, we can represent this procedure as an _audio processing graph_\(G\). See Figure 1 for an example. First, we feed an input dry signal \(x\), e.g., singing voice, into the [in] node. Then, the [crossover] splits the signal into two, one with low-frequency components and the other with high-frequency components (written as low and high, respectively). The two signals are independently processed with their respective [distortion] modules, summed together with [mix], and passed to the [out] node, resulting in an output signal \(y=G(x)\). This graph \(G\) performs so-called "multiband distortion," and we can extend its functionality with additional processors and connections. For example, we can add [stereo_lfo] to modulate the crossover frequency, making the distortion effect time-varying. This flexibility makes expressive processing possible while retaining full interpretability and controllability. In this paper, we conduct a preliminary study on integrating the audio processing graph structure with neural networks. Specifically, we tackle a _blind estimation_ problem; from the reference audio \(y\), we aim to reconstruct the original graph \(G\). Our motivation for choosing this task is threefold. 
First, automated reverse engineering itself is a well-established task and has practical values [4]. Second, it provides concrete objective metrics, allowing us to evaluate baseline systems reliably. Finally, it could serve as an appropriate first step towards applying and extending the graph-based methods to other applications, which include automatic processing [8, 9, 10] and style transfer [11]. The blind graph estimation task poses several challenges to overcome. First, there is no publicly available graph-audio paired dataset. Therefore, we first build a _synthetic dataset_ on our own. Second, audio processing graphs are heterogeneous; each node and edge have a type and parameters as attributes, making them nontrivial to decode. To mitigate this, we propose a two-stage decoding method consisting of (i) _prototype graph decoding_, which estimates graph connectivity and node/edge types, and (ii) _parameter estimation_, which decides the remaining parameters. We achieve this with a convolutional reference encoder [12] and a transformer-based graph decoder [13, 14]. Finally, we apply our method to singing voice effects and drum mixing estimation tasks. The evaluation results show that, while our model fails to perfectly reconstruct the original graph in most cases, it preserves essential local/global graph structures, and rendered audio from the estimated graph can be perceptually similar to the reference. We also show that the proposed graph decoding strategy is preferable to other possible alternatives. ## 2 Audio Processing Graph Our audio processing graph \(G\) is a heterogeneous directed acyclic graph (DAG) with the following specifications. **Processors/Nodes.** Each processor \(v_{i}\) can take audio/control signals as input and output audio/control signals. We normalize each processor's output audio signals so that the total energy remains unchanged. Each node has a categorical type \(t_{i}\) and continuous-valued parameters \(p_{i}\) as attributes. While we can include any processor (even neural networks) in our framework, we use \(33\) conventional ones listed in Table 1. Unless stated otherwise, we use the default implementations [19]. This makes our graph readily interpretable and controllable. Figure 1: An example audio processing graph \(G\). **Connections/Edges.** Each connection \(e_{ij}\) requires outlet \(m_{ij}\) and inlet \(n_{ij}\) (or input/output channel) attributes to eliminate any ambiguity. We define the edge's type \(t_{ij}\) as an outlet-inlet pair \((m_{ij},n_{ij})\). When multiple edges are connected to the inlet, we sum the incoming signals. Each edge has a gain parameter \(p_{ij}\). ### Synthetic Graph for Training **Prototype Graph.** The real-world audio processing graphs have frequently occurring _motifs_: combinations of processors (subgraphs) that achieve desired effects. For example, we can control [reverb] using [noisegate], resulting in the well-known "gated reverb." Inspired by this, we sample and combine various motifs to obtain a synthetic graph. We have \(10\) different motifs, from a simple parametric equalizer to more complex parallel [pitchshift] banks. The sampled motifs are serially stacked to generate a full graph except for the following cases: (i) [crossover] can be used for multi-band processing and (ii) each motif can receive auxiliary signals from the others for various modulations, e.g., sidechaining. For the drum mixing graphs, we generate a subgraph for each individual source track. 
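As a concrete illustration of this representation, the sketch below encodes the multi-band distortion graph of Figure 1 as a small DAG with typed nodes, typed (outlet, inlet) edges, and a topological rendering loop. The node names, parameter values, and placeholder processor functions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# nodes: id -> (type, parameters);  edges: (src, dst, outlet, inlet, gain)
nodes = {
    "in":   ("in",         {}),
    "xo":   ("crossover",  {"frequency": 800.0}),
    "d_lo": ("distortion", {"gain": 6.0}),
    "d_hi": ("distortion", {"gain": 2.0}),
    "mix":  ("mix",        {}),
    "out":  ("out",        {}),
}
edges = [
    ("in", "xo", "out", "in", 1.0),
    ("xo", "d_lo", "low", "in", 1.0),
    ("xo", "d_hi", "high", "in", 1.0),
    ("d_lo", "mix", "out", "in", 1.0),
    ("d_hi", "mix", "out", "in", 1.0),
    ("mix", "out", "out", "in", 1.0),
]

# placeholder processors: map a dict of inlet signals to a dict of outlet signals
def process(ntype, params, inlets, x=None):
    s = inlets.get("in", x)
    if ntype == "crossover":                       # toy low/high split (running mean), illustration only
        lo = np.cumsum(s) / (np.arange(len(s)) + 1)
        return {"low": lo, "high": s - lo}
    if ntype == "distortion":
        return {"out": np.tanh(params["gain"] * s)}
    return {"out": s}                              # in / mix / out pass the (already summed) input through

def render(x, order=("in", "xo", "d_lo", "d_hi", "mix", "out")):  # a topological order of the DAG
    outs, inbox = {}, {n: {} for n in nodes}
    for n in order:
        outs[n] = process(nodes[n][0], nodes[n][1], inbox[n], x)
        for src, dst, o, i, gain in edges:
            if src == n:   # fan the outlet signals out along edges, summing at shared inlets
                inbox[dst][i] = inbox[dst].get(i, 0) + gain * outs[n][o]
    return outs["out"]["out"]

y = render(np.random.default_rng(0).normal(size=44100).astype(np.float32))
print(y.shape)
```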
Then, we combine the subgraphs with another "mixing bus" graph. See Figures 1(b), 5, and 6 for examples of the synthesized graphs. In this stage, we only determine the categorical type attributes \(t_{i}\) and \(t_{ij}\) of the graph, and we call this a _prototype graph_ \(G_{0}\). **Adaptive Parameter Randomization.** Next, we decide the remaining parameters of each prototype graph. Specifically, we randomize them adaptively to the incoming signals, preventing the graph from having "ghost" nodes that do not contribute to the final output signal. For example, when a [highshelf] receives a [crash] signal, its center frequency should be constrained to the high-frequency region to change the frequency response of the input audio. To achieve this, we compute the cumulative energy distribution of the input signal across frequency, then sample the center frequency from where the cumulative distribution lies between \(0.2\) and \(0.8\). We follow similar procedures for the other low-order linear filters. For the dynamic range controllers, we use an input energy envelope to determine their thresholds. This way, we generated \(300\)k and \(450\)k graph-reference audio pairs for the singing and drum tasks, respectively. The reference audio is stereo, \(3.63\)s long, and has a \(44.1\)kHz sampling rate. **Graph Statistics.** Figure 1(a) reports the graph statistics of our synthetic graphs. In particular, we compare our datasets with the PCQM4Mv2 chemical dataset [20]. While the singing/drum graphs have a comparable size to PCQM4Mv2's, they tend to be more sparse (lower node degree and density). This implies that simple graph neural networks, which update each node by aggregating itself and its immediate neighborhood, might struggle to make distant nodes communicate and capture the global structures of the audio processing graphs. **LTI Reordering.** Different audio processing graphs can produce the same output, making the blind graph estimation a one-to-many problem. One reason is that serially connected single-input single-output linear time-invariant (LTI) systems, either single nodes or subgraphs, can be swapped without changing the entire system response. To resolve this ambiguity, we rearrange them according to their sizes and processor types (see Figure 1(b)). ## 3 Blind Estimation System Our blind estimation system first encodes the reference \(y\) into latent vectors \(z\), which should contain the necessary information to estimate the graph. Then, from the latent \(z\), we reconstruct the graph in two stages, which resemble the synthetic data generation procedure; we first decode the prototype \(\hat{G}_{0}\) autoregressively, then estimate the remaining parameters \(\hat{p}\) (see Figure 3). **Reference Encoder.** We first apply \(7\) two-dimensional convolutional layers to the reference Mel spectrogram. Then, we flatten and project the channel and frequency axes of the output feature map into \(512\)-dimensional vectors. Finally, we perform an attentive pooling across the time axis with learnable queries to obtain the latent vectors \(z\). **Prototype Decoder.** Most autoregressive models [21, 22, 23] generate graphs _node-by-node_; for each step, they observe a partially decoded subgraph, then estimate the next node and its edges. Instead, we present an alternative method called _token-by-token_ generation with the Tokenized Graph Transformer (TokenGT, see Figure 4) [14]. TokenGT uses a vanilla transformer [13] and treats both nodes and edges as input tokens.
It introduces token type embeddings that distinguish the nodes and edges, and node id embeddings that describe the connectivity (we additionally have node/edge type embeddings). This model architecture allows us to (i) alleviate the potential information bottleneck problem due to the sparse audio processing graph and (ii) frame the prototype decoding as a canonical autoregressive sequence generation task.

Table 1: Audio processors used in our framework. Format: _processor(s)_: [inlets, optional\*] \(\rightarrow\) [outlets]; [parameters] (\* denotes an optional inlet).
* _Low-order linear filters_ [15]: _second-order lowpass/bandpass/highpass_: [in, frequency\*] \(\rightarrow\) [out]; [frequency, q]. _Parametric equalizer filters, low/highshelf and bell (peaking filter)_: [in, frequency, gain] \(\rightarrow\) [out]; [frequency, q, gain]. _Crossover_: [in, frequency\*] \(\rightarrow\) [low, high]; [frequency]. _Phaser_: [in, mod] \(\rightarrow\) [out]; [frequency, feedback, mix].
* _High-order linear filters_ [16]: _chorus/flanger/vibrato_: [in, mod] \(\rightarrow\) [out]; [delay, feedback, mix]. _Mono and pingpong delay_: [in] \(\rightarrow\) [out]; [delay, feedback, mix, frequency, q, stereo_offset]. _Reverb (mono and stereo)_: [in] \(\rightarrow\) [out]; [size, damping, width, mix].
* _Nonlinear filters_: _distortion_ [17]: [in] \(\rightarrow\) [out]; [gain, hardness, asymmetry]. _Bitcrush_: [in] \(\rightarrow\) [out]; [bit]. _Dynamic range controllers, compressor/noisegate/expander_ [18]: [in, sidechain] \(\rightarrow\) [out]; [threshold, ratio, attack, release, knee]. _Pitchshift_: [in] \(\rightarrow\) [out]; [semitone].
* _Utility_: [in], [out], [mix], [stereo_lfo].

To decode the graph, we first add a start-of-graph token \(\mathrm{x_{S}}\) to the empty sequence. Then, we estimate the following graph tokens \(\mathrm{x_{1}},\cdots,\mathrm{x_{N}}\) one by one with prediction heads for the token type, node id, node type, and edge type. The decoding starts with the [out] node, then follows the breadth-first search (BFS) order. For the drum graphs, we decode the mixing subgraph first; then, we decode each individual source track subgraph one by one in BFS order. We finish the decoding when the token type estimator outputs an end-of-graph token \(\mathrm{x_{E}}\). To condition the reference latent \(z\), we concatenate it with the other input tokens. **Parameter Estimator.** We reuse the prototype decoder for the parameter estimation; we add another projection head, append a task token \(\mathrm{x_{T}}\) to differentiate the two tasks, and remove the causal attention mask. Since each parameter has a different range and scale, we translate and rescale the ground-truth value to fit into the \([0,1]\) range. **Architecture Details.** The FFT size, hop length, and the number of Mel filter banks of the reference Mel spectrogram are \(1536\), \(384\), and \(256\), respectively. The convolutional backbone is a VGGish model [12] with the following modifications: (i) depthwise separable convolutions [24], (ii) channels divided into four groups with dilations of \([1,2,4,8]\), and (iii) the use of layer normalization [25]. We used a Transformer decoder layer with \(1\) (singing) and \(6\) (drum) queries for the pooling. For the graph decoder, we use a \(6\)-layer transformer encoder with pre-layer normalization [26] and \(16\) heads.
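To make the decoding target concrete, the following is a minimal sketch of our own (not the authors' code) showing how a prototype graph could be serialized into the kind of node/edge token sequence described above; the toy type vocabularies, the helper names, and the choice to emit all node tokens before the edge tokens are illustrative assumptions rather than the paper's exact scheme.

```python
from collections import deque

# Illustrative vocabularies; the real system has 33 processor types and
# (outlet, inlet)-pair edge types.
NODE_TYPES = {"out": 0, "mix": 1, "distortion": 2, "crossover": 3, "in": 4}
EDGE_TYPES = {("out", "in"): 0, ("low", "in"): 1, ("high", "in"): 2}

def serialize_prototype(nodes, edges):
    """Turn a prototype graph into a flat token sequence.

    nodes: dict node_id -> type name; edges: list of (src, dst, outlet, inlet).
    Tokens are (token_kind, payload) tuples; a decoder would predict these
    fields with separate heads (token type, node id, node/edge type).
    """
    # BFS over reversed edges, starting from the [out] node, so the sequence
    # begins at the output and walks towards the sources.
    out_id = next(i for i, t in nodes.items() if t == "out")
    preds = {i: [] for i in nodes}
    for s, d, m, n in edges:
        preds[d].append((s, m, n))
    order, seen, queue = [], {out_id}, deque([out_id])
    while queue:
        v = queue.popleft()
        order.append(v)
        for s, _, _ in preds[v]:
            if s not in seen:
                seen.add(s)
                queue.append(s)

    tokens = [("SOG", None)]                      # start-of-graph token x_S
    for v in order:                               # one token per node
        tokens.append(("node", (v, NODE_TYPES[nodes[v]])))
    for s, d, m, n in edges:                      # one token per edge
        tokens.append(("edge", ((s, d), EDGE_TYPES[(m, n)])))
    tokens.append(("EOG", None))                  # end-of-graph token x_E
    return tokens

# Example: the "multiband distortion" graph from Figure 1.
nodes = {0: "out", 1: "mix", 2: "distortion", 3: "distortion", 4: "crossover", 5: "in"}
edges = [(1, 0, "out", "in"), (2, 1, "out", "in"), (3, 1, "out", "in"),
         (4, 2, "low", "in"), (4, 3, "high", "in"), (5, 4, "out", "in")]
print(serialize_prototype(nodes, edges))
```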
While the original paper used eigenvectors of the normalized Laplacian matrix as the node id encoding, we use sinusoidal embeddings since the eigenvectors are intractable during the decoding. **Training.** We train the prototype decoding task in a teacher-forcing manner using cross-entropy losses with label smoothing of \(0.1\). At the same time, we train the parameter estimation task by feeding an oracle prototype \(G_{0}\) as input and using the \(l_{1}\) distance as an objective. We use the AdamW [27] optimizer, a linear learning rate scheduler with a \(\mathtt{5e-4}\) peak learning rate, \(50\)k warmup steps, \(200\)k total training steps, and a batch size of \(32\). ## 4 Experiments ### Data **Singing.** We used the OpenSinger dataset [28], which has \(50\) h of clean recordings from \(76\) speakers. We used 90% of the audio from \(71\) speakers for training. We use the remaining 10% and the other \(5\) speakers' recordings as two separate validation sets, _seen_ and _unseen_ speaker datasets, respectively, to compare the effect of dry source distribution shift on the model performance. **Drum.** We collected the source signals ourselves; we rendered the [kick], [snare], [hat], [tom], [ride], and [crash] tracks separately with \(14\) commercial sampling libraries and MIDI files from the Groove MIDI Dataset [29]. We applied equalization to the tracks so that each instrument has the same average frequency response across the kit. To generate each reference audio, we sampled a random segment from the dry tracks. Then, we generated a graph with the input nodes corresponding to non-zero-energy tracks so that there are no dummy subgraphs. This means that our system should also perform a drum instrument recognition task. We used the \(11\) kits and the MIDI files from the drummer1/session1-3 subset for training. We used drummer1/eval_session for the two validation sets; we used the same kits for a _seen_ kit validation set and the remaining \(3\) kits for an _unseen_ kit validation set. ### Metrics **Prototype Decoding.** For each decoding step, we evaluate the node and edge error rate, counting each prediction as an error if either the token type, node id, or node/edge type is incorrect. Using the following metrics, we also compare the decoded graph with the ground truth. (i) Invalid graph rate: we consider a graph invalid if it is cyclic, not connected, or missing necessary connections. (ii) Intersection-over-union (IOU) of the node types: while this metric ignores the graph structure, it checks whether the necessary processors and input nodes are decoded somewhere in the graph. For the drum graphs, we calculate the IOU for each track and mixing subgraph and average the values. Finally, (iii) we render the ground-truth graph and the estimated prototype graph with default parameters and compare the outputs using the multi-scale spectral loss (MSS-default) [3]. **Parameter Estimation.** Along with the parameter loss, we evaluate the MSS loss rendered on the oracle prototype with estimated parameters (MSS-oracle) and on the fully decoded graph (MSS-full). **Listening Test.** We measured subjective scores with the MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) test [30, 31]. We asked \(8\) graduate students to score the similarity between the reference and the audio rendered with the estimated graph. A total of \(48\) sets were scored (\(24\) sets for each task and \(12\) sets each for the _seen_ and _unseen_ speaker/kit sets).
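As an illustration of two of these metrics, the sketch below (our own paraphrase, not the official evaluation code) computes the node-type IOU from two type multisets and a multi-scale spectral distance between two rendered signals; the FFT sizes and the exact linear-plus-log formulation of the spectral loss are assumptions.

```python
import numpy as np
from collections import Counter

def node_type_iou(pred_types, true_types):
    """IOU between multisets of node types (graph structure is ignored)."""
    p, t = Counter(pred_types), Counter(true_types)
    inter = sum((p & t).values())
    union = sum((p | t).values())
    return inter / union if union else 1.0

def mss_loss(x, y, fft_sizes=(512, 1024, 2048), eps=1e-7):
    """Multi-scale spectral distance between two audio signals x and y."""
    loss = 0.0
    for n_fft in fft_sizes:
        hop = n_fft // 4
        def spec(sig):
            frames = [sig[i:i + n_fft] for i in range(0, len(sig) - n_fft, hop)]
            return np.abs(np.fft.rfft(np.stack(frames) * np.hanning(n_fft), axis=-1))
        X, Y = spec(x), spec(y)
        loss += np.mean(np.abs(X - Y)) + np.mean(np.abs(np.log(X + eps) - np.log(Y + eps)))
    return loss

# Example usage with toy data.
print(node_type_iou(["distortion", "mix", "out"], ["distortion", "distortion", "mix", "out"]))
rng = np.random.default_rng(0)
x, y = rng.standard_normal(44100), rng.standard_normal(44100)
print(mss_loss(x, y))
```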
### Evaluation Results **Sanity Check.** Table 2 reports the evaluation results. Before training the blind estimation models, we first trained a graph autoencoder by introducing another TokenGT as a graph encoder (we also embedded the parameters for the encoder input). Its evaluation results, e.g., a \(0\) node error rate, confirm that the graph decoder is powerful enough and the dimension of the latent \(z\) is sufficiently large to reconstruct the original graph. Furthermore, its MUSHRA score is comparable to the hidden reference's, agreeing with the objective metrics. Next, we evaluated the graph decoder with the latent vectors set to \(0\). Its node error rate (\(0.502\) and \(0.602\)) is better than what a random guess would achieve. This is because (i) the probability distribution of the processors is nonuniform, (ii) we sorted the LTI subgraphs, and (iii) the ground-truth intermediate prototype is available to the network, which can be exploited for a better guess. **Performance Analysis.** On the _seen_ dry speaker/kit sets, the proposed model reports a node error rate of \(0.215\) and \(0.335\) for the singing and drum task, respectively, indicating that perfect reconstruction of the prototype graph is rare. Yet, audio rendered from the estimated graph can be perceptually close to the reference, with MUSHRA scores of \(76.6_{\pm 4.6}\) and \(69.8_{\pm 5.4}\). On the _unseen_ dry speaker/kit sets, the evaluation results are degraded in most metrics, confirming that graph estimation from unseen sources is challenging. We note that the majority of the errors come from either (i) the wrong order of processors (see Figures 5 and 6) or (ii) some processors which destroy the signal and make preceding processors harder to notice (see Figure 6).

Figure 4: Blind estimation with Tokenized Graph Transformer.

Finally, to check the difficulty of the blind estimation, we trained the same model but with the dry sources also provided as input by concatenating them with the reference across the channel axis. Indeed, the estimation performance improves by a noticeable margin, indicating that extracting the graph-relevant information solely from the reference is a challenging task. **Blind estimation strategy comparison.** We compared our token-by-token approach with the conventional node-by-node decoding method [22]. While this model uses the same TokenGT backbone, it estimates the next node type first and then performs an edge prediction task using the transformer outputs. It showed similar or slightly worse performance overall compared to our model, and it had the drawback of a high invalid graph rate. Finally, we tried a single-stage generation method; we decoded the node/edge parameters along with the categorical types. Since the transformer only has access to the decoded intermediate graph, its parameter estimation performance was much worse than that of the two-stage approach, resulting in a higher MSS-full loss and a lower MUSHRA score. ## 5 Discussion **Summary.** We integrated the audio processing graph structure with deep learning methods. We first synthesized the graph-reference pair data, discussed its characteristics, and built a blind estimation system with off-the-shelf neural network components. We found that token-by-token generation is an effective method for our graph structure, and that the two-stage approach separately treating the connectivity information and the processor parameters is beneficial.
**Future Works.** (i) Currently, the synthetic data remain at the toy-example level; more diverse processors and input source signals could be desirable. (ii) Many real-world audio processing graph structures allow multi-edges and cycles; relaxing our single-edge DAG constraint could allow more expressive processing capabilities. (iii) The evaluation results showed that encoding references from an unseen source distribution is challenging; we need an improved method for encoding only the processing-relevant information in a disentangled manner. (iv) The proposed graph decoder uses the default transformer with sinusoidal encodings; further performance improvements might be obtained by explicitly injecting graph connectivity information. (v) While we used ground-truth prototypes as input to train the parameter estimation task, at inference time the model uses decoded graphs, which differ from the originals in most cases. Furthermore, while not all parameters are equally important for perceptual similarity, we ignored such aspects. Since most processors we used are known to be differentiable [6, 11, 32], end-to-end training with audio-domain objectives could be possible and beneficial for alleviating such issues. (vi) With the differentiable processors, we can combine them with neural audio processors, allowing us to balance interpretability/controllability and expressibility. (vii) Finally, extending the current blind estimation framework to other applications, e.g., automatic processing [8, 9, 10] and style transfer [11], is a promising research direction.

Table 2: Evaluation results of the proposed method and other baselines. U and S denote the unseen and seen speaker/kit validation sets, respectively.

Figure 5: An inference result from the proposed method on the drum mixing estimation task (left: ground truth, right: prediction). First, our model correctly predicted the source instrument types: [kick], [snare], [hat], and [ride]. The ground-truth graph panned [kick] and [ride], which is also reconstructed by the prediction. However, the prediction failed to estimate (i) the correct multi-band processing of [reverb], (ii) multiple stages of [distortion] processors, and (iii) the modulation of linear filters in the mixing subgraph.

Figure 6: Inference results from the proposed method on the singing voice effect estimation task (left: ground truth, right: prediction).
2310.08728
Quantum Communication Countermeasures
Quantum communication, particularly quantum key distribution, is poised to play a pivotal role in our communication system in the near future. Consequently, it is imperative to not only assess the vulnerability of quantum communication to eavesdropping (one aspect of quantum hacking), but also to scrutinise the feasibility of executing a denial-of-service attack, specifically, stopping quantum communication from working. Focusing primarily on the free-space quantum channel, the investigation of possible denial-of-service attacks from a strategic perspective is performed. This encompasses the analysis of various scenarios, numerical modelling, risk estimation and attack classification. The out-of-FOV (field of view) attack emerges as a particularly severe threat across nearly all scenarios. This is accompanied by proposed counter-countermeasures and recommendations.
Michal Krelina
2023-10-12T21:31:29Z
http://arxiv.org/abs/2310.08728v1
# Quantum Communication Countermeasures ###### Abstract Quantum communication, particularly quantum key distribution, is poised to play a pivotal role in our communication system in the near future. Consequently, it is imperative to not only assess the vulnerability of quantum communication to eavesdropping (one aspect of quantum hacking), but also to scrutinise the feasibility of executing a denial-of-service attack, specifically, stopping quantum communication from working. Focusing primarily on the free-space quantum channel, the investigation of possible denial-of-service attacks from a strategic perspective is performed. This encompasses the analysis of various scenarios, numerical modelling, risk estimation and attack classification. The out-of-FOV (field of view) attack emerges as a particularly severe threat across nearly all scenarios. This is accompanied by proposed counter-countermeasures and recommendations. **Keywords:** quantum communication, quantum countermeasures, QKD, laser, denial-of-service attack, quantum vulnerabilities ## 1 Introduction Quantum network and its protocols provide information-theoretic provably secure communication, including the exchange of encryption keys via quantum key distribution (QKD) [1, 2, 3]. More advanced quantum information networks or quantum internet [4] can go beyond QKD and offer additional services such as quantum digital signatures [5], secure identification [6], secret sharing [7], or quantum secure direct communication [8, 9]. Quantum communication has the potential to become a new security standard alongside post-quantum cryptography (PQC) to protect against attacks by future quantum computing and its quantum cryptoanalytical capabilities [10, 11, 12]. In comparison to PQC, quantum communication necessitates the establishment of new infrastructures and typically requires a period of maturation. The ensuing benefit, however, encompasses additional non-security services, including distributed quantum computing [13, 14], clock synchronisation [15], and networked quantum sensors [16]. This is particularly evident in the case of integrated Quantum Information Networks (QIN), which are instrumental for long-distance quantum communication. These networks deploy various quantum assets such as quantum repeaters, quantum entanglement distribution systems, quantum sensors, and imaging systems in space around the Earth [17]. These quantum assets manifest primarily as dedicated quantum satellites or integral components of space experimental laboratories. In the long run, we anticipate the seamless integration of quantum systems with classical counterparts, including laser or microwave communication systems, Earth surveillance systems, and so forth [18]. One of the crucial advantages of classical communication over quantum is the ability to reroute traffic in the event of a line or node being severed, typically without major consequences. However, in the near to medium term, especially when quantum networks are not yet large-scale networks with numerous interconnected nodes, an attack on a quantum network asset in space could result in a significant denial-of-service (DoS) scenario. This could have far-reaching implications if critical communication infrastructure depends heavily on this link. Quantum hacking [19, 20, 21, 22, 23, 24, 25, 26, 27] is a dynamic area of research focused on uncovering vulnerabilities in quantum communication that could potentially lead to eavesdropping on communications, which is often the primary objective. 
Nevertheless, DoS attacks are, in principle, more straightforward, and it is vital to comprehend their nature and implications prior to quantum communication becoming an integral part of critical communication infrastructure. In such a scenario, we can draw parallels to electronic warfare and discuss quantum countermeasures [28, 29, 18]. Countermeasures in quantum communication, particularly in terms of dazzling or, albeit less accurately, jamming, remain a relatively unexplored area. There are primarily two works that delve into this topic in depth [30, 31]. These works operate at the "tactical" level, focusing on specific scenarios, considering specific designs and offering analyses of QKD performance in terms of quantum bit error rate (QBER). In this study, our aim is to approach this subject more strategically, with the objective of identifying potentially promising scenarios and vectors of attack. These will then be subjected to further detailed study and assessment. Within this context, we will examine the feasibility and types of attacks on quantum communication in free space, categorise types of DoS attacks, provide preliminary risk estimates, conduct initial numerical modelling, ascertain the prerequisites for such attacks, and propose potential mitigations. This paper is divided as follows. First, Section 2 introduces quantum network design and security aspects. Section 3 introduces the term "quantum communication countermeasure," describes the most vulnerable elements in a quantum network, provides basics on laser weapons, and classifies types of laser attacks on a quantum receiver. Then, Section 4 provides boundaries of considered attacks on quantum communication and considered scenarios, and describes numerical modelling and risk estimation. Results are presented in Section 5, including their discussion. Section 6 then presents possible counter-countermeasures to protect quantum assets. Lastly, Section 7 concludes the paper. Appendix A provides details on laser propagation. ## 2 Quantum communication security/resilience In general, quantum information networks allow the exchange of quantum bits at large distances, ranging from laboratory settings to intercontinental spans. This enables a multitude of applications, including Quantum Key Distribution (QKD), distributed quantum computing, networked sensing, and other Quantum Information Network (QIN) services [4]. The true service capability of a given QIN hinges largely on its ability to distribute quantum entanglement (quantum entangled quantum bits). If it can, numerous additional protocols and services mentioned above can be implemented. If not, such a quantum network is usually considered for QKD only (no quantum entanglement is needed). A fundamental property of quantum communication lies in its inherent resistance to eavesdropping, provided certain conditions are met, and the entire system is properly implemented, rendering it information-theoretically secure [32]. Consequently, the domain of security proofs for quantum communication protocols constitutes a dynamic field of research, focused on analysing the resilience of quantum protocols against various attacks, colloquially referred to as quantum hacking. Quantum hacking may target hardware imperfections (detectors, emitters) [19, 22, 23, 27], exploit vulnerabilities in the quantum link (e.g. many potential side channels for signals from coherent sources) [25, 26], or address imperfect implementations of quantum protocols themselves [21, 24]. 
While most of these studies share the common objective of demonstrating the potential for eavesdropping on quantum communication (typically QKD, though many of these methods can be extended to other quantum communication protocols), they also propose mitigations for any identified vulnerabilities. In some cases, these studies also discuss the possibility of (temporary or permanent) DoS attacks as a consequential outcome [19, 22, 20]. This paper specifically centres on these DoS attacks. In a severe conflict, interfering with and eavesdropping on an opponent's communication would provide a significant advantage. Yet, if such activities prove infeasible or overly challenging, a DoS attack, as part of electronic or information warfare, would be highly desirable and, indeed, anticipated [33]. It's worth noting that many quantum side-channel attacks can be mitigated through improved hardware, such as single photon sources instead of weak coherent sources, adding additional optical elements (such as spectral and spatial filtering in the case of free-space communication), or employing new protocols such as (measurement-)device-independent protocols whose proper implementation effectively eliminates most of the side channels. However, these mitigations are irrelevant in our context, where we consider intense illumination (blinding, dazzling), or temporary or permanent harm to optical or electronic components. ### Quantum network designs A comprehensive understanding of quantum network designs is crucial for a successful DoS attack. Quantum networks are implemented using either fibre optic links or free-space links. It is worth noting that a quantum network cannot function in isolation; classical channels for command and control are essential for its operation. The fibre optic channel, commonly employed for terrestrial applications, is technologically more straightforward. However, the distance over which quantum information can be transmitted is constrained due to the exponential increase in transmission losses with distance, and amplification of the quantum signal is impossible due to the no-cloning theorem. Presently, commercially available point-to-point connections can span up to 100 km [34] (although, through experimental utilisation of the so-called twin-field QKD protocols, distances of up to 1,000 km have been achieved [35]). For greater distances, repeaters (relay points) become indispensable. In configurations involving three or more endpoints, or in more complex quantum network topologies, additional elements like optical switching or splitting may also be employed. Repeaters can be categorised into trusted repeaters, which are relay nodes with access to the key and are assumed to be secure against intrusion and attacks by any unauthorised parties (hence, they are deemed "trusted"), and quantum repeaters (quantum relay nodes) equipped with quantum memory and/or capable of quantum entanglement distribution. The free-space link comprises multiple optical ground stations (OGSs) and quantum satellites acting as relay points. Fig. 1 illustrates the three most common designs for free-space quantum links: Figure 1: Various considered quantum link configurations. Configuration a) simple downlink and/or uplink considered for QKD. Configurations b) and c) are considered for quantum information networks with quantum entanglement. Here, b) is only with downlinks and entanglement swapping at the ground.
c) is with downlink and inter-satellite link with entanglement swapping at satellites.
1. Simple downlink and uplink.
2. Entangled downlink (involving the transmission of two entangled photons towards two OGSs) with the potential for entanglement swapping at the OGS.
3. Downlink with bidirectional inter-satellite links (with potential entanglement swapping occurring at the satellites).
OGSs can be seamlessly integrated into terrestrial quantum networks. Future concepts for leveraging quantum communication in space envision even more complex designs and topologies, including the utilisation of intermediate quantum satellites [36]. In particular, space-based free-space quantum links enable communication over distances of thousands of kilometres [37] owing to the roughly quadratic (square-law) scaling of loss with distance, in contrast to the exponential loss of transmission in fibre links. As will be detailed later, the space-based quantum link offers additional advantages for the downlink, as the primary losses in the atmosphere occur within the final approximately ten kilometres [17]. Conversely, ground-to-ground free-space links have been experimentally tested with a range of up to 144 km [38] due to atmospheric losses. Experimental testing of QKD in space encompasses simple downlinks from Low Earth Orbit (LEO) [39] or Geostationary Orbit (GEO) [40], uplinks to LEO [41], or entangled pair downlinks [37, 42]. The majority of presently planned QKD projects, whether for demonstration or nearing commercialisation, focus on either a simple downlink or an entangled downlink at LEO [43, 44, 45, 46]. However, there are also exceptions, such as [47], which explores the possibility of an uplink. ## 3 Quantum communication countermeasures For the purpose of this report, the term "quantum communication countermeasures" (QCCM) refers to various actions leading to temporary or permanent denial of quantum communication service, or its significant slowing down. We identified that quantum communication countermeasures could be conducted along three vectors:
1. Quantum link: free-space link
2. Quantum link: fibre optic link
3. Classical link: quantum network command and control
The attack on classical links can be executed through a wide spectrum of cyber methods [48], and as such, will not be extensively covered here, with a few exceptions. From this perspective, a satellite for quantum communication is a conventional satellite equipped with quantum-optical, optical, and radio frequency (RF) payloads. Therefore, all countermeasures designed against classical satellites can be applicable here. This encompasses kinetic anti-satellite (ASAT) weapons, cyber attacks, radio-frequency jamming, as well as high-energy laser, microwave, and particle beam attacks [49, 50]. The attack on fibre optic links poses a greater challenge, requiring prior knowledge of the location of the optic fibres or quantum nodes for a successful execution. In this context, a simple fibre interruption suffices. However, underwater optical fibres are of heightened interest, presenting a greater probability of sabotage in case of conflict [18]. In the early stages of quantum networks, such an attack can prove highly effective due to the network's low density, making rerouting very costly or even impossible. As simulated in [51], quantum network resources escalate exponentially, while fidelity experiences an exponential decrease with path length. The scenario becomes more intriguing in the case of the free-space channel, where, under specific conditions, multiple strategies can be employed.
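The contrast between fibre and free-space loss scaling mentioned above can be made concrete with a back-of-the-envelope sketch of our own; the attenuation coefficient, apertures, and wavelength are assumed, textbook-level values, and pointing, absorption, and turbulence losses are ignored.

```python
def fibre_transmittance(length_km, loss_db_per_km=0.2):
    """Fraction of photons surviving a fibre link; 0.2 dB/km is a typical
    attenuation for telecom fibre at 1550 nm."""
    return 10 ** (-loss_db_per_km * length_km / 10)

def free_space_geometric_transmittance(length_km, tx_aperture_m=0.3,
                                       rx_aperture_m=1.0, wavelength_m=810e-9):
    """Fraction of a diffraction-limited beam collected by the receiving
    aperture (geometric loss only; pointing, absorption and turbulence ignored)."""
    divergence_rad = 1.22 * wavelength_m / tx_aperture_m   # far-field half-angle
    beam_radius_m = divergence_rad * length_km * 1e3       # spot radius at the receiver
    return min(1.0, (rx_aperture_m / 2) ** 2 / beam_radius_m ** 2)

for d_km in (100, 500, 1000):
    print(f"{d_km:5d} km: fibre {fibre_transmittance(d_km):.1e}, "
          f"free space {free_space_geometric_transmittance(d_km):.1e}")
```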
In the following, we consider the laser as the most viable tool for use against quantum channels or optical communication in general. ### Quantum network elements in space The fundamental components of quantum free-space communications are the receiver and the transmitter. Broadly put, a transmitter comprises a source of single photons or weak coherent pulses, optical elements (mirrors, filters, attenuators, beam splitters, etc.), and a telescope. Conversely, the receiver generally encompasses a telescope, optical elements, and a single photon detector (SPD). Typically, the integral parts of both the receiver and transmitter include components for the beacon laser (emitter and receiver) as part of the acquiring, pointing, and tracking (APT) system, which also incorporates an additional laser and a camera (such as a CCD). A typical representation of a receiver and transmitter can be found in Figure 1 in [39]. For the purposes of this discussion, we consider the use of a laser for the attack unless otherwise specified. When contemplating a laser attack, we identified the single photon detector as the most susceptible target for QCCM based on the literature review and simulations. The SPD is, in essence, an extremely sensitive device capable of detecting individual photons within a specific wavelength range. Two main types of SPDs are considered: semiconductor-based avalanche photodiodes (APDs) and superconducting single-photon detectors (SNSPDs). While SNSPDs generally exhibit superior performance, they are more expensive and necessitate cryocooling compared to APDs [17]. Thresholds for blinding or damaging APDs have been studied, for instance, in [19, 20, 52, 53], although comprehensive studies on laser damage to APDs have not been conducted yet. This differs from the situation with camera sensors [54, 55]. The values of laser power and their effects are presented in Tab. 1. It is essential to note that the presented laser power is applied directly to the SPD. In real Quantum Key Distribution (QKD) systems, several optical elements such as bandpass filters (BPF), beam splitters (BS), or polarising beam splitters (PBS) are placed prior to the SPDs, effectively reducing the laser power collected by the individual SPDs.

| Power / power density | Effect | Ref. |
|---|---|---|
| \(\gtrsim 10^{-15}\) W | Too high noise for SPD | [53] |
| \(\gtrsim 10^{-11}-10^{-8}\) W | Non-gated SPD (APD) blinding | [56, 52] |
| \(\sim 10^{-3}\) W | APD thermal blinding | [52] |
| \(\gtrsim 10^{-1}\) W/cm\({}^{2}\) | CCD image transducer saturation threshold (used as part of the APT) | [57] |
| \(\gtrsim 1.2\) W | APD permanent blinding, lower sensitivity | [19] |
| \(\gtrsim 2\) W | APD structural damage, complete insensitivity | [19] |
| \(\gtrsim 4\) W | Attenuator damage | [26] |
| \(\gtrsim 3\) W | Polarisation spatial filter degradation | [20] |
| \(\gtrsim 3\times 10^{2}\) W/cm\({}^{2}\) | Optical glass melting | [57] |
| \(\gtrsim 10^{3}\) W/cm\({}^{2}\) | Melting initiation threshold for aluminium | [58] |

Table 1: Summary of the approximate laser power or power density needed for various effects. It is important to note that the minimal time of target illumination by a laser needed to cause the discussed effect was not studied in all cases. Power-per-area values are to be converted to power based on the receiving telescope area.

Free-space optical communication has some specific features. Firstly, one must take into account the field of view (FOV). Telescope systems used at satellites and OGSs, when communicating, must be in a mutual line of sight (in the best case with the same optical axis), i.e. in their respective FOVs. This is a critical consideration for our analysis, as any strong attack leading to the destruction of an SPD must be directed into the target's FOV. Given that the typical FOV angle ranges from 3 \(\mu\)rad (e.g. [43]) to around 500 \(\mu\)rad (e.g. [59]), there is extremely limited space for an "in-FOV" attack, as demonstrated in Table 2. It is worth noting that the FOV for the APT system is on the order of a few mrad or tens of mrad. On the other hand, due to receiver system imperfections, an "out-of-FOV" attack can be considered more relevant, as first suggested in [30]. Secondly, photon loss due to atmospheric absorption and scattering is dominant only in the lower approximately 10 km. In general, two wavelengths are considered: 810 nm and 1550 nm. Communication at 810 nm has a beam divergence one half of that at 1550 nm. However, 810 nm is overwhelmed by the daylight background, whereas 1550 nm is considered potentially usable during daylight [17]. Thirdly, OGSs are usually positioned at fixed locations, lacking flexibility and potentially being vulnerable to constant surveillance and probes. Conversely, the protective elements of OGSs can be continuously improved compared to component replacements at a satellite in Low Earth Orbit (LEO) and above. ### Laser weapon systems As mentioned earlier, in our analysis, we will consider an attack on free-space quantum communication using a laser weapon system (LWS). LWS can be divided into three types according to their impact [60]:
* **Dazzling** is considered more as a countermeasure than a weapon because it has no permanent effect. Typically, dazzling targets optical sensors like a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). In our context, it means an increased noise level leading to the DoS.
* **Blinding or satellite image sensor damage** (or damage to associated electronics and optics) is caused by a laser with higher energy. The damage is permanent and can refer to either single-pixel damage or damage to the entire sensor. In our case, it means temporary or permanent damage to the SPD.
* **Damage to the satellite bus** requires a very high-energy laser where the damage is considered due to the thermal effects of the absorbed energy. This attack is focused on critical systems like the thermal regulation system, batteries, solar panels, or the attitude control system, resulting in the complete failure of the satellite. Typically, the delivered energy for melting should be around or above 10,000 J [61] (e.g., 2 kW impinging for 5 seconds).
It's important to note that the threshold between dazzling and blinding is almost impossible to predict, which introduces a risk of blinding during the action of dazzling countermeasures. Counterspace or anti-satellite LWS encompass not only high-power laser devices but also high-fidelity space situational awareness, precise beam tracking and control, and adaptive optics to counteract atmospheric turbulence (for ground-based lasers) [62].
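As a rough screening aid, the thresholds of Table 1 can be turned into a simple classifier for the power that actually reaches the detector; the sketch below is our own illustration, and the attenuation of the optical train in front of the SPD is an assumed, design-dependent parameter.

```python
# Illustrative mapping of delivered laser power (W, at the SPD) to the
# approximate effects listed in Table 1. Thresholds are order-of-magnitude
# values; the attenuation of filters/beam splitters ahead of the SPD is an
# assumed, design-dependent parameter.
SPD_EFFECT_THRESHOLDS_W = [
    (1e-15, "excess noise: quantum communication denied"),
    (1e-11, "non-gated APD blinding"),
    (1e-3,  "APD thermal blinding"),
    (1.2,   "APD permanent blinding (reduced sensitivity)"),
    (2.0,   "APD structural damage (complete insensitivity)"),
]

def classify_effect(power_at_receiver_w, optics_attenuation=0.1):
    """Return the strongest Table-1 effect reached by the power that actually
    arrives at one SPD after the receiver optics."""
    power_at_spd = power_at_receiver_w * optics_attenuation
    effect = "no effect expected"
    for threshold, label in SPD_EFFECT_THRESHOLDS_W:
        if power_at_spd >= threshold:
            effect = label
    return effect

# Example: 1 nW collected by the receiver already blinds a non-gated APD.
print(classify_effect(1e-9))
```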
In general, (space) laser weapons are a well-studied topic, as seen in references such as [63, 64]. However, detailed information and parameters of current and currently developed LWS are naturally classified. Available information is usually limited to the range of output power, and the aperture can be deduced from published photos. An overview of the current status in the counterspace context can be found in [60, 65, 57, 66]. ## 4 Attacks against quantum assets Within this section, we will set boundaries for our analysis and specify individual scenarios along with their objectives.

| \(\phi_{FOV}\) [\(\mu\)rad] | \(l=10\) km | \(l=500\) km | \(l=1,000\) km | \(l=35,000\) km |
|---|---|---|---|---|
| 1 | 0.01 | 0.5 | 1 | 35 |
| 10 | 0.1 | 5 | 10 | 350 |
| 100 | 1 | 50 | 100 | 3,500 |
| 1,000 | 10 | 500 | 1,000 | 35,000 |

Table 2: FOV diameters \(d\) [m] at distance \(l\) for selected FOV angles.

First and foremost, we categorise attacks into two types. The first is the in-FOV attack (direct attack), wherein the LWS is within the target's field of view (FOV) and the entirety of the initial laser energy, minus losses, is directly channelled into the quantum receiver system. The in-FOV attack bears the potential for the quantum receiving system's destruction. Second, the out-of-FOV attack (side attack) takes into account receiver imperfections, specifically the acquisition of a small fraction of laser light due to internal scattering. This form of attack, at its best, can achieve a dazzling effect. The possible ways of the in-FOV and out-of-FOV attacks are illustrated in Figure 2. Next, our analysis considers only the delivered laser power. This approach is notably simplified, as the critical factor is not only the transmitted power but also the transmitted energy, i.e., the duration for which the target was illuminated. Another crucial aspect is whether the employed LWS is a continuous wave (CW) or a pulsed laser. Furthermore, different destruction mechanisms come into play depending on whether nano- and picosecond or femtosecond laser pulses are employed [67]. However, we exclude consideration of femtosecond lasers due to additional non-linear effects in atmospheric propagation. The existing literature presently lacks the depth required for a more detailed analysis concerning quantum communication components. To illustrate, in [19], it is reported that a 2 W CW laser led to the destruction of APDs when illuminated for 60 s. However, it is important to note that 60 s may not represent the minimal time required for permanent damage to occur in APDs. Additionally, we refrain from specifying a particular Laser Weapon System (LWS) in terms of output power properties or the employed laser technology. Instead, we present a universal assessment over LWS initial powers ranging from approximately 1 W to 1 MW. Unless explicitly stated, we assume that the LWS is configured to mitigate or suppress non-linear atmospheric effects such as blooming [68]. Figure 2: This picture illustrates the FOV of the satellite and OGS targets, encompassing possible in-FOV and out-of-FOV attacks, as well as the distinct Ground-LEO-Ground attack variation across various platforms.
This implies a continuous-wave laser of up to \(25\) kW (where the blooming effect begins to exhibit significance [69]) when operated from a ground-based platform. For the sake of simplicity, we consider a zenith angle of \(\phi=0^{\circ}\) (or an elevation angle of \(90^{\circ}\)) for the in-FOV attack. This implies that the LWS is always directed perpendicular to the horizon plane, representing the shortest path through the atmosphere and to the target (i.e., the altitude, \(h\), equates to the propagation path, \(L\)). Conversely, for out-of-FOV scenarios, we assume a zenith angle of \(\phi=60^{\circ}\), simulating an attack from a greater distance, such as from a neighbouring country. It is worth noting that a zenith angle of \(\phi=60^{\circ}\) results in a suppression factor of approximately \(0.37\) for a satellite at an altitude of \(h=1,000\) km, and approximately \(0.58\) at an altitude of \(h=500\) km, when compared to \(\phi=0^{\circ}\). For both types of attack, we consider an ideal LWS targeting system, wherein the axis of the laser beam emitted from the LWS is precisely aligned with the centre of the target's aperture, in conjunction with high-fidelity space situational awareness. In the case of the out-of-FOV attack, the simulation was based on the in-scattering profile [30], with a slight modification incorporating a suppression parameter \(\kappa_{outFOV}\) to account for increased uncertainty stemming from potential future enhancements in design, attack angles, and related factors. Further details can be found in Appendix A. As a result, this analysis serves as an illustrative, strategic-level approximation, offering an order-of-magnitude estimate of the requisite LWS power, and it yields insights into potentially significant scenarios. An alternative method for executing a quantum DoS attack involves the generation and dispersion of scattering aerosols, such as soot aerosol [70]. This would necessitate dispersal above the OGS, potentially from an aircraft. However, as described later in more detail, we consider this scenario improbable due to anticipated anti-aircraft defence measures, resulting in only a limited number of plausible scenarios. A potent variant of the out-of-FOV attack, not involving lasers, is the "redout" effect triggered by the detonation of a nuclear weapon [71]. In this context, we do not contemplate the use of nuclear detonations solely for the purpose of executing a quantum communication DoS. Finally, our analysis is also simplified regarding the definition of quantum communication operability. Many studies on quantum communication focus on parameters like the key rate (in the case of QKD) or, universally, the QBER. The QBER typically has a threshold, commonly around \(8-12\%\), beyond which, due to high noise levels, the communication is deemed insecure and terminated. Our approach is derived from [53], establishing a threshold received power, \(P_{recv}\gtrsim 10^{-15}\) W, above which the noise level becomes prohibitively high for secure quantum communication. This threshold is based on a limit on the number of noise/background photons, \(3\cdot 10^{-5}\) counts pulse\({}^{-1}\), required to maintain an error rate below \(5\%\). It's important to note that elevated noise levels, which approach the QBER threshold, also result in a reduction of the quantum communication rate.
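Two quantities used repeatedly in what follows, the FOV footprint of Table 2 and the received-power threshold for denial of service, reduce to a few lines of arithmetic; the sketch below is our own illustration and relies on the small-angle approximation.

```python
def fov_footprint_diameter_m(fov_angle_rad, distance_m):
    """Diameter of the FOV cone at a given distance (small-angle approximation),
    i.e., the region an in-FOV attacker would have to occupy (cf. Table 2)."""
    return fov_angle_rad * distance_m

def quantum_dos_reached(received_power_w, threshold_w=1e-15):
    """True if the power coupled into the quantum receiver exceeds the noise
    level at which secure quantum communication is no longer possible."""
    return received_power_w >= threshold_w

# Table 2 example: a 10 urad FOV at 1,000 km gives a 10 m footprint.
print(fov_footprint_diameter_m(10e-6, 1_000e3))   # -> 10.0
print(quantum_dos_reached(3e-14))                  # -> True
```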
### Scenarios Here, we specify various scenarios with different vectors of attack. Additionally, we assume that the positions of ground stations as well as satellites are known. In brief, quantum communication satellites can be tracked by space surveillance. The surveillance of space objects and debris is typically carried out by ground sensors, such as the Space Fence project [72], or by the EU Space Surveillance and Tracking (SST) support framework [73]. Determining the location of ground stations poses a greater challenge. They can be localised in the case of using an RF classical link for command and control. Otherwise, we must rely on intelligence work. We consider a GEO satellite at an altitude of \(h=35,800\) km and LEO satellites at altitudes of \(h=500\) and \(1,000\) km. For the LWS in the air domain, we contemplate drones, planes, or stratospheric vehicles at typical altitudes of \(h=5\), \(10\), and \(30\) km. Regarding the ground component, both LWS (mobile, i.e., ships and trucks, or fixed) and OGSs correspond to an altitude of \(h=0\). It's worth noting that for the in-FOV attack originating from the ground, we also consider the use of adaptive optics (AO) to minimise atmospheric losses. On the other hand, out-of-FOV attacks, especially with high power, may not require AO and possibly only a coarse targeting system. In our calculations, we assume good (cloudless) weather conditions. Nonetheless, it's important to recognise that adverse weather will typically have a more detrimental effect on quantum communication than a high-power laser beam. We consider the following scenarios:
* **Ground-LEO-Ground** scenario involves a strong illumination of an LEO satellite, causing a significant portion of the laser to be reflected towards the ground-based receiver. In this scenario, the satellite behaves like a very bright but moving star, as first described in [30]. The LWS can be a mobile (truck or ship) or fixed LWS facility, with fewer limitations on available power.
* **Ground-LEO** scenario involves the in-FOV or out-of-FOV attack, where a ground-based LWS (facility, ship, or truck) potentially employs a high-energy laser for a minimal time in the target's FOV to achieve a direct hit. The success of the out-of-FOV attack is very probable, even with low LWS output power.
* **Ground-GEO** scenario is similar to the Ground-LEO scenario. On one hand, the situation for the attacker is easier due to the stable and known position of the GEO satellite. On the other hand, the required laser power is higher due to the intensity decrease as \(1/L^{2}\) (compare the suppression \(\sim 8\times 10^{-10}\) for GEO with \(\lesssim 10^{-6}\) for LEO), where \(L\) is the distance. Therefore, this scenario is considered valid, especially for the out-of-FOV attack.
* **Air-Ground** scenario considers an LWS placed on a plane, drone, or stratospheric vehicle. Due to the relatively small distance between the LWS and the ground target, i.e., a small FOV cone, conducting the in-FOV attack is very challenging. The out-of-FOV attack is considered quite accessible.
* **Air-LEO** scenario is similar to the previous one except for targeting the LEO satellites. The in-FOV attack is again very challenging. The potential advantage could be reduced losses due to the propagation in the atmosphere and a simplified LWS without adaptive optics from approximately 5 km or higher.
* **LEO-Ground** scenario places the LWS on a satellite targeting an OGS; an in-FOV attack requires the attacking satellite to pass through the OGS's narrow FOV cone, which will be rather a very rare case. On the other hand, the out-of-FOV attack on the target is more accessible.
* **LEO-LEO** scenario could be relevant for more advanced network designs where an inter-satellite quantum link with entanglement distribution and swapping is considered. However, this scenario could be very challenging for the attacker due to the relatively fast mutual motion and the need for advanced orbital manoeuvrability of the satellite with the LWS payload. Nevertheless, the out-of-FOV attack could be considered more feasible.
* **LEO-GEO** scenario, similar to LEO-LEO, can be expected to be technically very challenging. Due to payload limitations, mostly the out-of-FOV attack (where low laser power is sufficient) can be expected.
* **GEO-Ground** scenario might have the advantage of a vast area within reach for (possibly permanent) laser illumination, making it particularly relevant for the out-of-FOV attack. The main limitation will be the available laser power.
Other scenarios, such as Air-GEO and Ground-MEO (medium Earth orbit, between \(2,000\) and \(35,800\) km), can be generalised and interpolated from the scenarios considered above. It's worth noting that a more precise simulation should also take into account the scattering of laser photons in the atmosphere, where a limited number of photons can be scattered into the FOV. However, such a scenario was not investigated in this study. ### Numerical modelling In our simulation, several parameters had to be set. Firstly, the sizes of the apertures (of parabolic mirrors or lenses) were fixed. The specific aperture sizes are detailed in Tab. 3, and they were determined as averages based on a survey of the relevant literature. Next, we consider that the LWS will operate at wavelengths either identical to or very close to those used in quantum communication, specifically 810 and 1550 nm, in order to bypass band filters. We take into account the parameters of the LWS to ensure avoidance or suppression of non-linear atmospheric effects such as thermal blooming. This typically involves the implementation of adaptive optics, active beam control, managing pulse duration and repetition rate, or employing a CW laser restricted to a certain power density threshold (which depends on wavelength, altitude, and atmospheric conditions) [69]. The characteristics of free-space and atmospheric propagation are detailed in Appendix A, which includes modelling for adaptive optics. It is crucial to note that our numerical calculations assume "good" weather conditions without clouds. In the case of cloudy weather, both the attacks and quantum communication itself are weakened. For this simulation, we have developed our own simulation framework, Aether (currently at version 0.2), written in Python. Aether is a free-space propagation simulator comprising individual modules that represent distinct optical elements and environments. It not only simulates the free-space propagation of laser beams but is also prepared for the simulation of free-space quantum communication, such as QKD, in subsequent projects. ### Risk estimation The risk is assessed through a combination of the probability of a particular attack and its potential impact, as outlined in Table 4. The impact categories correspond to the following effects: Negligible to no effect, Marginal to dazzling, Critical to blinding, and Catastrophic to satellite bus damage. Based on this assessment, the risk can be classified into four categories, each with corresponding recommendations:
* **Low:** Continue with existing controls, but closely monitor for any changes.
* **Medium:** Requires focused attention to mitigate the risk, along with regular or continuous monitoring. It is advisable to implement design-originated mitigations.
* **Serious:** Demands immediate action to reduce the risk to an acceptable level. It is imperative to implement a comprehensive set of countermeasures, including considering alternative routes for quantum communication.
* **High:** The risk is deemed excessively high and is not acceptable. Rigorous mitigation measures must be put in place. There is a significant risk of unfeasible quantum communication in at least the medium to long term.

| System | Diameter [m] |
|---|---|
| Ground LWS | 1.0 |
| Airborne LWS | 0.2 |
| Space LWS | 0.2 |
| Ground quantum communication system | 0.6 |
| LEO quantum communication system | 0.2 |
| GEO quantum communication system | 0.2 |

Table 3: Fixed average aperture sizes of the various considered systems.

The likelihood of an attack was assessed differently for each scenario, based on the expected opportunities for a laser attack. For instance, in the case of a Ground-to-LEO attack, the in-FOV attack was considered improbable due to the very low chance of the ground-based LWS entering the target's FOV cone. However, the out-of-FOV attack was deemed probable, considering certain range and weather limitations. Another example is an Air-to-Ground/LEO attack involving various LWS platforms such as drones, planes, and stratospheric vehicles. While the in-FOV attack might be slightly more probable than in a Ground-to-LEO attack, it is still highly improbable to achieve target acquisition within the small FOV (approximately \(\phi_{FOV}=10\) \(\mu\)rad, see Table 2). The out-of-FOV attack is considered more probable than in the Ground-to-LEO case due to the significantly higher mobility of the LWS. In particular, the following values were considered:
* **Ground LWS:** Available power ranging from 1 kW to 1 MW.
* **Drone LWS:** Speed approximately 150 km/h; altitude about 5 km; laser power range 0.1-2 kW.
* **Plane (e.g. C-17 Globemaster) LWS:** Speed approximately 830 km/h; altitude about 10 km; laser power range 1-100 kW.
* **Stratospheric vehicle LWS:** Speed approximately 50 km/h; altitude about 30 km; laser power range 0.1-2 kW.
* **Satellite LWS:** Speed approximately \(28,000\) km/h; altitude about 500 km; laser power range 0.1-2 kW.
## 5 Results and discussion We conducted numerical modelling as described in Section 4.2 for each scenario, considering both in-FOV and out-of-FOV attacks. The results and selected numerical outputs are detailed and presented in the following subsection. Subsequently, we assessed the risk as outlined in Section 4.3. Various potential mitigations are discussed in Section 6. However, it's important to note that many of these mitigations can be countered simply by employing an LWS with increased power, allowing the attacker to achieve a similar level of impact. ### Scenario discussion **Ground-LEO-Ground scenario.** In this scenario, only the out-of-FOV attack is considered, where a ground-based LWS illuminates the transmitting QKD satellite, causing a fraction of the light to scatter towards the receiving OGS [30]. The main advantage of this attack lies in the availability of high-energy lasers, ranging from tens to thousands of kW or even MW, for the ground LWS. Additionally, mitigating this attack is more challenging since the scattered light falls within the receiver's FOV.
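Before quoting the published estimate, a crude radiometric sketch of our own (not the model of Appendix A) indicates why modest LWS powers can already be problematic in this geometry; the beam divergence, atmospheric transmittance, and the assumption of isotropic scattering into a hemisphere are all simplifying assumptions.

```python
import math

def scattered_power_at_ogs_w(p_lws_w, distance_up_m, distance_down_m,
                             sat_area_m2=4.0, albedo=1.0,
                             beam_divergence_rad=20e-6, atm_transmittance=0.5,
                             ogs_aperture_m=0.6):
    """Rough estimate of laser power reaching the OGS quantum receiver after
    reflecting off an illuminated satellite (isotropic scattering assumed)."""
    # Irradiance at the satellite from the uplink beam.
    spot_radius_m = beam_divergence_rad * distance_up_m
    irradiance = p_lws_w * atm_transmittance / (math.pi * spot_radius_m ** 2)
    # Power intercepted and re-scattered by the satellite body.
    p_scattered = irradiance * min(sat_area_m2, math.pi * spot_radius_m ** 2) * albedo
    # Fraction collected by the OGS aperture, assuming hemispherical re-emission.
    ogs_area = math.pi * (ogs_aperture_m / 2) ** 2
    return p_scattered * atm_transmittance * ogs_area / (2 * math.pi * distance_down_m ** 2)

# Example: 100 W from the ground onto a Micius-sized satellite at 1,000 km.
p = scattered_power_at_ogs_w(100, 1_000e3, 1_000e3)
print(f"{p:.1e} W at the receiver (DoS threshold ~1e-15 W)")
```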
It was estimated in [30] that an LWS with an output power of about 1 kW is sufficient to achieve the mission, i.e., quantum DoS, assuming a QKD satellite the size of Micius and a distance of about \(1,000\) km between the LWS and OGS, which is very favourable for the attacker. Here, we show that even much lower laser power can be sufficient, as is illustrated in Fig. 3. \begin{table} \begin{tabular}{|l||l|l|l|l|} \hline **Likelihood/Impact** & **Negligible** & **Marginal** & **Critical** & **Catastrophic** \\ \hline **Improbable** & Low & Low & Medium & Medium \\ \hline **Remote** & Low & Medium & Serious & Serious \\ \hline **Probable** & Medium & Serious & Serious & High \\ \hline **Frequent** & Medium & Serious & High & High \\ \hline \end{tabular} \end{table} Table 4: 4x4 risk assessment matrix. Given that the downlink form of space-ground communication is expected to be more common, it is reasonable to assume that this will be one of the most prevalent types of attacks. The out-of-FOV attack can be mitigated, particularly through the proper design of the QKD satellite, which will have a small surface area (some designs operate with a size of 6U\({}^{3}\)), along with a maximally reduced albedo. Footnote 3: The term “6U” refers to a design based on the size of six CubeSats, i.e., miniaturised satellites with a form factor of 10 cm cubes. It is important to note that the scattering model in [30] is quite simplistic, and real-world scattering will vary for each satellite design. It will depend not only on the surface (CubeSat-based satellites are expected to have a smaller surface area) but also on the reflectivity of the material on the satellite's surface. **Ground-LEO Scenario.** This scenario involves a direct attack on a quantum satellite in Low Earth Orbit (LEO) equipped with a quantum receiver. The out-of-FOV attack was described and modelled in [30]. The concept here is to illuminate the target (LEO satellite) and utilise highly off-axis photons received by the telescope, which then get internally scattered and coupled into the QKD receiver. Note that in this scenario, the out-of-FOV attack is considered less effective than in the Ground-LEO-Ground scenario, primarily because the suppression parameter \(\kappa_{outFOV}\) is less favourable for the attacker compared to the approximately additional \(1/L^{2}\) factor corresponding to the loss during space-to-ground transmission after the reflection from the satellite. The potential performance of the out-of-FOV attack as a function of LWS initial power is depicted in Fig. 4. It is evident that even a 1 W laser, under good weather conditions and precise targeting, can lead to a DoS scenario. The risk could be mitigated significantly, for instance, by improving the design of the receiving telescope, which serves as the main entry point for off-axis illumination. As for the in-FOV attack from the ground, it is extremely challenging. With a realistic FOV angle of \(\phi_{FOV}=10-500\)\(\mu\)rad, the ground LWS terminal would need to be positioned, for example, within approximately 5-250 meters of the transmitting OGS. This is an extremely challenging and highly improbable scenario. In the event of being within the FOV cone, the impact could be up to catastrophic, where 10 kW with Adaptive Optics (AO) or 100 kW without AO would be sufficient to permanently destroy the onboard SPD. Figure 3: Initial LWS power, \(P_{ini}\), versus received power at the target’s quantum receiver, \(P_{recv}\). The elevation angle between the LWS and the satellite is set at 30\({}^{\circ}\), with zero zenith angle between the satellite and OGS. The hatched areas consider two extreme limits. 
The upper limit corresponds to a relevant satellite surface of 4 m\({}^{2}\) (such as for Micius) with an albedo of 1, while the lower limit assumes 0.01 m\({}^{2}\) (such as for a 1U CubeSat) with an albedo of 0.01, as described in Appendix A. The background colours represent various effects: possibly no effect (white), too much noise for quantum communication (grey), APD thermal blinding (red), CCD blinding (blue), APD permanent destruction (purple), optics destruction (orange), and general melting (yellow). **Ground-GEO Scenario.** This scenario is, in principle, similar to the one described above, with the difference being the increased distance of approximately \(40,000\) km (the LWS does not necessarily need to be placed on the equator). As a consequence, this scenario generally demands more energy, roughly by a factor of \(1,000\), for the LWS to achieve similar results as in an attack on LEO satellites. In principle, the out-of-FOV attack can be continuous, and in cases of insufficient design, the risk could be classified as _High_. In theory, the satellite's FOV cone could extend up to several kilometres in radius on Earth. Nevertheless, we still consider it highly improbable for a ground-based LWS (e.g., on a truck or boat) to reach it. Furthermore, achieving permanent destruction from the in-FOV attack would require several MW, which is unlikely due to the nonlinear effects in the atmosphere that significantly diminish the laser beam. Additionally, a MW laser would necessitate a large installation or facility. **Air-Ground Scenario** involves placing the LWS on an airborne platform such as a drone (at an approximate altitude of \(h\sim 5\) km), a plane (\(h\sim 10\) km), or a stratospheric vehicle like a balloon (\(h\sim 30\) km). While an LWS in the air could potentially enter the OGS's FOV cone, its feasibility is highly mission-dependent. For instance, it would be unlikely to execute such a mission within an opponent's territory where anti-aircraft defences would be present. Since this would be the most common situation, we lean towards the _Improbable_ likelihood. On the other hand, the attack could be more feasible in the absence of anti-aircraft defences, for instance against an OGS placed on a ship in international waters or directly on a battlefield. For a potential in-FOV attack, the effect of LWS power is visualised in Fig. 5 (solid lines). Here, the required power for permanently damaging the SPD is about tens of watts, which is very accessible for all the mentioned airborne platforms. One could envision a special undercover mission with a drone flying into the target's FOV and damaging the OGS's quantum receiver. The out-of-FOV likelihood was estimated as _Remote_ since even with only a \(30^{\circ}\) elevation angle between the OGS and the stratospheric device, a horizontal distance of about 50 km from the OGS would still Figure 4: LWS initial power, \(P_{ini}\), versus target received power impinging on the target’s quantum receiver, \(P_{recv}\), in the configuration of the out-of-FOV attack (with an elevation of \(30^{\circ}\)) for two distances between the LWS and the target. 
The hatched areas consider the range of \(\kappa_{outFOV}\) over three orders of magnitude, as described in Appendix A. The meaning of the background colours is the same as in Fig. 3. be required. We consider this as a remote probability due to the reasons described above - however, it is more probable than the in-FOV attack. **Air-LEO Scenario** is similar to the previous scenario, with the target being the LEO QKD satellite. This effectively means a longer distance, and from an altitude of approximately \(h\sim 10\) km, atmospheric effects can be neglected. In this case, the longer distance results in larger geometric beam spreading, which could potentially be an advantage for the attacker, allowing for a global attack. For example, considering the out-of-FOV attack from an elevation angle of \(30^{\circ}\) with a satellite at \(h=500\) km implies a possible distance of about \(850\) km from the zero zenith angle, which is accessible for most potential missions. The improbability of the in-FOV attack has the same reasoning as for the Air-Ground scenario. **LEO-Ground Scenario.** This scenario is similar to the Ground-LEO scenario, but the LWS is placed in LEO. The main differences lie in the smaller beam widening of the laser (due to propagation in the atmosphere only in the last approximately \(10\) km) and the limited available laser power at the satellite. The fact that there's smaller diffraction can roughly compensate for the required energy for the out-of-FOV attack, which is available in space. For instance, in the Ground-to-LEO configuration, approximately five times higher initial power is needed to deliver the same laser power as in the LEO-to-Ground configuration, both considering \(h=1,000\) km, different receiving aperture, etc. The out-of-FOV numerical results are depicted in Fig. 4 (dashed and dot-and-dashed lines). Here, even \(1\) W initial power can be sufficient to cause dazzling on the Ground receiver. The in-FOV attack now is not limited by the potential anti-aircraft defence, and the LWS satellites can potentially cover the whole Earth. However, such LWS satellites would need to be numerous since active manoeuvring in LEO will be very fuel-consuming and unfeasible in a longer timescale. As demonstrated in Fig. 5, a space-based LWS at \(h=500\) km would need approximately 2-3 kW laser power for SPD destruction, which is the upper limit of laser power considered for satellite-based lasers. Note that in the opposite case (Ground-LEO), for the same effect at the same distance, a power of more than \(15\) kW without AO or more than \(3\) kW with AO would be required. **LEO-LEO scenario**. This is technically a more complex scenario relevant for cases when the quantum receivers are present on satellites (uplink or inter-satellite links). Due to the absence of atmosphere, laser propagation and its effects are more straightforward. However, the in-FOV attack would have high Figure 5: LWS initial power, \(P_{ini}\), versus target received power, \(P_{recv}\), for \(810\) nm at altitudes \(L=10\) km (solid lines), \(500\) km (dashed lines), and \(1,000\) km (dot-and-dashed lines). The meaning of the background is the same as in Fig. 3. requirements for manoeuvrability and fuel consumption. The out-of-FOV attack, as in all other scenarios, is more accessible but also more challenging. 
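Across the scenarios above, the feasibility of the in-FOV attack is governed by the size of the receiver's field-of-view footprint at the attacker's end of the link, roughly \(r\approx L\,\phi_{FOV}/2\). The figures quoted earlier (about 5-250 m around the OGS for a LEO receiver at \(1,000\) km, up to kilometres for GEO) follow from this one-line estimate; a minimal sketch with illustrative distances:

```python
# Footprint radius of the receiver's field of view at the other end of the link,
# r ~ L * phi_FOV / 2: the region an in-FOV attacker has to reach.
fov = (10e-6, 500e-6)                                # FOV full angle [rad] (Tab. 2)
links = {
    "LEO receiver, ground terminal at 1,000 km": 1_000e3,
    "GEO receiver, ground terminal at 40,000 km": 40_000e3,
    "ground OGS, stratospheric platform at 30 km": 30e3,
}
for name, L in links.items():
    r_lo, r_hi = (L * f / 2 for f in fov)
    print(f"{name}: footprint radius {r_lo:,.1f} - {r_hi:,.0f} m")
```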
**LEO-GEO scenario** is another technically very challenging scenario. Nevertheless, it is relevant for the out-of-FOV attack, where the absence of propagation through the atmosphere (compared to Ground-GEO) is a significant advantage, roughly a factor of 4, which can even compensate for the lower laser power available at LEO; in other words, from the ground one needs about four times more energy to cause the same effect at the same distance. Moreover, due to the fast orbiting (LEO satellites circle the Earth several times per day depending on the altitude), we consider the likelihood of the out-of-FOV attack to be _Remote_. **GEO-Ground scenario**. Assuming limited laser power at GEO, we only consider the out-of-FOV attack, as shown in Fig. 7. The main advantage here is the potential to cover a vast portion of the Earth's surface. This implies that, in principle, the attack can be sustained permanently. The task in this scenario can be reformulated as follows: let us assume a fixed initial laser power; then, by varying the angle, we can ask how large an area can be covered with a received power of \(\geq 10^{-15}\) W, enough to cause a DoS. Taking into account a simple geometry (i.e., zero zenith angle) and initial powers \(P_{ini}=10\) and \(100\) W, we can estimate a dazzled area with a radius of \(r=1.1\) and \(3.4\) km on the Earth's surface. ### Risk estimation The primary results are presented in Table 5, along with the corresponding risk classification as outlined in the risk assessment table (Tab. 4) described in Section 4.3. Our risk assessment did not identify any scenario where the risk would be classified as High, i.e., a level at which quantum communication should not be pursued at all. Conversely, the out-of-FOV type of attack is assessed as _Serious_ in many of the studied scenarios. However, as discussed later, this risk can be mitigated, especially through proper and advanced design, which can lower the risk to _Medium_ or even to _Low_. Figure 6: Same as Fig. 4 except in the LEO-to-Ground scenario. ## 6 Counter-countermeasures The key parameter for free-space (quantum) communication is the signal-to-noise ratio (SNR). As quantum information is encoded in individual photons, the primary approach to enhancing SNR lies in noise suppression. To this end, incoming photons must traverse an array of optical devices before reaching a single-photon detector, which selectively filters out noisy photons. Such noise may originate naturally (e.g., from the sun, moon, or stars) or from man-made sources (such as city lights or reflections from satellites) [53]. In our context, these photons can also be regarded as elements of countermeasures, specifically, as the LWS photons that ultimately reach the detector. Notably, the following improvements and risk mitigation elements are typically considered: * **Telescope Design.** The primary threat, as outlined earlier, is the out-of-FOV attack--the reception of highly off-axis photons. Therefore, whether placed on the ground, in the air, or in space, telescopes should be designed to limit these off-axis photons. This entails minimising scattering surfaces near the optics and augmenting internal baffling within the telescope [30]. In this context, the higher energy of the LWS may be required to overcome this protective measure. * **Satellite Architecture.** As described in the Ground-LEO-Ground scenario, the reflection of the laser from the transmitting satellite to the receiver poses a significant threat. 
Hence, the satellite bus should be engineered to minimise light reflection and scattering, which may involve employing suitable materials, such as anti-reflective coatings. However, this objective is partially at odds with \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{c|}{**In-FOV attack**} & \multicolumn{3}{c|}{**Out-of-FOV attack**} \\ \hline **Scenario** & **Likelihood** & **Impact** & **Risk** & **Likelihood** & **Impact** & **Risk** \\ \hline Ground-LEO-Ground & No & No & None & Frequent & Marginal & **Serious** \\ \hline Ground-LEO & Improbable & Catastrophic & Medium & Probable & Marginal & **Serious** \\ \hline Ground-GEO & Improbable & Critical & Medium & Frequent & Marginal & **Serious** \\ \hline Air-Ground & Improbable & Critical & Medium & Remote & Marginal & **Medium** \\ \hline Air-LEO & Improbable & Critical & Medium & Frequent & Marginal & **Serious** \\ \hline LEO-Ground & Improbable & Marginal & Low & Probable & Marginal & **Serious** \\ \hline LEO-LEO & Improbable & Critical & Medium & Remote & Marginal & **Medium** \\ \hline LEO-GEO & Improbable & Marginal & Low & Remote & Marginal & **Medium** \\ \hline GEO-Ground & Improbable & Marginal & Low & Frequent & Marginal & **Serious** \\ \hline \end{tabular} \end{table} Table 5: Risk assessment results for each scenario for in-FOV and out-of-FOV attacks. Figure 7: Same as Fig. 4, except for the GEO-to-Ground scenario. the need for a highly reflective surface to address issues related to solar heating. In this context, the higher energy of the LWS may be required to overcome this protective measure. * **Time-Gate Filter.** The SPD is in the "on" mode only when signal photons are anticipated. The efficiency of this process hinges on the short duration during which the gate is open, which, in turn, relies on the precision of synchronisation with the transmitting party--an aspect falling under the purview of the APT system. This mitigation is less effective against a CW laser, whereas a dazzling pulsed laser would need to align with the narrow time gate. * **Wavelength Filter.** Much of the man-made background spans a broad wavelength range. Presently, wavelength filters for daylight QKD possess a linewidth of 25.6 pm, providing a noise suppression of over 44.7 dB across the 1380-1760 nm range, with an optical efficiency of 74.5% [74]. To bypass this filter, we must consider the precise wavelength used by quantum communication sources, accounting for corrections due to the Doppler shift. In such a scenario, this mitigation becomes irrelevant from the perspective of a laser attack. * **Spatial Filter.** The spatial filter refers to the size of the collecting telescope--a smaller telescope leads to lower noise. Despite advancements in laser and targeting technology, atmospheric turbulence remains the primary factor degrading the beam spot width, resulting in beam spreading and wandering, which can require a larger telescope or lens. * **QKD Receiver Design.** In [31], an array of single-photon avalanche diodes (SPAD) employing straightforward spatial discrimination was proposed as a counter-countermeasure effective against both in-FOV and out-of-FOV attacks. Experimental evidence supports the assertion that this solution can yield at least a 20 dB improvement in the signal-to-interference ratio compared to a single SPD [31]. 
* **Optical Attenuators.** Optical attenuators tailored for quantum communication incorporate a specialised passive component--an optical fuse that permanently disconnects itself once the injected power surpasses a predefined threshold [26]. * **Frequency Hopping.** Frequency hopping (or agility) is a technique well-known in classical RF systems, such as radar [75]. The dynamic alteration of frequency (or wavelength) in optical communication, when combined with a wavelength filter, can serve as an effective counter-countermeasure against narrow-band LWS. However, compared to the RF domain, implementing this in the optical realm would present a substantial technological challenge, especially concerning photon sources and spectral filters. * **Rerouting.** A natural mitigation strategy involves switching to backup Quantum Key Distribution (QKD) links or rerouting the quantum connection. This, however, can be particularly challenging in the early stages of quantum networks, where redundancy is minimal, and rerouting consumes substantial resources. For instance, a free-space QKD link employing a satellite as a trusted repeater would necessitate rerouting via a terrestrial quantum network, where repeater nodes are typically spaced approximately 100 km apart. * **Alternative Methods.** Especially in the case of QKD, alternative approaches--such as ensuring a sufficient number of pre-loaded keys or adopting a post-quantum cryptographic scheme--should be available for situations where QKD is not feasible [76]. * **Unknown Position.** Knowledge of the locations of fibre optic cables, ground optical stations, or the trajectories of quantum satellites is crucial for a successful attack. Hence, keeping such information classified and unknown is imperative. ## 7 Conclusions This paper has focused on investigating potential countermeasures in free-space quantum communication (quantum link) with the aim of inducing a temporary or permanent quantum denial of service (DoS) from a strategic perspective, across various scenarios. The studied scenarios encompass situations where a quantum receiver, being the most sensitive and thus the most susceptible element, is located either on the ground or in space, within a satellite context. The considered attack utilises a laser situated on the ground, on an airborne platform (such as a drone, plane, or stratospheric vehicle), or in space. The simulations conducted in this study serve primarily as illustrations, given the current unavailability of more extensive data for finer-grained simulations. Moreover, a more detailed simulation at a tactical level would necessitate the consideration of specific designs for QKD systems as well as for LWS. Furthermore, it is worth noting that the chosen simulation mechanism tends to underestimate the threat posed by LWS, rather than the other way around. Nevertheless, these simulations provide a clear perspective on the fundamental viability of potential countermeasures in quantum communication and the resulting DoS. Our simulations reveal that a direct in-FOV attack (where the attacking laser is within the Field of View of the target) can exert a significant impact, potentially leading to the permanent damage of single-photon detectors, even with relatively low initial power. However, the practical execution of such an attack is exceptionally challenging. Conversely, the impact of the out-of-FOV attack is relatively modest. 
Nevertheless, it remains sufficient to induce a denial of service due to the elevated noise levels in the quantum channel, even with an initially very low power. In principle, the primary formidable challenge in this scenario lies in possessing an adequate targeting system for the laser, with the Ground-LEO-Ground scenario potentially presenting a notable concern. In summary, we identify the out-of-FOV attack as the principal threat to future quantum communication. The prerequisites for a laser weapon in this context are relatively undemanding in terms of required power, and the wavelength should align with the targeted quantum communication. This implies that commercially available lasers could be employed. The foremost challenge lies in target acquisition--namely, obtaining intelligence regarding the ground station's position on one side, and effecting precise tracking and targeting of satellites on the other. This aspect stands as the pivotal parameter in discerning potential malicious actors in quantum computing countermeasures. Based on our research, we propose the following recommendations: * Place a strong emphasis on the development of counter-countermeasures for quantum communication. * Give particular priority to the advancement of receiver design, incorporating features like absorptive surfaces to minimise scattering. This should also involve the implementation of a system for detecting and localising the source of out-of-FOV attacks. * As a part of the certification and validation process, it is crucial to experimentally estimate both the off-axis receiving imprint (in-scattering profile) and the scattering imprint (reflection profile) for each quantum channel receiver. This assessment helps in estimating potential risks. * Maintain a policy of keeping the positions of quantum receivers (be they satellites or ground-based) as confidential as possible. This strategy enhances security. * Research efforts should be directed towards exploring how laser weapons and laser attacks could be prohibited or, at the very least, limited by international agreements. This falls under the umbrella of qualitative arms control. ## Acknowledgement I am deeply grateful to Jurgen Altmann for his invaluable expertise in laser weapons, particularly their deployment in space. ## Author declaration Michal Krelina acknowledges the administrative support and article publishing charges provided by Czech Technical University in Prague. The author commenced work at the European Union Agency for the Space Programme (EUSPA) during the course of this research. However, it is declared that no information or data related to EUSPA has been used in this study. ## Appendix A Laser propagation simulation In the case of LWS, we consider the Gaussian intensity profile at a distance \(z\) and at the radius \(r\), given by \[I(z,r,\phi)=\frac{2P_{ini}}{\pi w_{tot}^{2}}\exp\left(-\frac{2r^{2}}{w_{tot}^{ 2}}\right)\tau_{tot}S_{tot} \tag{1}\] where \(P_{ini}\) is the initial power, \(w_{tot}^{2}\) represents the beam-waist radius, \(\tau_{tot}\) denotes the total optical transmittance, \(S_{tot}\) stands for the total Strehl ratio, and \(\phi\) is the path zenith angle. Then, the receiver power at a distance \(z\) is given by \[P_{recv}(z,r,\phi)=\tau_{r}\int_{0}^{D_{r}/2}dr\,r2\pi I(z,r,\phi)=\tau_{r}P_{ini }\left(1-e^{-\frac{D_{r}^{2}}{2w_{tot}}}\right)\tau_{tot}S_{tot}\quad\mbox{[W ]} \tag{2}\] where \(D_{r}\) represents the receiver aperture (see Tab. 
3), and \(\tau_{r}\) denotes the optical loss of the receiver, where we assume an efficiency of one for both the transmitting and receiving parts for simplicity. The beam waist reads \[w_{tot}^{2} = w_{d}^{2}+w_{t}^{2}+w_{j}^{2} \tag{3}\] where \(w_{d}^{2}\) accounts for the contribution from beam diffraction, \(w_{t}^{2}\) from the turbulence effect, and \(w_{j}^{2}\) from the jitter effect. The effect from diffraction for a Gaussian beam is given by [68, 77] \[w_{d}^{2} = \frac{M^{2}z^{2}}{k^{2}w_{0}^{2}}+w_{0}^{2}\left(1-\frac{z}{F} \right)^{2} \tag{4}\] where \(\lambda\) is the wavelength, \(k=2\pi/\lambda\) is the wave number, \(M^{2}\) is the laser quality factor (we consider the ideal case, \(M=1\)), \(z\) is the distance to the target, \(F\) is focal range (we consider \(F\rightarrow\infty\)), and the initial beam waist is connected to the aperture as \[w_{0}=\frac{D}{2\sqrt{2}}. \tag{5}\] The effect of turbulence depends on whether we consider uplink (ground-to-space propagation) or downlink (space-to-ground propagation). For downlink scenarios (i.e., \(h_{0}>h_{1}\)), the final turbulence waist is given by [58] \[w_{t}^{2} = \frac{w_{d}^{2}}{M^{2}}\left(\frac{D}{r_{0}}\right)^{5/3} \tag{6}\] where \(r_{0}\) is the Fried parameter or Fried's coherence length, defined as \[r_{0} = 0.431575k^{2}\sec^{11/6}(\phi)\mu_{u/d}. \tag{7}\] In the case of a downlink (i.e. \(h_{0}>h_{1}\)), we have \[\mu_{d} = \int_{h_{0}}^{h_{1}}dh\,C_{n}^{2}(h)\left(\frac{h-h_{0}}{h_{1}-h_ {0}}\right)^{5/3} \tag{8}\] and for uplink, \[\mu_{u} = \int_{h_{0}}^{h_{1}}dh\,C_{n}^{2}(h)\left(1-\frac{h-h_{0}}{h_{1}-h_{ 0}}\right)^{5/3} \tag{9}\] where always \(h_{0}<h_{1}\). Here, \(C_{n}^{2}(h)\) represents the refractive index structure coefficient. We employ the Hufnagel-Valley model [78] \[C_{n}^{2}(h)=0.00594\left(\frac{v}{27}\right)(10^{-5}h)^{10}e^{-h/1000}+2.7 \times 10^{-16}e^{-h/1500}+A_{0}e^{-h/100} \tag{10}\] where \(A_{0}\) defines the turbulence strength at ground level, and \(v\) is the RMS (root mean square) wind speed at high altitude. In the HV5/7 variant [79], values \(A_{0}=1.7\times 10^{-14}\) m\({}^{-2/3}\) and \(v=21\) m/s are used. In the case of utilising adaptive optics (AO), the turbulence waist is given by [80] \[w_{t}^{2} = \frac{w_{d}^{2}(1-S_{ao})}{S_{ao}} \tag{11}\] where \[S_{ao} = e^{-\sigma_{ae}^{2}}, \tag{12}\] \[\sigma_{ao}^{2} = \sigma_{WFS}^{2}+\sigma_{fit}^{2}+\sigma_{temp}^{2},\] (13) \[\sigma_{WFS}^{2} \approx \frac{4}{SNR^{2}},\] (14) \[\sigma_{fit}^{2} \approx \kappa\left(\frac{r_{s}}{r_{0}}\right)^{5/3},\] (15) \[\sigma_{temp}^{2} \approx \left(\frac{f_{G}}{f_{BW}}\right)^{5/3}, \tag{16}\] where typical values [80] are \(\kappa=0.34\), \(r_{s}=100\) cm, \(f_{BW}=20\) Hz, and \(f_{G}=8-40\) Hz. We consider high SNR, \(SNR=50\), thus \(\sigma_{WFS}^{2}\approx 0\). The jitter effect waist is given by [68] \[w_{j}^{2} = 2\langle\theta_{rms}^{2}\rangle z^{2}. \tag{17}\] Here, \(\theta_{rms}\) defines the total root mean square angular displacement due to all contributions from the jitter. We consider \(\theta_{rms}=2.0\times 10^{-6}\) rad [81]. 
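A minimal numerical sketch of the turbulence inputs above is given below: the HV5/7 profile of Eq. (10) and the uplink/downlink moments \(\mu_{u}\), \(\mu_{d}\) of Eqs. (8)-(9), integrated with a simple Riemann sum. Note that the standard Hufnagel-Valley form with the squared \((v/27)^{2}\) factor is used here, and the altitudes are illustrative.

```python
import numpy as np

A0, v = 1.7e-14, 21.0                                # HV5/7 parameters

def Cn2(h):
    """Refractive-index structure coefficient C_n^2(h) [m^(-2/3)], cf. Eq. (10)."""
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A0 * np.exp(-h / 100.0))

h0, h1 = 0.0, 500e3                                  # ground OGS to a LEO satellite [m]
h = np.linspace(h0, h1, 500_001)                     # ~1 m grid
dh = h[1] - h[0]
x = (h - h0) / (h1 - h0)
mu_d = np.sum(Cn2(h) * x ** (5.0 / 3.0)) * dh        # downlink weighting, Eq. (8)
mu_u = np.sum(Cn2(h) * (1.0 - x) ** (5.0 / 3.0)) * dh  # uplink weighting, Eq. (9)
print(f"mu_downlink = {mu_d:.2e} m^(1/3),  mu_uplink = {mu_u:.2e} m^(1/3)")
```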
The total optical transmittance reads \[\tau_{tot} = \tau_{a}\tau_{t}\tau_{p} \tag{18}\] where \(\tau_{a}\) is the atmosphere transmittance computed using MODTRAN [82] for the selected wavelengths (810 and 1550 nm), \(\tau_{t}\approx 1\) is the optical loss of the transmitter, and the pointing error [59] \[\tau_{p}=\frac{w_{t}^{2}}{w_{t}^{2}+4\sigma_{p}^{2}}\approx 1, \tag{19}\] where \(\sigma_{p}\) is the variance of the pointing probability density that follows a Gaussian distribution. We consider a very fine-tracking control with a very small deviation \(\sigma_{p}\) compared to \(w_{t}^{2}\). In general, the final Strehl ratio reads \[S_{tot}=\frac{1}{1+\sum_{i}(S_{i}^{-1}-1)}. \tag{20}\] where one could account for the thermal blooming effect [68] \[S_{i=TB} = \frac{1}{1+0.0625N_{D}^{2}}. \tag{21}\] where \(N_{D}\) is the thermal distortion number. For simplicity, we consider scenarios with LWS where the thermal blooming effect can be neglected, similar to other nonlinear atmospheric effects. The out-of-FOV attack was simulated by adapting the scattering profile model from [30] to modify the approximated received power \[P_{recv}(z,\phi)=\tau_{r}I(z,r=0,\phi)\sigma_{rec}\quad\text{[W]} \tag{22}\] where the in-scattering profile from [30] reads \[\sigma_{rec}=\kappa_{outFOV}\frac{\pi D_{r}^{2}}{4}\cos^{2}\phi. \tag{23}\] Here, we introduce a parameter \(\kappa_{outFOV}\), estimated as \(10^{-7}\) in [30]. For our demonstration, we consider a range of \(\kappa_{outFOV}\) values, specifically \(10^{-6}\) to \(10^{-9}\), and a zenith angle of \(\phi=60^{\circ}\) (elevation \(30^{\circ}\)). In the case of Ground-LEO-Ground, we employ the approximated optical scattering cross section (reflection profile) from [30] \[\sigma_{sat}=S\epsilon\sqrt{\cos\phi} \tag{24}\] where \(S\) is the area of the reflecting surface, and \(\epsilon\) is the albedo. In our analysis, we consider a range of values for \(S\), specifically 4 and 0.1 m\({}^{2}\), corresponding to the Micius and 1U CubeSat, respectively. The albedo is considered from maximal reflection (1) to suppressed reflection of 0.01. It is important to note that the scattering profiles from [30] are only very approximate and require precise estimation for each system.
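For reference, the out-of-FOV estimate of Eqs. (22)-(23) can be evaluated in a few lines. The sketch below assumes a diffraction-only beam waist, unit transmittances and Strehl ratio, and illustrative parameter values; it is not the Aether implementation.

```python
import numpy as np

lam, P_ini = 810e-9, 1.0                  # wavelength [m], initial LWS power [W]
D_t, D_r   = 1.0, 0.2                     # LWS and target apertures [m] (Tab. 3)
z, phi     = 1_000e3, np.deg2rad(60.0)    # range [m], path zenith angle
kappa      = 1e-7                         # kappa_outFOV (scanned 1e-6..1e-9 in the text)

k, w0 = 2 * np.pi / lam, D_t / (2 * np.sqrt(2))             # Eq. (5)
w2    = z**2 / (k * w0) ** 2 + w0**2                        # Eq. (4), diffraction only
I0    = 2 * P_ini / (np.pi * w2)                            # on-axis intensity, Eq. (1)
sigma_rec = kappa * np.pi * D_r**2 / 4 * np.cos(phi) ** 2   # Eq. (23)
print(f"P_recv ~ {I0 * sigma_rec:.1e} W")                   # Eq. (22), here ~2e-9 W
```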
2301.09377
Notes on Quantum oscillation for Hatsugai-Kohmoto model
Motivated by the non-Fermi liquid (NFL) phase in solvable Hatsugai-Kohmoto (HK) model and ubiquitous quantum oscillation (QO) phenomena observed in strongly correlated electron systems, e.g. cuprate high-Tc superconductor and topological Kondo insulator SmB$_{6}$, we have studied the QO in HK model in terms of a combination of analytical and numerical calculation. In the continuum limit, the analytical results indicate the existence of QO in NFL state and its properties can be described by Lifshitz-Kosevich-like formula. Furthermore, numerical calculations with Luttinger's approximation on magnetic-field-dependent density of state, magnetization and particle's density agree with the findings of analytical treatment. Although numerical simulation from exact diagonalization exhibits certain oscillation behavior, it is hard to extract its oscillation period and amplitude. Therefore, more work (particularly the large-scale numerical simulation) on this interesting issue is highly desirable and we expect the current study on HK model will be helpful to understand generic QO in correlated electron materials.
Yin Zhong
2023-01-23T11:45:46Z
http://arxiv.org/abs/2301.09377v2
# Notes on Quantum oscillation for Hatsugai-Kohmoto model ###### Abstract Motivated by the non-Fermi liquid (NFL) phase in solvable Hatsugai-Kohmoto (HK) model and ubiquitous quantum oscillation (QO) phenomena observed in strongly correlated electron systems, e.g. cuprate high-Tc superconductor and topological Kondo insulator SmB\({}_{6}\), we have studied the QO in HK model in terms of a combination of analytical and numerical calculation. In the continuum limit, the analytical results indicate the existence of QO in NFL state and its properties can be described by Lifshitz-Kosevich-like formula. Furthermore, numerical calculations with Luttinger's approximation on magnetic-field-dependent density of state, magnetization and particle's density agree with the findings of analytical treatment. Although numerical simulation from exact diagonalization exhibits certain oscillation behavior, it is hard to extract its oscillation period and amplitude. Therefore, more work (particularly the large-scale numerical simulation) on this interesting issue is highly desirable and we expect the current study on HK model will be helpful to understand generic QO in correlated electron materials. ## I Introduction Many-body physics beginning with the exploration of interacting electron gas, has been an essential issue in modern condensed matter physics, particularly after the discovery of cuprate high-Tc superconductors and fractional quantum Hall effect.[1] Recently, many researchers have focused on the so-called Hatsugai-Kohmoto (HK) model,[2; 3; 4] which acts as a unique lattice fermion model since its solvability results from infinite-ranged interaction and it behaves as a Hubbard atom in momentum space. Motivated by above solvability in any spatial dimensional and electron filling, interesting and intriguing extensions of HK model have been invented and studied, including unconventional electron pairing, Fermi arc, Kondo impurity, many-band system and so on.[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] Unlike other well-established solvable models in recent decades, the HK-like models do not need any quenched disorder as in Sachdev-Ye-Kitaev model or local \(Z_{2}\) gauge symmetry in Kitaev's toric code and honeycomb model.[20; 21; 22; 23; 24; 25; 26; 27] For the original HK model, it has translation invariance with topologically trivial nature, but surprisingly, it provides a strictly exact playground for non-Fermi liquid (NFL) and featureless Mott insulator in any spatial dimension, which is rare in statistical mechanics and condensed matter physics. (See Fig. 1) The solvability of HK model results from its locality in momentum space and one can diagonalize HK Hamiltonian (just diagonal \(4\times 4\)-matrix) for each momentum. The current studies have mainly focused on an interesting extension of HK model, i.e. the superconducting instability from the intrinsic NFL state in HK model,[7] which is inspired by ubiquitous NFL behaviors and their link to unconventional superconductivity in cuprate, iron-based superconductors (SC) and many heavy fermion compounds. 
Unexpected properties such as topological \(s\)-wave pairing and two-stage superconductivity have been discovered.[9; 10; 14] At the same time, it is well-known that physical observables like magnetization and resistivity in metals show periodic oscillation under external magnetic field.[29] (de Haas-van Alphen and Shubnikov-de Haas effect) Such phenomena is called quantum oscillation (QO) and is generally believed to result from the oscillation of electron's density of state at Fermi energy when Landau level crosses Fermi energy periodically. The underlying microscopic description is captured by the famous Lifshitz-Kosevich (LK) formula, which provides a practical tool for metallic systems to extract electrons' effective mass and Fermi surface from period and amplitude of oscillation. Furthermore, validity of LK formula has been intensively examined and it is still valid if electron-electron interaction effect does not alter the Fermi liquid nature of the considered system.[30; 31] For practical calculations, the extended LK formula with electron's self-energy is widely used and it is found that several NFL-like self-energy still leads to LK-like results.[31 Figure 1: (a) The Hatsugai-Kohmoto (HK) model with hopping strength \(t\) and interaction \(U\) in one spatial dimension and (b) on a square lattice. (c) The exact ground-state phase diagram for HK model exhibits a Mott insulator and a non-Fermi-liquid-like metal. (\(U_{c}=W\) with \(W\) being the bandwidth) The transition from metallic state to gapped Mott insulating phase belongs to the universality of the continuous Lifshitz transition. However, a natural question arises when the system is not described by Fermi liquid theory and the concept of quasiparticle breaks down. Thus, we may ask how about is the QO relating to NFL phenomena and the fate of LK theory.[32] This question is partially motivated by recent experiments of QO in many strongly correlated electron systems, including cuprate and iron-based superconductor,[33; 34] heavy fermion compounds,[35] topological Kondo insulator,[36] excitonic insulator and twisted-bilayer graphene.[37; 38] It is noted that in spite of the unambiguous NFL behaviors in cuprate and heavy fermion compound, or even insulating nature in topological Kondo insulator and excitonic insulator, QO has been firmly established in above quantum materials and certain tentative theoretical explanations have been put forward.[39; 40; 41] In literature, electron correlation effect in QO has been widely studied by Hartree-Fock mean-field approximation,[42; 43; 44] Hubbard-I approximation,[45] string-theory-inspired holographic duality,[46; 47] slave-particle theory with large-N approximation and dynamic mean-field theory.[48; 49; 50; 51; 52] It should be emphasized that all these mentioned treatments involve either uncontrollable approximation like Hartree-Fock mean-field theory, artificial large-N limit or infinite-dimension limit, thus our understanding on QO in interacting many-electron system is obviously not complete. In this work, inspired by the NFL state in solvable HK model, we study its possible QO. With the Luttinger's approximation, Hofstadter butterfly exists in all phases of the ground-state whatever they are NFL or Mott insulator. 
By examining the magnetic-field-dependent density of state, magnetization and particle's density, we find NFL states indeed show QO and their zero-temperature behaviors are captured by LK-like formula, which reflects the existence of two-Fermi-surface structure of the (non-Landau) quasiparticle in NFL. We have also performed an exact diagonalization calculation to go beyond the Luttinger's approximation. Due to the limited size of the system, although numerical results exhibit certain oscillation behavior, it is hard to extract its oscillation period and amplitude. Therefore, more work (particularly the large-scale numerical simulation) on this interesting issue is highly desirable and we expect the current study on HK model will be helpful to understand generic QO in strongly correlated electron materials. The remaining parts of this paper are organized as follows. In Sec. II, the HK model will be introduced with a quick review about its basic properties. In Sec. III, with Luttinger's approximation, we calculate and discuss QO in HK model. Then, Sec. IV provides an alternative calculation based on exact diagonalization. Sec. V is devoted to some discussions. Finally, we summarize our work in Sec. VI. ## II The Hatsugai-Kohmoto model ### Quick review of Hatsugai-Kohmoto model The Hatsugai-Kohmoto model we will study in this work has the following Hamiltonian, \[\hat{H} = -\sum_{i,j,\sigma}t_{ij}\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j \sigma}-\mu\sum_{j\sigma}\hat{c}^{\dagger}_{j\sigma}\hat{c}_{j\sigma} \tag{1}\] \[+ \frac{U}{N_{s}}\sum_{j_{1},j_{2},j_{3},j_{4}}\delta_{j_{1}+j_{3}= j_{2}+j_{4}}\hat{c}^{\dagger}_{j_{1}\uparrow}\hat{c}_{j_{2}\uparrow}\hat{c}^{ \dagger}_{j_{3}\downarrow}\hat{c}_{j_{4}\downarrow}.\] Here, the above model is defined on certain lattice, such as a one-dimensional chain or square lattice in Fig. 1(a) and (b). Other lattices like triangular or Kagome lattice can also be considered, however such multi-sublattice structure may lead to extra complexity and the above model must to be modified.[12] In Eq. 1, we use \(\hat{c}^{\dagger}_{j\sigma}\) to denote the creation operator of conduction electron (\(c\)-electron) at site \(j\) with spin flavor \(\sigma=\uparrow,\downarrow\). It satisfies the standard fermionic anti-commutation rule \(\{\hat{c}_{i\sigma},\hat{c}^{\dagger}_{j\sigma^{\prime}}\}=\delta_{ij}\delta_ {\sigma\sigma^{\prime}}\). Next, \(t_{ij}\) is hopping integral between \(i,j\) sites. Furthermore, to fix the electron's density, the chemical potential \(\mu\) is added. \(N_{s}\) is the number of sites. The last term of \(\hat{H}\) is the HK interaction,[2] which is an infinite-ranged interaction between four electrons but preserves the center of motion for \(c\)-electron due to the constraint of \(\delta\) function. Most importantly, if we use the Fourier transformation \(\hat{c}_{j\sigma}=\frac{1}{\sqrt{N_{s}}}\sum_{k}e^{ikRj}\hat{c}_{k\sigma}\), it is found that the Hamiltonian is local in momentum space, which means, \[\hat{H} =\sum_{k}\hat{H}_{k}\] \[\hat{H}_{k} =\sum_{\sigma}(\varepsilon_{k}-\mu)\hat{c}^{\dagger}_{k\sigma} \hat{c}_{k\sigma}+U\hat{c}^{\dagger}_{k\uparrow}\hat{c}_{k\uparrow}\hat{c}^{ \dagger}_{k\downarrow}\hat{c}_{k\downarrow}, \tag{2}\] where \(\varepsilon_{k}\) are dispersion of electrons and can be found from Fourier transformation of \(t_{ij}\). 
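The momentum-space locality of Eq. (2) can be made explicit numerically: for each \(k\), \(\hat{H}_{k}\) acts in a four-dimensional Fock space and is already diagonal in the occupation basis. A minimal sketch (the parameter values are illustrative):

```python
import numpy as np

def H_k(eps, mu, U):
    """4x4 matrix of H_k in the occupation basis {|00>, |10>, |01>, |11>}."""
    basis = [(0, 0), (1, 0), (0, 1), (1, 1)]       # (n_up, n_down)
    return np.diag([(eps - mu) * (nu + nd) + U * nu * nd for nu, nd in basis])

# one such block per momentum k; eigen-energies 0, eps-mu, eps-mu, 2(eps-mu)+U
print(np.diag(H_k(eps=-0.7, mu=0.3, U=4.0)))
```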
In some sense, the interaction term \(U\hat{c}^{\dagger}_{k\uparrow}\hat{c}_{k\uparrow}\hat{c}^{\dagger}_{k \downarrow}\hat{c}_{k\downarrow}\) is just as the Hubbard interaction in momentum space and it may stabilize a non-trivial NFL fixed point due to the breaking of a hidden \(Z_{2}\)-symmetry.[13] However, it seems that the true nature of such symmetry-breaking is still unclear. Let us return to the discussion of Hamiltonian Eq. 2. If we choose Fock state \[|n_{1},n_{2}\rangle\equiv(\hat{c}^{\dagger}_{k\uparrow})^{n_{1}}|0\rangle(\hat {c}^{\dagger}_{k\downarrow})^{n_{2}}|0\rangle \tag{3}\] with \(n_{i}=0,1\) as basis, \(\hat{H}_{k}\) can be written as a diagonal \(4\times 4\) matrix, whose eigen-energy is \(0,\varepsilon_{k}-\mu,\varepsilon_{k}-\mu,2(\varepsilon_{k}-\mu)+U\) and the corresponding eigen-state is \(|0\rangle_{k}\equiv|00\rangle,|\sigma=\uparrow\rangle_{k}\equiv|10\rangle,| \sigma=\downarrow\rangle_{k}\equiv|01\rangle,|\uparrow\uparrow\rangle_{k} \equiv|11\rangle\), which means states are empty, single occupation with spin-up and spin-down, and double occupation. Therefore, the many-body ground-state of \(\hat{H}\) is just the direct-product state of each \(\hat{H}_{k}\)'s ground-state, i.e. \(|\Psi_{g}\rangle=\prod_{k\in\Omega_{0}}|0\rangle_{k}\prod_{k\in\Omega_{1}}| \sigma\rangle_{k}\prod_{k\in\Omega_{2}}|\uparrow\uparrow\rangle_{k}.(\Omega_{0},\Omega_{1},\Omega_{2}\) are the momentum range for different occupation) If \(\Omega_{0}=\Omega_{2}=0\), each (momentum) state is only occupied by one electron, the system is a Mott insulator. Otherwise, we obtain a metallic state with non-Fermi liquid properties, e.g. the appearance of Luttinger surface, violation of Luttinger theorem,[7] non-Pauli spin susceptibility and exclusion statistics.[4; 53] Similarly, excited states and their energy are easy to be constructed, so \(\hat{H}\) (Eq. 1) has been solved since all eigen-states and eigen-energy are found. For our purpose, it is useful to present the single-particle Green's function and some ground-state or thermodynamic quantities for HK model. For example, the single-particle Green's function can be obtained in terms of equation of motion, which reads as (See Appendix.A) \[G_{\sigma}(k,\omega) = \frac{1+\frac{U(\hat{n}_{k\sigma})}{\omega-(\varepsilon_{k}-\mu+ U)}}{\omega-(\varepsilon_{k}-\mu)} \tag{4}\] \[= \frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-(\varepsilon _{k}-\mu)}+\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-(\varepsilon_ {k}-\mu+U)}\] where \(\langle\hat{n}_{k\bar{\sigma}}\rangle\) is the expectation value of electron number operator \(\hat{n}_{k\bar{\sigma}}=\hat{c}_{k\bar{\sigma}}^{\dagger}\hat{c}_{k\bar{ \sigma}}\) with spin \(\bar{\sigma}=-\sigma\). For paramagnetic solution, one finds \(\langle\hat{n}_{k\bar{\sigma}}\rangle=\frac{f_{F}(\varepsilon_{k}-\mu)}{f_{ F}(\varepsilon_{k}-\mu)+1-f_{F}(\varepsilon_{k}-\mu+U)}\) with \(f_{F}(x)=1/(e^{x/T}+1)\) being the Fermi distribution function. The pole of the above Green's function tells us that there exist two kinds of quasiparticle, i.e. holon \(\hat{h}_{k\sigma}^{\dagger}=\hat{c}_{k\sigma}(1-\hat{c}_{k\bar{\sigma}}^{ \dagger}\hat{c}_{k\bar{\sigma}})\) and doublon \(\hat{d}_{k\sigma}^{\dagger}=\hat{c}_{k\sigma}^{\dagger}\hat{c}_{k\bar{\sigma}} ^{\dagger}\hat{c}_{k\bar{\sigma}}\). 
To see their significance, one finds that \(\hat{h}_{k\uparrow}^{\dagger}|10\rangle=|00\rangle\), \(\hat{h}_{k\uparrow}^{\dagger}|00\rangle=\hat{h}_{k\uparrow}^{\dagger}|01 \rangle=\hat{h}_{k\uparrow}^{\dagger}|11\rangle=0\), which means \(\hat{h}_{k\uparrow}^{\dagger}\) creates the state with no electron, i.e. a hole state. Similarly, \(\hat{d}_{k\uparrow}^{\dagger}|01\rangle=|11\rangle\), \(\hat{d}_{k\uparrow}^{\dagger}|00\rangle=\hat{h}_{k\uparrow}^{\dagger}|10 \rangle=\hat{h}_{k\uparrow}^{\dagger}|11\rangle=0\) and it states that \(\hat{d}_{k\uparrow}^{\dagger}\) creates the state with two occupied electrons. The corresponding quasiparticle energy bands are \(\varepsilon_{k}-\mu,\varepsilon_{k}-\mu+U\). We should emphasize that they are not Landau quasiparticle since adiabatically continuity into non-interacting limit \(U=0\) does not work. Next, at finite-\(T\), the thermodynamics of HK model is determined by its free energy density \(f\), which is related to partition function \(\mathcal{Z}\) as \[f=-\frac{T}{N_{s}}\ln\mathcal{Z},\mathcal{Z}=\mathrm{Tr}e^{-\beta\hat{H}}=\prod _{k}\mathrm{Tr}e^{-\beta\hat{H}_{k}}=\prod_{k}f_{k} \tag{5}\] Here, one notes that the partition function is easy to calculate since each \(k\)-state contributes independently. We have defined \(f_{k}=1+2z_{k}+z_{k}^{2}e^{-\beta U}\) and \(z_{k}=e^{-\beta(\varepsilon_{k}-\mu)}\). At zero temperature, the free energy density reduces into the ground-state energy density, which has very simple expression, \[e_{g}=\frac{1}{N_{s}}\sum_{k}[(\varepsilon_{k}-\mu)\theta(\mu-\varepsilon_{k}) +(\varepsilon_{k}-\mu+U)\theta(\mu-\varepsilon_{k}-U)], \tag{6}\] where \(\theta(x)\) is the standard unit-step function (\(\theta(x)=1\) for \(x>0\) and \(\theta(x)=0\) if \(x<0\)). Therefore, the electron density at \(T=0\) is found to be \[n=\frac{1}{N_{s}}\sum_{k}[\theta(\mu-\varepsilon_{k})+\theta(\mu-\varepsilon_ {k}-U)]. \tag{7}\] which indicates two Fermi surfaces located at \(\varepsilon_{k}=\mu\) and \(\varepsilon_{k}=\mu-U\). ### Adding orbital magnetic field Because we are interested in possible QO of HK model, the orbital magnetic field should be included. This can be realized easily via Pierls substitution as \(t_{ij}\to t_{ij}e^{ia_{ij}}\), where phase factor \(a_{ij}=\frac{e}{\hbar}\int_{r_{j}}^{r_{i}}d\vec{r}\cdot\vec{A}(\vec{r}^{ \prime})\) and \(\vec{A}\) is the vector potential of external magnetic field. Then, we can write HK model as \(\hat{H}=\hat{H}_{0}+\hat{H}_{\mu}+\hat{H}_{U}\) \[\hat{H}_{0} = -\sum_{i,j,\sigma}t_{ij}e^{ia_{ij}}\hat{c}_{i\sigma}^{\dagger} \hat{c}_{j\sigma},\] \[\hat{H}_{\mu} = -\mu\sum_{j\sigma}\hat{c}_{j\sigma}^{\dagger}\hat{c}_{j\sigma},\] \[\hat{H}_{U} = \frac{U}{N_{s}}\sum_{j_{1},j_{2},j_{3},j_{4}}\delta_{j_{1}+j_{3}=j _{2}+j_{4}}\hat{c}_{j_{1}\uparrow}^{\dagger}\hat{c}_{j_{2}\uparrow}\hat{c}_{j_{ 3}\downarrow}^{\dagger}\hat{c}_{j_{4}\downarrow}. \tag{8}\] Frankly speaking, the above Hamiltonian is hard to solve since adding external magnetic field breaks the translation invariance, which leads to the violation of solvability of the original HK model (Eq. 1). Instead, following the original treatment of Luttinger,[30; 31] whose approximation is able to capture QO in interacting electron systems such as Fermi liquid, we can still use results of last section but only need to replace single-particle energy \(\varepsilon_{k}\) with eigen-energy of \(\hat{H}_{0}\) (denoted as \(\{E_{m}\}\)). 
In the language of many-body physics, such replacement is a statement that the external magnetic field only modifies the single-particle spectrum but not the expression of self-energy function. Therefore, the ground-state energy density under external magnetic field (\(\vec{B}=\nabla\times\vec{A}\)) has the new formalism, \[e_{g}=\frac{1}{N_{s}}\sum_{m}[(E_{m}-\mu)\theta(\mu-E_{m})+(E_{m}-\mu+U)\theta( \mu-E_{m}-U)], \tag{9}\] and the related electron density is \[n=\frac{1}{N_{s}}\sum_{m}[\theta(\mu-E_{m})+\theta(\mu-E_{m}-U)]. \tag{10}\] Thus, the magnetization and susceptibility in ground-state can be found by \(M=-\frac{\partial e_{g}}{\partial B},\chi_{s}=\frac{\partial M}{\partial B}\). For thermodynamics at finite \(T\), the only modification is \(f_{k}\to f_{m}=1+2z_{m}+z_{m}^{2}e^{-\beta U}\) with \(z_{m}=e^{-\beta(\varepsilon_{m}-\mu)}\) in Eq. 5. To be specific, let us consider the square lattice on \(xy\)-plane with nearest-neighbor-hopping (\(t_{ij}\to t\)) and the magnetic field is along the \(z\)-axis \(\vec{B}=B\vec{e}_{z}\). In Landau gauge, the corresponding vector potential is \(\vec{A}=Bx\vec{e}_{y}\), so \(a_{i,i+x}=0\) and \(a_{i,i+y}=\frac{e}{k}Ba^{2}i_{x}\), (\(a\) is the lattice constant and will be set to unit) which gives rise to \[\hat{H}_{0}=-t\sum_{i,\sigma}(\hat{e}_{i\sigma}^{\dagger}\hat{e}_{i+x,\sigma}+ e^{i\frac{e}{k}Bi_{z}}\hat{c}_{i\sigma}^{\dagger}\hat{c}_{i+y,\sigma}). \tag{11}\] and the phase factor part can be parameterized as \(e^{i\frac{e}{k}Bi_{z}}=e^{i\frac{e}{k}b_{z}}=e^{i2\pi wi_{x}}\) with flux \(\Phi=2\pi\frac{h}{e}w\) on each plaquette. In fact \(\hat{H}_{0}\) is just the famous Hofstadter model with spin degree of freedom and has been widely studied in recent years due to its realization in cold atom experiments and its relevance to moire superlattice systems.[54; 55; 56; 57; 58; 59; 60] After diagonalization of \(\hat{H}_{0}\), we obtain its eigen-energy \(\{E_{m}\}\) for given magnetic field \(B\), thus interesting quantities can be calculated straightforwardly. ## III Quantum Oscillation in Non-Fermi Liquid: The Case Study on Hatsugai-Kohmoto Model ### Lattice calculation In this section, we study the HK model under magnetic field on square lattice whose single-particle Hamiltonian is \(\hat{H}_{0}\) (Eq. 11). Firstly, for parameters \(t=e=\hbar=1\), the single-particle spectrum \(\{E_{m}\}\) has been shown in Fig. 2, in which we consider a \(60\times 60\) system with periodic boundary condition. It is clear to see the well-known butterfly spectrum of non-interacting Hamiltonian \(\hat{H}_{0}\) for \(E_{m}\) versus flux \(\Phi=2\pi w=B\).[54] #### iii.1.1 Density of state of electron Now, we turn to the interacting case with the full Hamiltonian \(\hat{H}=\hat{H}_{0}+\hat{H}_{\mu}+\hat{H}_{U}\). Since the concept of single-particle spectrum is meaningless in interacting case, we consider the density of state (DOS) of electron, whose expression is found to be \[N(\omega,\Phi) = \frac{1}{N_{s}}\sum_{m}[(1-n_{m})\delta(\omega-E_{m}+\mu) \tag{12}\] \[+ n_{m}\delta(\omega-E_{m}+\mu-U)],\] where \(n_{m}=(\theta(\mu-E_{m})+\theta(\mu-E_{m}-U))/2\). In Fig. 3, we have plotted \(N(\omega,\Phi)\) for \(U/t=0,4,8,12\) at half-filling (\(\mu=U/2\)). These interaction parameters are chosen to represent typical regimes in the ground-state phase diagram without magnetic field, i.e. non-interacting regime (\(U/t=0\)), NFL metal regime (\(U/t=4\)), NFL-Mott insulator transition point \(U/t=8\) and Mott insulating regime (\(U/t=12\)). 
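A minimal sketch of this lattice calculation is given below: it builds the Hofstadter Hamiltonian \(\hat{H}_{0}\) of Eq. (11) on a small square lattice (periodic along \(y\), open along \(x\)), diagonalizes it, and evaluates the Luttinger-approximated density of state of Eq. (12) at the Fermi energy. The lattice size, the Lorentzian broadening of the delta functions, and the flux values are illustrative choices, not those of the \(60\times 60\) computation reported here.

```python
import numpy as np

def hofstadter_spectrum(Lx, Ly, w, t=1.0):
    """Eigenvalues of H_0 (Eq. 11) with flux Phi = 2*pi*w per plaquette (Landau gauge)."""
    H = np.zeros((Lx * Ly, Lx * Ly), dtype=complex)
    idx = lambda x, y: x * Ly + y
    for x in range(Lx):
        for y in range(Ly):
            if x + 1 < Lx:                           # open boundary along x
                H[idx(x, y), idx(x + 1, y)] -= t
                H[idx(x + 1, y), idx(x, y)] -= t
            yp = (y + 1) % Ly                        # periodic boundary along y
            phase = np.exp(1j * 2 * np.pi * w * x)   # Peierls phase on y-bonds
            H[idx(x, y), idx(x, yp)] -= t * phase
            H[idx(x, yp), idx(x, y)] -= t * np.conj(phase)
    return np.linalg.eigvalsh(H)

def dos_at_fermi(E, U, mu, eta=0.05):
    """N(omega=0, Phi) of Eq. (12) with Lorentzian-broadened delta functions."""
    n_m = 0.5 * ((mu > E).astype(float) + (mu > E + U).astype(float))
    lorentz = lambda x: (eta / np.pi) / (x ** 2 + eta ** 2)
    return np.mean((1.0 - n_m) * lorentz(mu - E) + n_m * lorentz(mu - U - E))

U = 2.0
mu = U / 2                                           # half filling
for w in (0.02, 0.04, 0.06):
    E = hofstadter_spectrum(24, 24, w)
    print(f"Phi/2pi = {w:.2f}:  N(0) = {dos_at_fermi(E, U, mu):.4f}")
```

Scanning a denser grid of flux values with the same routines yields the \(N(0)\) traces analyzed below.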
For completeness, the electron's distribution function \(n(k)\) in momentum space for different interaction \(U\) at half-filling has also been shown in Fig. 4. One notes that at half-filling, the NFL phase always has two-Fermi-surface structure, which is embodied by the jumps of electron's occupation from 1 to 0.5 and from 0.5 to 0. It is interesting to find that in all cases we have studied, the Hofstadter butterfly exists in spite of the increasing of interaction. A noticeable feature of interaction is that it can split one butterfly in \(U/t=0\) into two identical pieces. If the HK interaction is larger than the bandwidth of non-interacting electron, (Fig. 3(d)) there exists sensible energy gap between two butterflies, which is the reminiscent of the lower and upper Hubbard bands without external magnetic field. We note that these features are similar to the findings in Falicov-Kimball model above the charge-density-wave transition, where numerically exact Monte Carlo simulation can be performed.[61] We all know that the conventional wisdom of QO relies on the oscillation behavior of density of state of electron at Fermi energy (\(N(0,\Phi)\equiv N(0)\)). If \(N(0)\) has sharp oscillation behavior versus magnetic field \(\Phi\), then physical observables like magnetization \(M\), specific heat \(C_{V}\) and Figure 3: The density of state of electron \(N(\omega,\Phi)\) for different interaction \(U\) at half-filling. resistivity \(\rho\) must show QO as well. (recall that in Fermi liquid or Fermi gas, we have \(M\sim N(0),C_{V}\sim N(0)T\)) Because Mott insulator has vanished \(N(0)\), QO is not expected in this insulating phase. Therefore, we focus on metallic NFL state and in Fig. 5, it is clear to see QO of \(N(0)\). (To have regular data on square lattice, we have used periodic boundary condition along \(y\)-direction while an open boundary condition is assumed for \(x\)-direction.) Performing (fast) Fourier transformation for \(N(0)\), one is able to extract the period (or frequency) of QO, which relates to the area of the underlying Fermi surface. For example, Fig. 5(a) tells us that the frequency for non-interacting case is about \(19.38\), which is quite near the exact Fermi surface area \(A_{F}=(2\pi)^{2}/2=19.74\). (see also Fig. 4(a)) Furthermore, Fig. 5(b) gives two Fermi surface areas \(A_{F1}=11.73\) and \(A_{F2}=23.50\), which should be contrast with results \(A_{F1}=11.99,A_{F2}=26.99\) from Fig. 4(b). Therefore, our analysis on oscillation of density of state indicates that the QO in NFL of HK model is rooted on the Fermi surface structure although the Landau quasiparticle disappears in this situation due to HK interaction. #### iii.2.2 Magnetization and particle density Since the de Haas-van Alphen effect is encoded by studying the magnetization \(M\) versus magnetic field, Fig. 6 has given examples on \(M\) for different \(U\) in NFL state. Obviously, QO in \(M\) is identified and as expected, it gives identical frequency of QO to \(N(0)\). This fact reflects that the thermodynamics is indeed contributed from quasiparticles but not any mysterious quantum excitations like Majorana fermions proposed in topological Kondo insulator.[62; 63] Furthermore, a close look at data implies that the oscillation in \(M\) is sharper and more regular than \(N(0)\). At the same time, using Eq. 10, we have plotted particle density \(n\) versus \(\Phi\) and \(1/\Phi\) for different interaction strength \(U\). 
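The Fermi-surface areas \(A_{F1}\) and \(A_{F2}\) quoted above can be cross-checked directly from the square-lattice dispersion and the two step functions of Eq. (7); a short sketch on a dense momentum grid (small differences from the quoted values are expected from the finite grids involved):

```python
import numpy as np

t, U = 1.0, 2.0
mu = U / 2                                        # half filling
k = np.linspace(-np.pi, np.pi, 1201, endpoint=False)
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))        # square-lattice dispersion

cell = (2 * np.pi) ** 2 / eps.size                # BZ area per grid point
A_F1 = np.count_nonzero(eps < mu - U) * cell      # inner surface, eps_k = mu - U
A_F2 = np.count_nonzero(eps < mu) * cell          # outer surface, eps_k = mu
print(f"A_F1 = {A_F1:.2f},  A_F2 = {A_F2:.2f}")
```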
Because of the particle-hole symmetry, \(n\) at half-filling (\(\mu=U/2\)) will not show any oscillation behavior, therefore, we fix \(\mu=0\) such that we are inspecting the system away from half-filling. It is not surprising that a clear signature of QO still exists in the particle density as shown in Fig. 7. Figure 4: The electron’s distribution function \(n(k)\) in momentum space for different interaction \(U\) at half-filling. Figure 5: The density of state of electron at Fermi energy \(N(0)\) for different interaction \(U\) at half-filling. (a)(b) Frequency of QO extracted from \(N(0)\) for \(U/t=0,2\). Figure 6: The magnetization \(M\) versus \(1/\Phi\) for different interaction \(U\). Figure 7: The particle density \(n\) versus \(\Phi\) (a) and \(1/\Phi\) (b) for different \(U/t\). ### Finite temperature effect After presenting the results at \(T=0\), here, we discuss the effect of temperature on QO. Specifically, we extract the amplitude of QO (\(R(T)\)) in NFL regime, which is a function of temperature. In Fig. 8, we have seen that the magnetization \(M\) changes for different temperature \(T\) and its amplitude decreases if \(T\) increases. (\(U/t=2,\mu=U/2\)) Moreover, a fit with \(ae^{-bT/t}\) works well if we try to fit the amplitude \(R(T)\), which seems to agree with the well-established LK formula, i.e. \(R_{LK}(T)\sim\frac{T}{\sinh T}\sim e^{-T}\). ### Continuum limit Although the direct lattice calculation provides the information of QO, in this subsection, we try to understand the origin of QO from the continuum limit. In continuum limit, one can approximate electron's dispersion \(\varepsilon_{k}\) with free particle energy \(\frac{k^{2}}{2m_{e}}\). Meanwhile, \(\varepsilon_{m}=\hbar\omega_{c}(m+1/2)\) if we consider the usual Landau gauge. (\(\omega_{c}=eB/m_{e}\)) Then, the electron's density reads \[n=\frac{1}{N_{\Lambda}}\sum_{k_{y}}\sum_{m=0}^{\Lambda}[\theta(\mu-\hbar \omega_{c}(m+1/2))+\theta(\mu-\hbar\omega_{c}(m+1/2)-U)]. \tag{13}\] Here, \(\Lambda\) is the cutoff of Landau level index, which acts as a band-width. The factor \(N_{\Lambda}\) has been introduced to give sensible definition of electron's density. The summation over \(k_{y}\) gives the degeneracy of Landau level (\(D\)). To proceed, we can rewrite \(n\) as follows, \[n =\frac{D}{N_{\Lambda}}\lim_{\beta\to\infty}\int_{0}^{\infty}d \omega\frac{\sum_{m}\delta(\omega-\varepsilon_{m})+\sum_{m}\delta(\omega- \varepsilon_{m}-U)}{e^{\beta(\omega-\mu)}+1}\] \[=\frac{D}{N_{\Lambda}}\lim_{\beta\to\infty}\int_{0}^{\infty}d \omega\frac{1}{e^{\beta(\omega-\mu)}+1}\] \[\times\mathrm{Re}\int_{0}^{\infty}d\tau\sum_{m}(e^{i(\omega- \varepsilon_{m})\tau}+e^{i(\omega-\varepsilon_{m}-U)\tau}). 
\tag{14}\] Then, using the Poisson summation formula \(\sum_{m}e^{-im\hbar\omega_{c}\tau}=\frac{2\pi}{\hbar\omega_{c}}\sum_{p=0,\pm 1,\pm 2\dots}\delta(\tau-\frac{2\pi p}{\hbar\omega_{c}})\), and integrating over \(\tau\), one finds \[n =\frac{2\pi D}{N_{\Lambda}\hbar\omega_{c}}\lim_{\beta\to\infty} \int_{0}^{\infty}d\omega\left(\frac{1}{e^{\beta(\omega-\mu)}+1}+\frac{1}{e^{ \beta(\omega-\mu+U)}+1}\right)\] \[\times\mathrm{Re}\sum_{p}e^{i\frac{2\pi p}{\hbar\omega_{c}}(- \frac{\hbar\omega_{c}}{2})}\] \[\propto\lim_{\beta\to\infty}\frac{2\pi}{\beta\hbar\omega_{c}} \mathrm{Re}\sum_{\omega_{n}>0}\sum_{p>0}(-1)^{p}\] \[\times\left(e^{i\frac{2\pi p}{\hbar\omega_{c}}(\mu+i\omega_{n})} +e^{i\frac{2\pi p}{\hbar\omega_{c}}(\mu-U+i\omega_{n})}\right)\] \[=\lim_{\beta\to\infty}\sum_{p>0}\frac{(-1)^{p}}{2\pi p}\frac{X}{ \sinh X}\left(\cos\frac{2\pi p\mu}{\hbar\omega_{c}}+\cos\frac{2\pi p(\mu-U)}{ \hbar\omega_{c}}\right). \tag{15}\] Here, \(X=\frac{2\pi^{2}pT}{\hbar\omega_{c}}\) and one defines \(R_{T}=\frac{X}{\sinh X}\) as the damping factor due to temperature. It is amusing to note that the above Eq. 15 is just the well-known LK formula with oscillation frequencies \(\mu/(\hbar\omega_{c})\) and \((\mu-U)/(\hbar\omega_{c})\). When \(T\to 0\) or \(\beta\to\infty\), we obtain \(n\propto\sum_{p>0}\frac{(-1)^{p}}{2\pi p}\left(\cos\frac{2\pi p\mu}{\hbar \omega_{c}}+\cos\frac{2\pi p(\mu-U)}{\hbar\omega_{c}}\right)\). It is interesting to see that although the system is in the NFL state and no Landau quasiparticle exists, the interaction does not lead to a Dingle-like factor \(e^{-\frac{U}{\hbar\omega_{c}}}\). This may be due to the coherent feature of the non-Landau quasiparticles, i.e. the doublon \(\hat{d}_{k\sigma}\) and holon \(\hat{h}_{k\sigma}\). Thus, a Boltzmann transport theory for those non-Landau quasiparticles could be constructed if one can generalize the work on exclusion statistics.[64] We expect this theory will be useful to understand the semi-classical transport of the HK model. Thus, it seems that under Luttinger's approximation, despite the NFL nature, QO exists in the HK model and it has two oscillation periods determined by the chemical potential \(\mu\) and \(\mu-U\). We note that the finding of two oscillation periods is consistent with the Fourier analysis in the previous subsection.

## IV Exact calculation based on exact diagonalization

Until now, our calculation has been based on Luttinger's approximation; however, to our knowledge, the validity of such an approximation for the HK model is not known. The situation may even be worse: the basis of Luttinger's approximation, namely the Luttinger-Ward functional, may be ill defined since its skeleton Feynman diagram expansion breaks down.[65] Instead, one may use (numerical) exact diagonalization to extract reliable information from our model (Eqs. 8 and 11).

Figure 8: (a) Magnetization \(M\) versus \(\Phi\) for different temperatures. (b) The amplitude of QO (\(R(T)\)) versus temperature \(T\). (\(U/t=2,\mu=U/2\))

Since the \(y\)-direction of our model has been chosen to have a periodic boundary condition under the Landau gauge, we can use a Fourier transformation to write \[\hat{c}_{j\sigma}=\frac{1}{\sqrt{L_{y}}}\sum_{k_{y}}e^{ik_{y}j_{y}}\hat{c}_{j _{x}\sigma}(k_{y}), \tag{16}\] where the \(x\)-direction remains intact.
Then, the Hamiltonian reads as \(\hat{H}=\sum_{k_{y}}\hat{H}(k_{y})\), \[\hat{H}(k_{y}) =-t\sum_{j_{x}\sigma}(\hat{c}_{j_{x}\sigma}^{\dagger}(k_{y})\hat{c }_{j_{x}+1\sigma}(k_{y})+\hat{c}_{j_{x}+1\sigma}^{\dagger}(k_{y})\hat{c}_{j_{ x}\sigma}(k_{y}))\] \[+\sum_{j_{x}\sigma}(-2t\cos(\Phi j_{x}+k_{y})-\mu)\,\hat{c}_{j_{x} \sigma}^{\dagger}(k_{y})\hat{c}_{j_{x}\sigma}(k_{y})\] \[+\frac{U}{L_{x}}\sum_{j_{1x},j_{2x},j_{3x},j_{4x}}\delta_{j_{1x}+ j_{3x}=j_{2x}+j_{4x}}\hat{c}_{j_{1x}\uparrow}^{\dagger}(k_{y})\hat{c}_{j_{2x} \uparrow}(k_{y})\] \[\times\hat{c}_{j_{3x}\downarrow}^{\dagger}(k_{y})\hat{c}_{j_{4x} \downarrow}(k_{y}). \tag{17}\] The above Hamiltonian means that for a given momentum \(k_{y}\), one can just study the reduced object \(\hat{H}(k_{y})\), which is an effective one-dimensional HK model with the site-dependent potential \(-2t\cos(\Phi j_{x}+k_{y})\). In principle, if one is interested in the exact thermodynamics or dynamics of the HK model, each Hamiltonian \(\hat{H}(k_{y})\) can be solved by exact diagonalization for up to \(L_{x}\) sites along the \(x\)-direction (i.e. \(j_{x}=1,2...L_{x}\)). Unfortunately, the largest system size that can be diagonalized is about \(L_{x}^{max}=16\), and the infinite-ranged HK interaction further reduces the attainable value of \(L_{x}^{max}\). In Fig. 9, we show the particle density \(n\) versus \(\Phi\) for \(L_{x}=6\) and \(L_{y}=16\). However, due to the limited size, particularly the smallness of \(L_{x}\), although certain oscillation behavior appears, it is hard to detect clear QO and to extract its oscillation period and amplitude. Therefore, a much larger \(L_{x}\) should be used to explore the true signature of QO in the HK model. In this direction, matrix-product-state or density-matrix-renormalization-group algorithms are the methods of choice,[66] but one has to be careful since the infinite-ranged HK interaction will introduce non-local long-ranged entanglement between sites, which may invalidate the assumption underlying these sophisticated tools (the area law of entanglement).

## V Discussions

### Relations to other works

Firstly, although Ref. [67] begins with the HK model, their calculation is performed only in the continuum limit, since the projection onto Landau levels can be carried out effectively for continuum models. In contrast, we focus on the lattice model and the lattice effects have been included, e.g. the Hofstadter butterfly spectrum in the non-Fermi liquid state, the quantum critical point and the Mott insulator. Secondly, Ref. [31] seems to be the only review that gives a detailed treatment of quantum oscillation with interaction. The main assumption of this review is that the self-energy has no strong momentum dependence, so one can neglect its magnetic field dependence when considering QO. Under this assumption, a generalized LK formula with an imaginary-frequency-dependent self-energy can be derived. However, we know that the self-energy of the HK model has explicit momentum dependence, e.g. at half-filling (\(\mu=U/2\)), \(\Sigma(k,\omega)=\frac{(U/2)^{3}}{\omega-\varepsilon_{k}}\).[7] Therefore, the theory of Ref. [31] cannot be used without non-trivial modification. Finally, the work of Ref. [68] seems to be an interesting application of the theory in Ref. [31]. However, due to the mentioned momentum dependence of the self-energy, in our opinion, those results are not related to our model unless one can include the effect of the momentum dependence.
### Toward a theory for quantum oscillation in strongly correlated systems

As discussed in the last subsection, the widely-used generalized LK theory may not be useful if the self-energy has a strong momentum dependence. Currently, we have no idea how to bypass this difficulty. As emphasized in Ref. [32], a system with a self-energy like \(\Sigma(k,\omega)\sim(\omega-v_{F}(k-k_{F}))^{\alpha}\) may be a good starting point. In addition, models solved by large-scale Monte Carlo simulation will provide new insights because NFL states, which violate Luttinger's theorem and exhibit strange metal behaviors, have been discovered.[69; 70] Frankly speaking, after 60 years our understanding of QO in interacting systems is still based on the work of Luttinger.[30] How to extend the framework of Luttinger to generic strongly correlated electron systems without Landau quasiparticles is an open question.[32; 46]

## VI Conclusion and future direction

In conclusion, we have taken the Hatsugai-Kohmoto model as an example to study the quantum oscillation behavior in a strongly correlated system. With Luttinger's approximation, it is found that although the non-Fermi liquid state has no Landau quasiparticle, the quantum oscillation indeed appears, and one can use a Lifshitz-Kosevich-like formula to extract its basic properties. As a byproduct, the Hofstadter butterfly exists in all ground-state phases, whether non-Fermi liquid or Mott insulator. We have also performed a small-size exact diagonalization calculation; its results exhibit certain oscillation behavior, but we cannot extract the exact values of the oscillation period and amplitude. Therefore, we expect that future work on this interesting issue is highly desirable, particularly large-scale numerical simulation.

Figure 9: The particle density \(n\) versus \(\Phi\) for \(U/t=0,0.5,1.0,1.5,2.0\) from exact diagonalization.

_Note added_: After completing this work, we noticed the paper of Leeb and Knolle,[67] whose results generally agree with ours, though their calculation is based on the projection onto Landau levels and works in the continuum limit.
## Appendix A Derivation of the single-particle Green's function

Following the treatment of the Hubbard model,[71] let us define the single-particle Green's function as \(G_{\sigma}(k,\omega)=\langle\langle\hat{c}_{k\sigma}|\hat{c}_{k\sigma}^{ \dagger}\rangle\rangle_{\omega}\), which is just the Fourier transformation of the retarded Green's function \[G_{\sigma}(k,t)=-i\theta(t)\langle[\hat{c}_{k\sigma}(t),\hat{c}_{k\sigma}^{ \dagger}]_{+}\rangle.\] Then, in terms of \[[\hat{c}_{k\sigma},\hat{H}]=(\varepsilon_{k}-\mu)\hat{c}_{k\sigma }+U\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\] \[[\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\hat{H}]=(\varepsilon_{ k}-\mu+U)\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}},\] we find \[\omega\langle\langle\hat{c}_{k\sigma}|\hat{c}_{k\sigma}^{\dagger}\rangle \rangle_{\omega}=1+(\varepsilon_{k}-\mu)\langle\langle\hat{c}_{k\sigma}|\hat{ c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}+U\langle\langle\hat{c}_{k\sigma}\hat{n}_{k \bar{\sigma}}|\hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}\] and \[\omega\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma }^{\dagger}\rangle\rangle_{\omega}=\langle\hat{n}_{k\bar{\sigma}}\rangle+( \varepsilon_{k}-\mu+U)\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}| \hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}.\] Since the above equations are closed, we obtain \[\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma}^{ \dagger}\rangle\rangle_{\omega}=\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{ \omega-\varepsilon_{k}+\mu-U}\] and \[G_{\sigma}(k,\omega) =\frac{1+\frac{U\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-( \varepsilon_{k}-\mu+U)}}{\omega-(\varepsilon_{k}-\mu)}\] \[=\frac{1-\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-( \varepsilon_{k}-\mu)}+\frac{\langle\hat{n}_{k\bar{\sigma}}\rangle}{\omega-( \varepsilon_{k}-\mu+U)},\] which is just Eq. 4 in the main text, as desired.
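As a quick consistency check of the algebra above, the closed pair of equations of motion can also be solved symbolically; the following is only a minimal sketch (using sympy, with \(\langle\hat{n}_{k\bar{\sigma}}\rangle\) abbreviated as \(n\) and \(\langle\langle\hat{c}_{k\sigma}\hat{n}_{k\bar{\sigma}}|\hat{c}_{k\sigma}^{\dagger}\rangle\rangle_{\omega}\) as Gamma).

```python
import sympy as sp

w, eps, mu, U, n = sp.symbols('omega varepsilon mu U n')
G, Gamma = sp.symbols('G Gamma')

# Closed equations of motion derived above:
#   omega*G     = 1 + (eps - mu)*G     + U*Gamma
#   omega*Gamma = n + (eps - mu + U)*Gamma
sol = sp.solve(
    [sp.Eq(w * G, 1 + (eps - mu) * G + U * Gamma),
     sp.Eq(w * Gamma, n + (eps - mu + U) * Gamma)],
    [G, Gamma])

# Two-pole form quoted in the last line of the appendix:
two_pole = (1 - n) / (w - (eps - mu)) + n / (w - (eps - mu + U))
print(sp.simplify(sol[G] - two_pole))   # prints 0: the two expressions agree
```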
2301.11538
Goal-Image Conditioned Dynamic Cable Manipulation through Bayesian Inference and Multi-Objective Black-Box Optimization
To perform dynamic cable manipulation to realize the configuration specified by a target image, we formulate dynamic cable manipulation as a stochastic forward model. Then, we propose a method to handle uncertainty by maximizing the expectation, which also considers estimation errors of the trained model. To avoid issues like multiple local minima and requirement of differentiability by gradient-based methods, we propose using a black-box optimization (BBO) to optimize joint angles to realize a goal image. Among BBO, we use the Tree-structured Parzen Estimator (TPE), a type of Bayesian optimization. By incorporating constraints into the TPE, the optimized joint angles are constrained within the range of motion. Since TPE is population-based, it is better able to detect multiple feasible configurations using the estimated inverse model. We evaluated image similarity between the target and cable images captured by executing the robot using optimal transport distance. The results show that the proposed method improves accuracy compared to conventional gradient-based approaches and methods that use deterministic models that do not consider uncertainty.
Kuniyuki Takahashi, Tadahiro Taniguchi
2023-01-27T05:20:46Z
http://arxiv.org/abs/2301.11538v1
Goal-Image Conditioned Dynamic Cable Manipulation through Bayesian Inference and Multi-Objective Black-Box Optimization

###### Abstract

To perform dynamic cable manipulation to realize the configuration specified by a target image, we formulate dynamic cable manipulation as a stochastic forward model. Then, we propose a method to handle uncertainty by maximizing the expectation, which also considers estimation errors of the trained model. To avoid issues like multiple local minima and requirement of differentiability by gradient-based methods, we propose using a black-box optimization (BBO) to optimize joint angles to realize a goal image. Among BBO, we use the Tree-structured Parzen Estimator (TPE), a type of Bayesian optimization. By incorporating constraints into the TPE, the optimized joint angles are constrained within the range of motion. Since TPE is population-based, it is better able to detect multiple feasible configurations using the estimated inverse model. We evaluated image similarity between the target and cable images captured by executing the robot using optimal transport distance. The results show that the proposed method improves accuracy compared to conventional gradient-based approaches and methods that use deterministic models that do not consider uncertainty.

Footnote 1: An accompanying video is available at the following link: [https://youtu.be/AMDJ7RNEbek](https://youtu.be/AMDJ7RNEbek)

## I Introduction

Robotic cable manipulation has been used in many fields, such as cable harnessing in factories and sewing operations in the medical field. However, cables are prone to deform into various shapes during manipulation. There have been many cable manipulation studies, which can be broadly classified along two research perspectives. First, we classify cable manipulation into physics-based modeling and learning-based approaches. Analytical physics-based modeling methods, such as mass-spring systems, position-based dynamics, and finite element methods, are used to model cables [1]. These are approximate, predetermined models and require accurate cable parameters. Some methods model an approximation of the cable by tracking the cable from sensor information [2]. Since these models make a number of assumptions, they sacrifice accuracy and may not be applicable to cables in every environment. Learning-based approaches can learn directly from data and are useful for flexible objects that are difficult to model. Therefore, they have the advantage of being applicable to arbitrary cables without the need to specify a cable model. Next, we classify cable manipulation by its use in static and dynamic tasks. Cable manipulation is typically treated as a quasi-static task rather than a static task. Quasi-static assumes that the motion is slow and there are no inertial effects. Although the use of deep learning has seen increased adoption in cable manipulation, most tasks assume quasi-static conditions [3, 4, 5, 6, 7, 8, 9, 10, 11]. For quasi-static tasks, a deterministic approach, in which all subsequent states are determined once the initial state is fixed, can cope by correcting the prediction error between steps as necessary. Correcting prediction errors is more challenging for dynamic motions because of their rapidity. Furthermore, there are two major sources of uncertainty when dealing with dynamic motion.
The first is that slight differences in friction conditions or delays in the robot's motion in response to commands can cause differences in the state of the cable (aleatory uncertainty). Second, because flexible objects such as cables have effectively infinite states and change shape in various ways, learning-based approaches require a large amount of data. However, collecting data that covers every possible outcome is infeasible, leading to possible errors when the model encounters unseen states (epistemic uncertainty). Some related studies dealing with dynamic motion have used additional learning by readjusting the simulation parameters when errors occur in the model [12]. Few methods deal with dynamic motion yet, and their approaches are deterministic [12, 13, 14, 15].

Fig. 1: (a) Robot setup used for the cable manipulation experiment and examples of RGB images, binarized images obtained by setting a threshold so that only the cable is extracted, and binarized images with the cable thickened by image processing, at step \(t\) and step \(t+1\), respectively. (b) Probabilistic graphical model representing the dependency between the joint angle \(J_{t}\) and the image \(I_{t}\).

The challenge is that errors between the model and reality can affect the prediction due to uncertainty. Therefore, the task we address in this research is dynamic cable motion under uncertainty. The task is to optimize the target joint angle \(J_{t+1}^{*}\) of a robotic arm with redundant degrees of freedom, to which a cable is attached, so as to realize the target image \(I_{target}\) from the current image \(I_{t}\) and joint angle \(J_{t}\) (Fig. 1). The straightforward approach is to directly predict the joint angle \(J_{t+1}\) from the current image \(I_{t}\), the joint angle \(J_{t}\), and the target image \(I_{target}\). Training a neural network for this is challenging because the redundant degrees of freedom admit infinitely many solutions. Another approach is to prepare a trained neural network that predicts \(I_{inferred}\) from the current image \(I_{t}\) and the joint angles \(J_{t}\) and \(J_{t+1}\), as in the method of [13] (Fig. 2 (a)). Then, when \(I_{t}\), \(J_{t}\), and \(I_{target}\) are given to the neural network, the optimal \(J_{t+1}^{*}\) is calculated using gradient-based methods with the mean squared error (MSE) between \(I_{target}\) and \(I_{inferred}\) (Fig. 2 (b)). However, there are several challenges in using gradient-based methods. The neural network model typically has to be differentiable to calculate the gradient. Moreover, the value to which gradient-based methods converge depends on the initial value, and complex neural networks are prone to local minima. Although the robot's range of motion is predetermined, gradient-based methods can push the solution outside that range unless extra effort is put into devising the cost function. Furthermore, in this task the inverse model that optimizes the joint angle \(J_{t+1}^{*}\) from \(I_{target}\) is expected to have numerous solutions. However, neural networks do not acquire a perfect forward model. Due to the nature of gradient-based methods, solutions are pulled toward wherever the imperfect forward model happens to give a smaller error, so only a few of the possible solutions can be reached. The challenges can be summarized by the following four points: 1) Due to uncertainty, a deterministic approach in a task with dynamic motion causes a gap between the prediction and the robot motion.
2) The neural network model has to be differentiable to calculate the gradient, and the gradient approach falls into local minima. 3) It is time-consuming to apply arbitrary constraints with gradient-based optimization without affecting its performance or correctness. 4) There are numerous solutions for inverse kinematics (IK). Our approach to these four challenges and the contributions of this paper are as follows. 1) We formulate the task of cable manipulation as a stochastic forward model. We then propose a method to address the uncertainty that maximizes the expected value, considering estimation errors. 2) The method introduces black-box optimization (BBO), a non-differentiable optimization, to optimize \(J_{t+1}^{*}\) without the need for gradients. Among BBO, we use the Tree-structured Parzen Estimator (TPE) [16, 17], a type of Bayesian optimization. 3) TPE also easily incorporates arbitrary constraints on the range of solutions to optimize within the range of motion. 4) Since TPE is population-based, it can better detect multiple feasible configurations using the estimated inverse model. Note that if conditions 2) to 4) of the challenges are satisfied, then anything other than a TPE can be used as the BBO.

## II Method

Given a current joint angle \(J_{t}\) and image \(I_{t}\), and a target image \(I_{target}\), our proposed method optimizes the target joint angle \(J_{t+1}^{*}\) by considering uncertainty to realize the target image \(I_{target}\) with high accuracy. The proposed method consists of training and inference phases.

### _Training_

The training phase is supervised learning in which the system is trained to predict the image \(I_{t+1}\) when the image \(I_{t}\), joint angle \(J_{t}\), and joint angle \(J_{t+1}\) are given as inputs. In other words, this is equivalent to learning the forward dynamics of the cable. Considered as a graphical model, this can be shown as Fig. 1 (b). The neural network's architecture based on this graphical model is shown in Fig. 2 (a). The formulation of the training of this graphical model is described below.

Fig. 2: Proposed method and details of the network model.

A dataset of \(n\) samples \(\mathbf{x}=\{I_{t}^{i},~{}I_{t+1}^{i},~{}J_{t}^{i},~{}J_{t+1}^{i}\}_{i=1}^{n}\) comprises the images \(I_{t}\) & \(I_{t+1}\) and joint angles \(J_{t}\) & \(J_{t+1}\) at steps \(t\) and \(t+1\), respectively. Let \(P_{\theta}(x)\) estimate the true probability \(P_{data}(x)\) of any \(x\). Then the maximum likelihood estimation of \(\theta\) is defined by the following equation: \[\underset{\theta}{\arg\max}\ \log\prod\nolimits_{i}P_{\theta}(I_{t+1}^{i}|I_{t}^{i},J_{t}^{i},J_{t+1}^{i}) \tag{1}\] \[= \underset{\theta}{\arg\max}\ \sum\limits_{i}\log P_{\theta}(I_{t+1}^{i}|I_{t}^{i},J_{t}^{i},J_{t+1}^{i})\] \[\propto \underset{\theta}{\arg\min}\ \frac{1}{2n}\sum\limits_{i}(I_{t+1}^{i}-f_{\theta}(I_{t}^{i},J_{t}^{i},J_{t+1}^{i}))^{2}\] \[= \underset{\theta}{\arg\min}\ \frac{1}{2n}\sum\limits_{i}(I_{t+1}^{i}-I_{inferred}^{i})^{2}\] where \(f_{\theta}(\cdot)\) is the function that predicts \(I_{inferred}\) from \(I_{t}^{i},J_{t}^{i},J_{t+1}^{i}\).

### _Inference_

In the inference phase, the neural network model trained in the previous section is given a current image \(I_{t}\) and joint angle \(J_{t}\), and a target image \(I_{target}\), and the joint angle \(J_{t+1}^{*}\) is optimized so that the image \(I_{inferred}\) predicted by the neural network is close to the target image \(I_{target}\).
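For concreteness, a minimal sketch of the forward model \(f_{\theta}(I_{t},J_{t},J_{t+1})\rightarrow I_{inferred}\) used in both phases, together with the MSE training step corresponding to eq. (1), is given below. The layer sizes and the way the joint angles are fused with the image features are illustrative assumptions, not the exact architecture of Fig. 2.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next cable image from (I_t, J_t, J_{t+1}); sizes are illustrative."""
    def __init__(self, img_size=64, n_joints=3):
        super().__init__()
        self.encoder = nn.Sequential(                       # CNN image encoder
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        feat = 32 * (img_size // 4) ** 2
        self.fuse = nn.Sequential(                          # fuse image features with J_t, J_{t+1}
            nn.Linear(feat + 2 * n_joints, 512), nn.ReLU(),
            nn.Linear(512, feat), nn.ReLU())
        self.decoder = nn.Sequential(                       # transposed-CNN image decoder
            nn.Unflatten(1, (32, img_size // 4, img_size // 4)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, img_t, j_t, j_t1):
        z = self.encoder(img_t)
        z = self.fuse(torch.cat([z, j_t, j_t1], dim=1))
        return self.decoder(z)

model = ForwardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(img_t, j_t, j_t1, img_t1):
    """One gradient step on the MSE loss of eq. (1)."""
    pred = model(img_t, j_t, j_t1)                          # I_inferred
    loss = ((img_t1 - pred) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```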
The graphical model used in inference is the flow shown by the red arrow in Fig. 1 (b). A schematic is shown in Fig. 2 (c). We formulate this below. For readability, \(I_{t}^{i}\), \(J_{t}^{i}\), and \(I_{t+1}^{i}\) are written as \(I_{t}\), \(J_{t}\), and \(I_{t+1}\). Given a current image \(I_{t}\) and joint angle \(J_{t}\), and a target image \(I_{target}\), the goal is to optimize the joint angle \(J_{t+1}^{*}\) to realize \(I_{target}\); the joint distribution can be written as follows. \[\log P(I_{t},J_{t},I_{t+1},J_{t+1}^{*}) \tag{2}\] \[\simeq \log\left\{P_{\theta}(I_{t+1}|I_{t},J_{t},J_{t+1}^{*})P_{\theta} (J_{t+1}^{*})P_{\theta}(I_{t},J_{t})\right\}\] \[= \log\left\{P_{\theta}(I_{t+1}|I_{t},J_{t},J_{t+1}^{*})P_{\theta} (J_{t+1}^{*})\right\}+const.\] Optimizing this log-likelihood directly, however, is vulnerable to uncertainty. We therefore maximize the expectation of the likelihood for this equation, considering the estimation error, rather than the log-likelihood itself. This study applies a stochastic approach, as described in Section I. The joint angle \(J_{t+1}\) is reparameterized by \(\mu+\sigma^{2}\epsilon\), where \(\epsilon\sim N(0,I)\), \(\mu\) is the average joint angle, and \(\sigma^{2}\) is the variance, introduced as a parameter for handling uncertainty. \[\log\int P_{\theta}(I_{t+1}|I_{t},J_{t},\mu+\sigma^{2}\epsilon)P_ {\epsilon\sim N(0,I)}(\mu+\sigma^{2}\epsilon)d\epsilon \tag{3}\] \[\geq \int P_{\epsilon\sim N(0,I)}(\mu+\sigma^{2}\epsilon)\log P_{ \theta}(I_{t+1}|I_{t},J_{t},\mu+\sigma^{2}\epsilon)d\epsilon\] \[(\because\ Jensen^{\prime}s\ inequality)\] \[= \mathbb{E}_{P_{\epsilon\sim N(0,I)}}[\log P_{\theta}(I_{t+1}|I_{t },J_{t},\mu+\sigma^{2}\epsilon)]\] \[\simeq \frac{1}{2m}\sum\limits_{i}^{m}\left(I_{t+1}-f_{\theta}(I_{t},J_{ t},\mu+\sigma^{2}\epsilon_{i})\right)^{2}+const.\] \[:= g_{1}(\mu,\sigma)\] where \(m\) is the number of samples. Although the goal is to minimize \(g_{1}\), \(\sigma^{2}\) is introduced to address the uncertainty; the smaller this value is, the more the effect of uncertainty is treated as negligible. Therefore, the approach becomes deterministic when the variance \(\sigma^{2}\) is set to 0. Under the given conditions of \(I_{t}\) and \(J_{t}\), the smaller the change in \(I_{inferred}\) for a significant change in \(J_{t+1}^{*}\), the more robust the solution is against uncertainty. Therefore, we compute a multi-objective optimization of the following two functions. \[\underset{\mu\in[-90,\ 90]}{\arg\min}\ g_{1}(\mu,\sigma),\quad\text{where}\quad\underset{\sigma\in[0.0,\ 1.0]}{\arg\max}\ \sigma \tag{4}\] Note that the ranges of the joint angles and variance can be freely changed depending on the task and the experiment. Multi-objective black-box optimization (MOBBO) is used for this multi-objective optimization. Since it is an optimization of two cost functions, it has a Pareto front.

## III Experimental Setup

The task setting of this research is to manipulate a cable with a robotic arm. Given the current joint angle of the robot \(J_{t}\) and an image of the cable \(I_{t}\), and a target image of the cable \(I_{target}\), the task is to optimize the joint angle \(J_{t+1}^{*}\) to realize the target image \(I_{target}\).

### _Robot Setup for Data Collection System_

Our robotic system, shown in Fig. 1, consists of a self-designed 3-DoF robotic arm with a cable attached to the robot arm's end-effector. Three Dynamixel XM430-W350-R actuators are used for the robot.
Furthermore, we use an Intel RealSense Depth Camera D415 that overlooks the workspace of the robot arm and cable to capture RGB images. Note that no depth information is retrieved in this study. The robot and RealSense are connected to a 2021 MacBook Pro running macOS Monterey. The camera is mounted at a height of 650 mm above the workspace. The workspace size for the cable manipulation is 785 mm\(\times\)515 mm. The length of the robot is 220 mm, and the length of the cable is 250 mm.

### _Data Collection Process_

We built a data collection system to collect data on cable manipulation. The data collection process is as follows. First, an image of the cable in the workspace is captured using the RealSense installed on the ceiling. At the same time, the current joint angles of the robot are recorded. Next, the target joint angle for the next step is randomly selected in the range \([-90\ 90]\). Note that this selection is repeated until the end-effector position is within the workspace; the robot's end-effector position is calculated from the selected target joint angles. Then, the robot executes the motion using the selected target joint angles. Since the robot moves quickly, the cable moves with inertia. However, since the image and the robot's joint angles are recorded at 1 \(Hz\), they are recorded after the robot and cable motion has completely stopped. The discrepancy between the selected target joint angles and the actual motion is negligible. This data collection process was performed for 12001 seconds, and 12001 images and corresponding joint angles of the robot were collected. From this data, 12000 samples were created by pairing the current joint angle \(J_{t}\) and image \(I_{t}\) with the following joint angle \(J_{t+1}\) and image \(I_{t+1}\). The joint angles were normalized to \([0\ 1]\). The images were binarized by setting a threshold so that only the cable was extracted from the image. Then, because the cable in this image was too thin to train the neural network well, image processing was performed to thicken the cable using the cv2.dilate() function with kernel size (3, 3). The original RGB images at times \(t\) and \(t+1\), the binarized images, and the images processed to make the cable thicker are shown in Fig. 1. Of the 12,000 samples collected, 10,000 were used for training, and 2,000 were used for validation. For inference, the robot's current joint angle \(J_{t}\), the cable's image \(I_{t}\), and the target cable image \(I_{target}\) are given to the trained neural network, and the joint angle \(J_{t+1}^{*}\) that realizes the target image \(I_{target}\) is optimized. This optimized joint angle \(J_{t+1}^{*}\) is given as the command value for the robot's target joint angle, and the robot moves to that joint angle.

### _Deep Learning & Training & Inference_

Our neural network model comprises CNN and FC layers, and details of the neural network architecture are shown in Fig. 2. We used PyTorch as the deep learning library and Optuna [18] as the BBO library for implementation. We trained the neural networks and conducted inference on a machine equipped with an Intel Xeon Gold 6524 CPU @ 3.10 \(GHz\) and a Tesla V100 SXM2 with 16GB. Training the model takes about 5 minutes.

## IV Experiment Results

In this experiment, we evaluate the following four items. 1) Comparison between gradient-based and BBO optimization approaches (Section IV-A).
2) Comparison of gradient-based methods and BBO when the robot has numerous solutions due to redundant degrees of freedom (Section IV-B). 3) Evaluation of the proposed method, MOBBO (Section IV-C). 4) Evaluation of the performance of the proposed method when the cable manipulation is executed by the robot (Section IV-D).

### _Comparison between Gradient-based Methods and BBO_

This section compares the gradient-based and BBO methods. For the optimization using gradient-based methods, a randomly set initial value was given as the joint angle \(J_{t+1}^{init}\) in the manner described in Section I. For the method using BBO, the variance \(\sigma^{2}\) of the proposed method was set to 0 in eq. (4), which means the model is treated deterministically rather than stochastically. Therefore, the cost functions of the gradient-based methods and BBO have the same design; only the optimization methods differ, which allows the differences between the two optimization methods to be evaluated. Examples of pairs of inferred images \(I_{inferred}\) using gradient-based methods, the target image \(I_{target}\), and images inferred with randomly set initial values \(J_{t+1}^{init}\) are shown in Fig. 3. As shown in the 1st row of Fig. 3, the target image \(I_{target}\) and the image \(I_{inferred}\) inferred with the joint angle \(J_{t+1}^{*}\) optimized to minimize the difference between \(I_{target}\) and \(I_{inferred}\) can be entirely different. From the figures in the 2nd row, it can also be seen that the result depends on the initial values \(J_{t+1}^{init}\). The failure of gradient-based methods can be explained as follows. When computing the MSE between the target image \(I_{target}\) and the image \(I_{inferred}\) inferred with \(J_{t+1}^{init}\), either at the beginning of the joint-angle optimization or during it, if the pixels where the cable exists do not overlap, the error value is the same whether the cable is misaligned by 1 pixel or 100 pixels. If the positions of the cable in the inferred image \(I_{inferred}\) and the target image \(I_{target}\) are far apart, a slight change in joint angle will not change the MSE value, and the gradient will vanish (compare the 1st and 2nd rows in Fig. 3). To address this challenge, [13] transformed the target image \(I_{target}\) into a distance image, instead of a binary 0/1 image representing whether the cable is present or absent, to avoid the vanishing gradient. The results of gradient-based methods on an image converted to a distance image by the cv2.distanceTransform() function are shown in the 3rd and 4th rows of Fig. 3. There are cases where it works and cases where it fails, and the outcome depends on the initial values of the joint angles \(J_{t+1}^{init}\). It may be that the latent representation is too complex and prone to local minima. Next, the results for the case using BBO are shown in the bottom row of Fig. 3. Even in cases where gradient-based methods failed, the method using BBO successfully inferred an image \(I_{inferred}\) that was close to the target image \(I_{target}\). Since the initial TPE (BBO) sampling stage explores a broader parameter space, the possibility of overlap between the image inferred from the sampled joint angles and the pixels in the target image \(I_{target}\) increases. As a result, even when gradient-based methods fail, BBO can optimize joint angles that produce an inferred image \(I_{inferred}\) closer to the target image \(I_{target}\).
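A minimal sketch of the two optimizers compared in this subsection is given below: gradient descent on \(J_{t+1}\) through the frozen forward model, versus a single-objective TPE search with Optuna (i.e. the BBO baseline with \(\sigma^{2}=0\)). The learning rate and iteration counts are illustrative, and `model` is assumed to be a trained forward model like the one sketched in Section II.

```python
import torch
import optuna

def mse_loss(j_t1, img_t, j_t, img_target):
    """MSE between the target image and the image inferred for candidate joints J_{t+1}."""
    return ((img_target - model(img_t, j_t, j_t1)) ** 2).mean()

def optimize_gradient(img_t, j_t, img_target, steps=500, lr=1e-2):
    """Gradient-based baseline: refine J_{t+1} from a random initial value by backprop."""
    j_t1 = torch.rand(1, 3, requires_grad=True)              # random J_{t+1}^{init} in [0, 1]
    opt = torch.optim.Adam([j_t1], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mse_loss(j_t1, img_t, j_t, img_target).backward()
        opt.step()
    return j_t1.detach().clamp(0.0, 1.0)

def optimize_bbo(img_t, j_t, img_target, n_trials=1000):
    """BBO baseline: TPE search over J_{t+1}, constrained to the normalized range of motion."""
    def objective(trial):
        j = torch.tensor([[trial.suggest_float(f"j{i}", 0.0, 1.0) for i in range(3)]])
        with torch.no_grad():
            return mse_loss(j, img_t, j_t, img_target).item()
    study = optuna.create_study(direction="minimize",
                                sampler=optuna.samplers.TPESampler())
    study.optimize(objective, n_trials=n_trials)
    return torch.tensor([[study.best_params[f"j{i}"] for i in range(3)]])
```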
### _Multiple plots of Gradient-Based methods and BBO_

In the task considered in this paper, the 3-DoF robotic arm executes its motion in a two-dimensional plane. Therefore, there are numerous solutions when solving the inverse kinematics (IK) for given end-effector coordinates (x, y). We examine how gradient-based methods and BBO behave in such a task.

Fig. 3: Comparison of inferred images \(I_{inferred}\) with the optimized joint angle \(J_{t+1}\) using gradient-based methods, the distance transform, and BBO. Note that because of the uncertainty in the cable's behavior, the inferred image \(I_{inferred}\) of the cable is blurry.

For the gradient-based methods, joint-angle optimization with randomly given initial values \(J_{t+1}^{init}\) was performed 200 times with the same pair of current joint angle \(J_{t}\), image \(I_{t}\), and target image \(I_{target}\). In BBO as well, the sampled values of the joint angle to be optimized start from random values. Approximate IK solutions were also calculated from the end-effector coordinates obtained by forward kinematics using the joint angles recorded when the target image \(I_{target}\) data was collected. To calculate the approximate IK solutions, joint angles whose end-effector coordinates lie within \(\pm\delta\) of the target coordinates were found by grid search. The optimized joint angles \(J_{t+1}^{*}\) for several pairs of different joint angles \(J_{t}\), images \(I_{t}\), and target images \(I_{target}\) are plotted in Fig. 4, using the same target image \(I_{target}\) but different joint angles \(J_{t}\) and images \(I_{t}\). The black plots are the approximate IK solutions, the blue plots are those calculated by BBO, and the red plots are those calculated using gradient-based methods. In this experimental setup, the IK solution basically forms an elliptical trajectory. However, due to the limitation of each joint's motion range, \([-90~{}90]\), only a portion of the elliptical trajectory may be plotted. In addition, the solutions may appear as isolated points due to the robot's posture. It can be seen that the optimized joint angles \(J_{t+1}^{*}\) are distributed over a region when BBO is used (Fig. 4). IK uses only the robot's end-effector coordinates at step \(t\), so its solutions differ from those of the BBO method, which calculates the optimized joint angle \(J_{t+1}^{*}\) from the current image \(I_{t}\), joint angle \(J_{t}\), and target image \(I_{target}\); nevertheless, the BBO method also yields numerous solutions. In addition, when the target image \(I_{target}\) is the same but the joint angles \(J_{t}\) and images \(I_{t}\) differ, the optimized joint angles \(J_{t+1}^{*}\) change when BBO is used. This indicates that the neural network acquires a forward model of cable manipulation that takes the dynamics into account, and that this forward model can be inverted using BBO. On the other hand, with gradient-based methods, the optimized joint angles \(J_{t+1}^{*}\) converge within a narrow range at several locations (the red plots in Fig. 4). Theoretically, numerous optimized joint angles \(J_{t+1}^{*}\) should be plotted along trajectories as with BBO. The neural network does not acquire a perfect forward model; thus, slight differences exist between values that could theoretically be the same. Due to the characteristics of gradient-based methods, the solution is pulled to wherever the imperfect forward model gives a smaller error, so the optimization converges to a point, as shown in Fig. 4.
The points to which the optimization converges depend on the initial values of the joint angle \(J_{t+1}^{init}\). Some runs fall into local minima, resulting in target images \(I_{target}\) that are entirely different from the images \(I_{inferred}\) inferred using the joint angles \(J_{t+1}^{*}\) optimized by gradient-based methods. Note that when numerous solutions exist, using an ensemble method to reduce the bias and variance of the neural network and averaging the optimal solutions output by each model does not necessarily give good results. In other words, using a neural network does not always mean that gradient-based methods are a good choice. The proposed method is expected to decrease the model's bias and variance, which will be validated in Section IV-D.

### _Multi-Objective Black-Box Optimization_

In the proposed method, multi-objective black-box optimization (MOBBO) is performed with the joint angle \(J_{t+1}\) and variance \(\sigma^{2}\) as optimization parameters, as shown in eq. (4). A plot of the Pareto solutions of the MOBBO for a given current image \(I_{t}\), joint angle \(J_{t}\), and target image \(I_{target}\) is shown in Fig. 5. In this MOBBO, the number of trials is 1000; each joint angle is searched over the integer range \([-80~{}80]\) and the variance over the range \([0~{}0.5]\) in eq. (4). The number of samples \(m\) in eq. (3) is 200. The Pareto front is shown in the red plot. The \(I_{inferred}\) shown in Fig. 5 is calculated with the mean \(\mu\) of the reparameterized joint angle \(\mu+\sigma^{2}\epsilon~{}(=J_{t+1})\). As the variance \(\sigma^{2}\) increases, the reconstruction loss in eq. (3) increases because the difference from the target image \(I_{target}\) becomes more significant in the calculation. However, when the variance \(\sigma^{2}\) is 0, any uncertainty in the trained neural network or robot environment means there is a chance that the image \(I_{robot}\) obtained when the robot moves with the optimized joint angle \(J_{t+1}^{*}\) will not be close to the target image \(I_{target}\). Therefore, selecting an appropriate variance \(\sigma^{2}\) for the model from the Pareto front becomes necessary. In this study, the solution is chosen within the variance range \([0.03~{}0.05]\), and the robot is tested using the optimized joint angle \(J_{t+1}^{*}\) with the lowest reconstruction loss (see Section IV-D).

Fig. 4: Plots of the approximate IK solutions and the optimized joint angles \(J_{t+1}^{*}\) using the gradient method and BBO when the target images \(I_{target}\) are the same but the image \(I_{t}\) and joint angle \(J_{t}\) differ.

Fig. 5: Plots of the Pareto solutions with multi-objective black-box optimization.

### _Robot Experiment_

This section shows the evaluation results of cable manipulation executed by the robot. The following three methods
If MSE is used as an evaluation method, a 1-pixel and 100-pixel shift are considered the same error, making it difficult to evaluate the result appropriately. Therefore, the optimal transport distance, Earth Mover's Distance (EMD), is used. EMD is also known as the Wasserstein metric. Given image A and image B, it is the cost of minimizing the product of the pixel shift distance and the shift amount to make image A resemble image B. Note that since EMD is computationally expensive, it could not be used as a substitute for the MSE, which is used for training errors, inference, and MOBBO costs in this study. This EMD is used to compare three images. (1) the target image \(I_{target}\) and the inferred image \(I_{inferred}\), (2) target image \(I_{target}\) and the image of the cable obtained when the robot is moved \(I_{robot}\) using the optimized joint angles \(J_{t+1}^{*}\), and (3) \(I_{robot}\) and \(I_{inferred}\). In (1), it can be evaluated whether realizing the target image \(I_{target}\) is difficult with a one-step motion or whether the optimal joint angle using gradient-based methods falls into the local minima. (2) evaluates how close the target image \(I_{target}\) is to the image \(I_{robot}\) captured by the robot's motion with the optimized joint angles \(J_{t+1}^{*}\), which allows the method's effectiveness to be assessed. In (3), the accuracy of the neural network's predictions can be evaluated by assessing how close the inferred image \(I_{inferred}\) is to the image \(I_{robot}\) obtained from the actual robot's motion. The results of the evaluation with these values are shown in Table I. For each method, 150 trials of motion were performed to evaluate the results. The mean and variance of the cost of EMD are shown. When the images \(I_{robot}\) captured when the robot executed the motion deviated from the target image \(I_{target}\) significantly, the value of the cost of EMD sometimes became huge, resulting in a large mean and variance. Therefore, we also calculated the percentages of EMD cost values below 125 and 175, respectively. We calculated the mean and standard deviation of success rates by bootstrapping. The EMD cost values less than 125 to 175 are similar to the target image \(I_{target}\) and the images \(I_{robot}\) taken by the robot as it moves the cable (Fig. 6). Although the EMD does not take cable topology into account, the results of this experiment suggest that if the EMD cost value is around 125-175, the cable misalignment is slight, and topology is less of an issue. When gradient-based methods are used, the average cost of EMD between Target-Inferred is much larger than the other two. This indicates that gradient-based methods fall into the local minima, as mentioned in Section IV-A. Next, we compare the proposed method with the BBO (deterministic approach) case between Robot-Inferred. The proposed method has a smaller value of the average EMD cost and larger values of w/n 125 and w/n 175 compared to BBO. In other words, if the optimal joint angles \(J_{t+1}^{*}\) using the deterministic approach (BBO) are selected and the robot moves, the trained neural network has uncertainties, leading to misalignments. In contrast, the proposed method considers these uncertainties to increase the probability of achieving an image \(I_{robot}\) close to the target image \(I_{target}\). As a result, the proposed method performs better than the other methods when the robot performs motions. This shows the effectiveness of the proposed method. 
## V Conclusion

This paper formulated dynamic cable manipulation as a stochastic forward model to realize a target image. We proposed a method to handle uncertainties that maximizes the expectation and considers estimation errors. This method avoids the risk of falling into local solutions by applying BBO. Moreover, by incorporating constraints into the BBO, namely TPE, we could obtain solutions within the range of motion. Furthermore, since TPE is population-based, we were able to confirm that it can handle numerous solutions. We used EMD to evaluate the similarity of the cable's image \(I_{robot}\) to the target image \(I_{target}\) when the robot moves. The results confirmed the improved accuracy of the proposed method compared to conventional methods, such as gradient-based methods and the deterministic approach that does not consider uncertainty.

## Acknowledgment

The authors would like to thank Dr. Shin-ichi Maeda, Avinash Ummadisingu, and Dr. Naoki Fukaya for the many discussions about this research. This work was supported by JST [Moonshot R&D][Grant Number JPMJMS2033].

Table I: Mean \(\pm\) std of the EMD cost for each method, and the percentage of trials with an EMD cost within 125 and within 175.

\begin{table}
\begin{tabular}{c|c||c|c c}
\hline
EMD between & Methods & EMD \(mean\pm std\) & w/n 125 & w/n 175 \\
\hline \hline
Target- & Gradient & \(2013.1\pm 2842.9\) & \(20.0\pm 4.6\) & \(30.0\pm 5.3\) \\
Inferred & BBO & \(528.8\pm 1557.5\) & \(38.4\pm 5.6\) & \(52.7\pm 5.7\) \\
 & Ours & \(\mathbf{452.5\pm 5339.9}\) & \(\mathbf{46.7\pm 5.8}\) & \(\mathbf{58.7\pm 5.7}\) \\
\hline
Target- & Gradient & \(2509.7\pm 216.4\) & \(17.3\pm 4.3\) & \(20.6\pm 4.6\) \\
Robot & BBO & \(72.3\pm 1786.7\) & \(28.0\pm 5.2\) & \(36.7\pm 5.5\) \\
 & Ours & \(\mathbf{506.5\pm 531.6}\) & \(\mathbf{30.0\pm 5.3}\) & \(\mathbf{42.0\pm 5.7}\) \\
\hline
Robot- & Gradient & \(429.7\pm 415.1\) & \(26.0\pm 5.1\) & \(40.7\pm 5.7\) \\
Inferred & BBO & \(334.8\pm 402.3\) & \(43.4\pm 5.7\) & \(54.0\pm 5.8\) \\
 & Ours & \(\mathbf{241.1\pm 343.1}\) & \(\mathbf{53.3\pm 5.8}\) & \(\mathbf{64.7\pm 5.5}\) \\
\hline
\end{tabular}
\end{table}

Fig. 6: Examples of inferred images and of images captured by the robot.
2303.11341
What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework's primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners' models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. '21].
Yonadav Shavit
2023-03-20T13:50:05Z
http://arxiv.org/abs/2303.11341v2
# What does it take to catch a Chinchilla?

###### Abstract

As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework's primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners' models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. '21].

## 1 Introduction

Many of the remarkable advances of the past 5 years in deep learning have been driven by a continuous increase in the quantity of _training compute_ used to develop cutting-edge models [25, 21, 54]. Such large-scale training has been made possible through the concurrent use of hundreds or thousands of specialized accelerators with high inter-chip communication bandwidth (such as Google TPUs, NVIDIA A100 and H100 GPUs, or AMD MI250 GPUs), employed for a span of weeks or months to compute thousands or millions of gradient updates. We refer to these specialized accelerators as _ML chips_, which we distinguish from consumer-oriented GPUs with lower interconnect bandwidth (e.g., the NVIDIA RTX 4090, used in gaming computers). This compute scaling trend has yielded models with ever more useful capabilities. However, these advanced capabilities also bring with them greater dangers from misuse [7]. For instance, it is increasingly plausible that criminals may soon be able to leverage heavily-trained code-generation-and-execution models to autonomously identify and exploit cyber-vulnerabilities, enabling ransomware attacks on an unprecedented scale. 1 Even absent malicious intent, rival companies or countries trapped in an AI "race dynamic" may face substantial pressure to cut corners on testing and risk-mitigation, in order to deploy high-capability ML systems in the hopes of outmaneuvering their competitors economically or militarily. The edge-case behaviors of deep learning models are notoriously difficult to debug [20], and without thorough testing and mitigation, such bugs in increasingly capable systems may have increasingly severe consequences.
Even when rival parties would all prefer to individually do more testing and risk-mitigation, or even forgo developing particularly dangerous types of ML models entirely [60], they may have no way to verify whether their competitors are matching their levels of caution. In the event that such risks do emerge, governments may wish to enforce limits on the large-scale development of ML models. While law-abiding companies will comply, criminal actors, negligent companies, and rival governments may not, especially if they believe their rule-violations will go unnoticed. It would therefore be useful for governments to have methods for reliably _verifying_ that large-scale ML training runs comply with agreed rules. These training runs' current need for large quantities of specialized chips leaves a large physical and logistical footprint, meaning that such activities are generally undertaken by sizable organizations (e.g., corporate or governmental data-center operators) well-equipped to comply with potential regulations. Yet even if the relevant facilities are known, there is no easily-observable difference between training a model for social benefit, and training a model for criminal misuse -- they require the same hardware, and at most differ in the code and data they use. Given the substantial promise of deep learning technologies to benefit society, it would be unfortunate if governments, in a reasonable attempt to curtail harmful use-cases but unable to distinguish the development of harmful ML models, ended up repressing the development of beneficial applications of ML as well. Such dynamics are already appearing: the US Department of Commerce's rationale for its October 2022 export controls denying the sale of high-performance chips to the People's Republic of China, while not specific to ML, was based in part on concern that those chips might be used to develop weapons against the United States or commit human rights abuses [5]. If the US and Chinese governments could reach an agreement on a set of permissible beneficial use-cases for export-controlled chips, and had a way to verify Chinese companies' compliance with that agreement, it may be possible to prevent or reverse future restrictions. Such a system of verification-based checks and balances, distinguishing between "safe" and "dangerous" ML model training, might seem infeasible. Yet a similar system has been created before. At the dawn of the nuclear age, nations faced an analogous problem: reactor-grade uranium (used for energy) and weapons-grade uranium (used to build nuclear bombs) could be produced using the same types of centrifuges, just run for longer and in a different configuration. In response, in 1970 the nations of the world adopted the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and empowered the International Atomic Energy Agency (IAEA) to verify countries' commitments to limiting the spread of nuclear weapons, while still harnessing the benefits of nuclear power. This verification framework has helped the world avoid nuclear conflict for over 50 years, and helped limit nuclear weapons proliferation to just 9 countries while spreading the benefits of safe nuclear power to 33 [40]. If future progress in machine learning creates the domestic or international political will for enacting rules on large-scale ML development, it is important that the ML community is ready with technical means for verifying such rules. 
### Contributions In this paper, we propose a monitoring framework for enforcing rules on the _training_ of ML2 models using large quantities of specialized ML chips. Its goal is to enable governments to verify that companies and other governments have complied with agreed guardrails on the development of ML models that would otherwise pose a danger to society or to international stability. The objective of this work is to lay out a possible system design, analyze its technical and logistical feasibility, and highlight important unsolved challenges that must be addressed to make it work. Footnote 2: Throughout the text, we use “ML” to refer to deep-learning-based machine learning, which has been responsible for much of the progress of recent years. The proposed solution has three parts: 1. To prove compliance, an ML chip owner employs firmware that logs limited information about that chip's activity, with their employment of that firmware attested via hardware features. We propose an activity logging strategy that is both lightweight, and maintains the confidentiality of the chip-owner's trade secrets and private data, based on the NN weights present in the device's high-bandwidth memory. 2. By inspecting and analyzing the logs of a sufficient subset of the chips, inspectors can provably determine whether the chip-owner executed a rules-violating training run in the past few months, with high probability. 3. Compute-producing countries leverage supply-chain monitoring to ensure that each chip is accounted for, so that actors can't secretly acquire more ML chips and then underclaim their total to hide from inspectors. The system is compatible with many different rules on training runs (see Section 2.1), including those based on the total chip-hours used to train a model, the type of data and algorithms used, and whether the produced model exceeds a performance threshold on selected benchmarks. To serve as a foundation for meaningful international coordination, the framework aspires to reliably detect violations of ML training rules _even in the face of nation-state hackers attempting to circumvent it_. At the same time, the system does not force ML developers to disclose their confidential training data or models. Also, as its focus is restricted to specialized data-center chips, the system does not affect individuals' use of their personal computing devices. Section 2 introduces the problem of verifying rules on large-scale ML training. Section 3 provides an overview of the solution, and describes the occasional inspections needed to validate compliance. Sections 4, 5, and 6 discuss the interventions at the chip-level, data-center-level, and supply-chain respectively. Section 7 concludes with a discussion of the proposal's benefits for different stakeholders, and lays out near-term next steps. ### Limitations The proposed system's usefulness depends on the continued importance of large-scale training to produce the most advanced (and thus most dangerous) ML models, a topic of uncertainty and ongoing disagreement within the ML community. The framework's focus is also restricted only to training runs executed on specialized data-center accelerators, which are today effectively necessary to complete the largest-scale training runs without a large efficiency penalty. In Appendix A, we discuss whether these two trends are likely to continue. 
Additionally, hundreds of thousands of ML chips have already been sold, many of which do not have the hardware security features required by the framework, and may not be retrofittable or even locatable by governments. These older chips' importance may gradually decrease with Moore's Law. But combined with the possibility of less-efficient training using non-specialized chips, these unmonitored compute sources present an implicit lower bound on the minimum training run size that can be verifiably detected by the proposed system. Still, it may be the case that frontier training runs, which result in models with new emergent capabilities to which society most needs time to adapt, are more likely to require large quantities of monitorable compute.

More generally, the framework does not apply to small-scale ML training, which can often be done with small quantities of consumer GPUs. We acknowledge that the training of smaller models (or fine-tuning of existing large models) can be used to cause substantial societal harm (e.g., computer vision models for autonomous terrorism drones [44]). Separately, if a model is produced by a large-scale training run in violation of a future law or agreement, that model's weights may from then on be copied undetectably, and it can be deployed using consumer GPUs [55] (as ML inference requires far lower inter-chip communication bandwidth). Preventing the proliferation of dangerous trained models is itself a major challenge, and beyond the scope of this work. More broadly, society is likely to need laws and regulations to limit the harms from bad actors misusing such ML models. However, exhaustively _enforcing_ such rules at the hardware level would require surveilling and policing individual citizens' use of their personal computers, which would be highly unacceptable on ethical grounds. This work instead focuses attention upstream, regulating whether and how the most dangerous models _are created in the first place_.

Lastly, rather than proposing a comprehensive shovel-ready solution, this work provides a high-level solution design. Its contribution is in isolating a set of open problems whose solution would be sufficient to enable a system that achieves the policy goal. If these problems prove unsolvable, the system's design will need to be modified, or its guarantees scaled back. We hope that by providing a specific proposal to which the community can respond, we will initiate a cycle of feedback, iteration, and counter-proposals that eventually culminates in an efficient and effective method for verifying compliance with large-scale ML training rules.

### Related Work

This paper joins an existing literature examining the role that compute may play in the governance of AI. Early work by Hwang [23] highlighted the potential of computing power to shape the social impact of ML. Concurrent work by Sastry et al. [51] identifies attributes of compute that make it a uniquely useful lever for governance, and provides an overview of policy options. Closely-related work by Baker [4] draws lessons from nuclear arms control for the compute-based verification of international agreements on large-scale ML. Rather than focusing on specific policies, the present work proposes a technical platform for verifying many possible regulations and agreements on ML development.
Already, the EU AI Act has proposed establishing risk-based regulations on AI products [61], while US senators have proposed an "Algorithmic Accountability Act" to oversee algorithms used in critical decisions [11], and the Cyberspace Administration of China (CAC) has established an "algorithm registry" for overseeing recommender systems [43]. Internationally, many previous works have discussed the general feasibility and desirability of AI arms control [47, 12, 37], with [52] highlighting the importance of verification measures to the success of potential AI arms control regimes. Past work has also explored the benefits of international coordination on non-military AI regulation [13].

The proposed solution involves proving that a rule-violating ML training run was _not_ done, in part by proving which other training runs _were_ done. The analysis of the latter problem is heavily inspired by the literature on Proof-of-Learning [24, 15] (discussed further in Section 5). Other works have used tools from cryptography to train NN models securely across multiple parties [63], and to securely prove the correctness of NN inference [30]. However, these approaches suffer large efficiency penalties and cannot yet be scaled to cutting-edge model training, rendering them nonviable as a method for verifying rules on large-scale training runs.

## 2 The Problem: Detecting Violations of Large-Scale ML Training Rules

We focus on the setting in which one party (the "Verifier") seeks to verify that a given set of ML training rules is being followed, and another party (the "Prover") is developing the ML system and wants to prove to the Verifier that it is complying with those rules. The Verifier can request that the Prover take actions, such as disclosing information on training runs, in order to help the Verifier determine the Prover's compliance. The Prover is a "covert adversary" [2]: they may benefit from _violating_ the ML training rule, but will only seek to violate the rule _if they can still appear compliant_ to the Verifier. There are two real-world Prover-Verifier relationships we are particularly interested in:

* _Domestic Oversight_: Governments have a clear interest in ensuring that the ML systems developed by companies operating within their borders comply with certain rules. Regulators can level both civil and criminal penalties on organizations caught violating rules, and often require organizations to maintain records that prove regulatory compliance (e.g., financial transaction record-keeping requirements).
* _International Oversight_: The most significant types of ML training rules may be those enforced internationally (on companies and governments in multiple countries), and verified by other governments or international bodies. These include enforcing globally-beneficial rules (e.g., combatting disinformation), and verifying arms control agreements (e.g., limiting the development of autonomous code-generating cyberweapons). There is precedent for countries abiding by international agreements with strict monitoring regimes when they stand to benefit, such as Russia's historically allowing random U.S. inspections of its missiles as a part of the START treaties, in exchange for certainty that the U.S. was abiding by the same missile limits [53].

Thus, the problem we address is: what minimal set of verifiable actions can the Verifier require the Prover to take that would enable the Verifier to detect, with high probability, whether the Prover violated any training rules?
### What types of rules can we enforce by monitoring ML training?

It is important that standards and agreements on ML training focus on preventing concrete harm, and otherwise leave society free to realize the broad benefits of highly-capable ML systems. Indeed, there are many types of ML models that should not only be legal to train, but that should be open-sourced so that all of society can benefit from them [58]. The proposed framework focuses only on enforcing rules on the training of those more dangerous models whose creation and distribution would substantially harm society or international security. Indeed, as mentioned in Section 1.2, this framework _could not_ prevent smaller-scale training of ML models, and thus limits the risk of overreach by authoritarian Verifiers.

Below are some informative properties that a Verifier could determine by monitoring the training process of an ML model:

* _Total training compute_, which has proven to be an indicator of ML models' capabilities [25, 59].
* _Properties of the training data_, such as whether a language model's text dataset contains code for cybersecurity exploits.
* _Properties of the hyperparameters_, such as the fraction of steps trained via reinforcement learning.
* _The resulting model's performance on benchmarks designed to elicit its capabilities_, including whether the model's capabilities exceed agreed-on thresholds, and including interactive benchmarks (e.g., finetuning the model on a particular task).
* Combinations of the above -- for example, "if a model was trained on RL-for-code-generation for greater than \(X\) FLOPs, then it should not be trained beyond \(Y\) performance on \(Z\) benchmarks."

Ultimately, these rule thresholds should be selected based on the model capabilities that would result. Current "scaling law" extrapolations are not yet able to reliably predict ML models' downstream capabilities [16], so finding principled methods for deciding on rule thresholds that achieve desired policy outcomes is an important area for future work.

If a Verifier can reliably detect the aforementioned training run properties, that would allow them to mandate several types of rules, such as:

* _Reporting requirements_ on large training runs, to make domestic regulators aware of new capabilities or as a confidence-building measure between companies/competitors [22].
* _Bans or approval-requirements_ for training runs considered overly likely to result in models that would threaten society or international stability. Approval could be conditioned on meeting additional requirements (e.g., willingness to comply with downstream regulations on model use, increased security to prevent model theft, greater access for auditors).
* _Requiring that any trained model be modified to include post-hoc safety mitigations_ if the unmodified model could be expected to pose a severe accident risk absent those mitigations. Such safety assessments and mitigations (such as "Helpful and Harmless" finetuning [3]) may involve a prohibitive upfront cost that companies/governments would otherwise avoid. However, once they have been forced to make the investment and built a less accident-prone model, they may then prefer to use the safer version.

Such rules allow all parties to coordinate on spending more resources on safe and responsible innovation, without fearing that their competitors may secretly undercut them by rushing ahead without addressing negative externalities.
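To make the shape of such rules concrete, the following is a minimal, illustrative sketch of how a Verifier-side tool might evaluate the properties recovered from a verified training transcript against agreed thresholds. The thresholds, field names, and dataset tags here are invented for illustration; actual rules would be set by regulators or by international agreement, and the hard part (addressed in the rest of the paper) is making these properties verifiable in the first place.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real values would be set by regulators or treaty.
FLOP_REPORTING_THRESHOLD = 1e24
RL_CODEGEN_FLOP_LIMIT = 1e23
BENCHMARK_LIMITS = {"code-gen-benchmark": 0.80}

@dataclass
class TrainingRunReport:
    total_flops: float
    rl_codegen_flops: float
    dataset_tags: set
    benchmark_scores: dict

def check_rules(report: TrainingRunReport) -> dict:
    """Map each (illustrative) rule to whether the reported run triggers it."""
    over_rl = report.rl_codegen_flops > RL_CODEGEN_FLOP_LIMIT
    over_bench = any(report.benchmark_scores.get(name, 0.0) > limit
                     for name, limit in BENCHMARK_LIMITS.items())
    return {
        "reporting_required": report.total_flops > FLOP_REPORTING_THRESHOLD,
        "banned_training_data_present": "cyber_exploit_code" in report.dataset_tags,
        "rl_codegen_capability_rule_violated": over_rl and over_bench,
    }

# Example: a run that exceeds the reporting threshold but violates no bans.
example = TrainingRunReport(total_flops=3e24, rl_codegen_flops=1e21,
                            dataset_tags={"web_text"},
                            benchmark_scores={"code-gen-benchmark": 0.41})
print(check_rules(example))
```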
### Other Practical Requirements

There are several other considerations for such a monitoring system to be practical. Its cost should be limited, both by limiting changes to current hardware, and by minimizing the ongoing compliance costs to the Prover and enforcement costs to the Verifier. The system should also not pose a high risk of leaking the Prover's proprietary information, including model weights, training data, or hyperparameters. Most importantly, the system must be robust to cheating attempts, even by highly-resourced adversaries such as government hacking groups, who may be willing to employ sophisticated hardware, software, and even supply-chain attacks.

## 3 Solution Overview

In this section, we outline a high-level technical plan, illustrated in Figure 1, for Verifiers to monitor Provers' ML chips for evidence that a large rule-violating training run occurred. The framework revolves around chip inspections: the Verifier will inspect a sufficient random sample of the Prover's chips (Section 3.2), and confirm that none of these chips contributed to a rule-violating training run. For the Verifier to ascertain compliance from simply inspecting a chip, we will need interventions at three stages: on the chip, at the Prover's data-center, and in the supply chain.

Figure 1: Overview of the proposed monitoring framework.

* _On the chip_ (Section 4): When the Verifier gets access to a Prover's chip, they need to be able to confirm whether or not that chip was involved in a rule-violating training run. Given that rule violation depends only on the code that was run, our solution will necessitate that ML chips log infrequent traces of their activity, with logging done via hardware-backed firmware. We suggest that ML chips' firmware occasionally log a copy of the current state of the chip's high-bandwidth memory to long-term storage, and in particular, that it logs the shard of the NN's weights stored in memory. These _weight-snapshots_ can serve as a fingerprint of the NN training that took place on each chip.
* _At the data-center_ (Section 5): The Verifier needs a way to interpret the chips' logs, and determine whether or not they are evidence for a rule-violating training run. To that end, the Prover, who is training the model, will be required to store a transcript of the training process -- including training data, hyperparameters, and intermediate weight checkpoints -- for each model they train. Using protocols similar to "Proof-of-Learning" [24], these training transcripts may serve as provenance for the logged weight-snapshots, which are themselves the result of the same training process. In practice, for each (hash of a) weight-snapshot logged by a chip, the Prover provides the Verifier with (the hashed version of) the matching training transcript. Then the Prover and Verifier jointly and securely verify that, with high probability, retraining using the training transcript would have indeed resulted in the logged weight-snapshot (and that no other valid training transcript could have resulted in that snapshot). Finally, now that the Verifier knows an approximate training transcript of the training run that had been executed on that chip at that time, they can examine properties of the training transcript to confirm that the Prover has complied with the agreed-upon rules.
* _At the supply chain_ (Section 6): The Verifier needs to know which ML chips the Prover owns, so that the Verifier can randomly inspect a representative sample of those chips, to confirm their ownership and that their logging mechanisms are reporting correctly. Without this chip-ownership verification step, a Prover might covertly acquire a large quantity of chips and use them for training without ever notifying the Verifier. Keeping track of chip ownership is viable because the cutting-edge data-center chip supply chain is highly concentrated, meaning that chips originate from a few initial chokepoints and can be monitored thereafter.

These steps, put together, enable a chain of guarantees.

* When any organization wants to train a large rule-violating ML model, they must do so using chips that the Verifier is aware of.
* These chips will occasionally log weight-snapshots. Each time a chip logs a weight-snapshot, the Prover must report the log to the Verifier, along with (hashes of) training transcripts that establish the provenance of that weight-snapshot.
* If the Prover tries to hide the logs from every chip involved in the training run, the Verifier will eventually find out, because it will sample and physically inspect at least one of those chips with high probability.
* Also, the hashed training transcripts that the Prover provides along with the logged weight-snapshot need to be authentic, reflecting the training run that was run on the chip. If they are not authentic, they will be caught by the Verifier's transcript verification procedure.
* If the training transcript _is_ authentic, and the run violated the rules, then the Verifier can tell, and will catch the Prover.

Thus, so long as the Prover complies with the Verifier's steps, the Verifier will detect the Prover's rule-violation with high probability. Just as in financial audits, a Prover's refusal to comply with the verification steps would itself represent an indication of guilt.

### Chip inspections

Ideally, chips could remotely report their logs, with on-chip firmware and remote attestation being sufficient to guarantee that those logs were truthfully reported (see Section 4). Such guarantees may be sufficient to deter most Provers from misbehavior. Unfortunately, these measures may not restrain well-resourced adversarial Provers: it is generally understood within the hardware security world that, given unlimited physical access to a chip (which the Prover certainly would have), a chip's attestation and signed-firmware-enforcement mechanisms can be undermined [56]. However, by physically inspecting chips after the fact, it may be possible to reliably detect such tampering. This is because most hardware attacks require physically damaging the chip in ways that would be nearly impossible to hide from a Verifier's subsequent inspection, especially if ML chips are supplemented with tamper-evident packaging measures [64]. Luckily, rather than needing to inspect every chip, we only need to inspect enough to catch at least one chip from each large training run (Section 3.2).

A chip inspection can be done in one of two ways. The Verifier can send an inspector to the facility in which a sampled chip is stored, which may be cost-effective if the facility contains multiple chips selected for sampling. Alternatively, the ML chips can be _transported to a neutral inspection site_, as these chips are generally housed in easily-removable servers. (Sensitive components like storage can be removed prior to transport.)
The transporting option may be more attractive to security-conscious Provers who would prefer not to allow Verifier inspectors into their data-centers.

A chip inspection involves several steps. First, the Verifier confirms that the chip's serial number (often burned into chips and accessible both digitally and physically) matches the serial number they requested from the Prover. This confirms that this is the correct randomly-sampled chip, and also that the Prover still owns the chip and has not secretly resold it to an untrusted party. Second, the Verifier checks the on-chip logs, from which they extract the weight-snapshot hashes that they will check against the Prover's reported training transcripts (Section 5). Finally, the Verifier checks that the chip's logging mechanisms, such as its secure firmware authentication mechanism (Section 4), have not been physically tampered with.

### How many ML chips does the Verifier need to inspect?

Sections 4, 5, and 6 will together guarantee that if the Verifier observes even a single ML chip that logged a weight-snapshot hash during a rule-violating training run, then the Verifier can use that weight snapshot to prove that the Prover violated the rules. For the Verifier to make sure they would observe at least one such snapshot from a large training run with high probability (or alternatively, catch a chip that had been tampered with), they would need to inspect at least \(s\) of the Prover's \(C\) chips every \(T_{m}\) days. We now seek to calculate \(s\).

We define a large-scale training run as any training run using more than \(H\) FLOPs. We only seek to verify rules on training runs using \(>H\) FLOPs, meaning that any weight-snapshots from the first \(H\) FLOPs are not incriminating (as the Prover could have stopped the training run after that snapshot). Catching a chip that happened to save a weight-snapshot at time \(H+\epsilon\) is vanishingly unlikely for small \(\epsilon\), so we instead provide guarantees on detecting the case where the Prover's training run is executed for \(\geq 2H\) FLOPs, substantially exceeding \(H\). This means that the training run was executed for an additional second period of at least \(H\) FLOPs, during which any weight snapshot would capture evidence of an ML model that had already been trained for at least \(H\) FLOPs. From hereon, when we describe detecting a snapshot of a "large-scale training run", we refer to this second stage of the training run. If a Verifier wanted to catch a Prover _before_ they had completed an \(H\)-FLOP rule-violating training run, they can simply shrink their target threshold to a fraction of \(H\), and sample chips on a rolling basis.

Footnote 3: We can always keep the detection threshold the same by cutting \(H\) in half. Also, the literature on neural scaling laws [25, 49, 62] suggests that model loss decreases logarithmically at best given additional compute, so failing to catch a less-than-2x violation may have limited consequences.

The Verifier, whether due to specific suspicions or routine due diligence, wants to audit a particular Prover's \(C\) chips. Let \(a\) be the FLOPs per day for those chips. We will conservatively assume that NN training can be perfectly parallelized and utilize all the FLOPs on each chip. Let \(f\) be the expected number of weight-snapshots saved by each chip per day (see Section 4); we assume snapshots are Poisson-distributed.
The Verifier wants to catch a training run within at most \(T_{m}\) days of its concluding, though the training run itself may have lasted for a longer period \(T>T_{m}\). This implies that the Prover must have used a minimum of \(\frac{H}{aT}\) chips. One might expect that by increasing \(T\), the Prover could use fewer chips and therefore evade detection, but this turns out not to meaningfully improve the Prover's evasion probability (a \(<5\%\) increase) because each chip is sampled \(T/T_{m}\) times as often. (See Appendix C for details.) Thus for simplicity, we assume the Verifier will choose \(s\) as though \(T\) is fixed.

Footnote 4: For simplicity, we assume \(T\) is divisible by \(T_{m}\).

To detect at least one chip with a snapshot from a rule-violating training run with probability \(p\), the Verifier needs to sample \(s\) chips each monitoring period \(T_{m}\), where:

\[s\geq\frac{\log\left(1-p\right)}{\log\left(1-\frac{H}{aTC}\left(1-e^{-fT_{m}}\right)\right)}\cdot\frac{T_{m}}{T} \tag{1}\]

We provide a derivation for Equation 1 in Appendix B. In Table 1, we provide the total required samples-per-year \(365\cdot s/T_{m}\) to catch several past and projected training runs, for Provers who own different quantities of chips \(C\). As the table shows, the number of inspected chips remains approximately a fixed fraction of the full stockpile, with that fraction decreasing as the training run size \(H\) grows.

An additional difficulty is the possibility that multiple Provers collude and each contribute a small fraction of their chips to a shared training run, such that if only one of the Provers is audited, the fraction of their \(C\) chips they used for the training run is much smaller than \(\frac{H}{aTC}\). This is particularly relevant in the International Oversight setting, where a government Prover might force the participation of multiple firms. One solution to this is for an international Verifier to do background monitoring of _all_ Provers, in effect treating them all as one large Prover with access to the global chip stockpile. Predictably, this would require a much greater number of samples, and is likely to only be feasible for regulating the largest-scale training runs (which can be caught with fewer samples). Yet, for training runs at the scale of \(10^{25}\) FLOPs or greater, monitoring could be done with a bureaucracy similar in size to the IAEA.

Footnote 5: We want to estimate the number of inspectors needed to catch a Chinchilla-280B-sized training run, with \(10^{25}\) FLOPs, given several more years of hardware progress and global production. Given \(C=10^{7}\) worldwide chips (\(>5\times\) global stocks as of 2022), each of which can output \(a=3\cdot 10^{15}\cdot 86400\) FLOPs per day (\(3\times\) more FLOP/s than the NVIDIA H100), detecting a Chinchilla-280B-sized run within \(T=30\) days of its completion anywhere on earth with 90% probability would require roughly 232,000 worldwide chip samples per year. A single inspector might be expected to verify at least 1000 chips a year, especially if those chips are brought to a central location (see Section 3.1). This would require \(\approx 232\) inspectors, slightly smaller than the 280 active IAEA inspectors as of 2021 [39].

As individual chips become more powerful (\(a\) grows) and algorithmic efficiency increases (\(H\) shrinks), the number of required samples would need to grow in order to verify rules on a fixed-capability training run. Still, it may be that the training runs of greatest societal concern are those _near the frontier_ of scaling, and those are likely to remain detectable with few samples for the foreseeable future, even if Provers attempt to undermine them with physical tampering.
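As a sanity check on Equation 1, the short sketch below computes \(s\) and the implied annual sample count directly from the formula, using parameter values mirroring the assumptions in Table 1's caption. Small discrepancies with the table's exact entries are possible due to rounding and modeling details; this is only an illustration of how the formula is applied.

```python
import math

def samples_per_period(H, a, C, f, T, T_m, p):
    """Chips to sample each monitoring period T_m, per Equation 1.

    H   : training-run size threshold (FLOPs)
    a   : FLOPs per chip per day
    C   : total number of chips owned by the Prover
    f   : expected weight-snapshots per chip per day
    T   : assumed duration of the training run, in days
    T_m : monitoring period, in days
    p   : desired probability of catching at least one incriminating chip
    """
    # Probability that one uniformly sampled chip carries an incriminating snapshot.
    per_chip = (H / (a * T * C)) * (1 - math.exp(-f * T_m))
    return math.log(1 - p) / math.log(1 - per_chip) * (T_m / T)

# GPT-3-scale example with the Table 1 assumptions (H100-class chip, f = 0.1, p = 0.9).
a = 1e15 * 24 * 3600                    # FLOPs per chip-day
s = samples_per_period(H=3.14e23, a=a, C=1e5, f=0.1, T=30, T_m=30, p=0.9)
print(round(s), round(365 * s / 30))    # roughly 2,000 per period, about 2.4e4 per year
```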
## 4 On the chip

In an NN training cluster, a large number of ML devices (each containing an ML chip, and connected to a "host" CPU chip) collaborate to simultaneously compute gradient updates. Large NNs have too many weights to fit in the high-bandwidth memory of a single ML device, so efficiently training a model requires splitting the model's weights into _weight shards_, and placing a shard on each chip [57]. Each such shard is a vector of millions or billions of values. Shards are updated after each gradient step so that new gradients can always be computed with respect to the latest weight values. This means that at every timestep, the chip contains an up-to-date result from the training run, which we will show in Section 5 can serve as a type of checksum for the training run.

In this section, we will describe how it may be possible to have ML chips log the values of these weight-shard vectors, for Verifiers to retroactively inspect. We will assume the presence of certain hardware features on the ML chip, though the details of the approach may need to be modified depending on the specifics of the device. For example, if the ML chip does not have the hardware required for remote firmware attestation, the chip may need to be permanently mated to a host CPU that does have such hardware, with the CPU then orchestrating the logging operations.

Footnote 6: These include standard components of a hardware security module: firmware verification and attestation, firmware anti-rollback protection, and the ability to sign, authenticate, and encrypt messages. It also includes the ability for firmware to raise interrupts, read device memory, and (to efficiently hash weight snapshots) execute code on the chip's processor.

A chip's weight-shards can be captured by _snapshotting_ the chip's high-bandwidth memory, which contains the weights. These snapshots must be saved at random times, as otherwise the Prover can swap out the true weights for a fake set of weights right before the scheduled logging. With some small probability at every clock cycle, the firmware present on each of the Prover's chips should raise an interrupt and briefly freeze the chip's operations. The chip should then send a copy of the weight-shard regions \(\omega\) to its host CPU to save in Prover-controlled long-term storage. The firmware also computes a hash \(h=\mathcal{H}(\omega)\) of the relevant memory regions, using a publicly-agreed cryptographic hash function \(\mathcal{H}\).

Footnote 7: Such hashing can be done cheaply so long as the firmware can leverage the ML chip's processing power.
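The following is a minimal host-side sketch of the randomized snapshot-and-hash step just described. It is illustrative only: the real mechanism would live in signed chip firmware, the per-step probability would be tuned to the target snapshot rate \(f\), and the hash function would be whatever \(\mathcal{H}\) is publicly agreed (SHA-256 is assumed here); the function and file names are invented.

```python
import hashlib
import json
import os
import time

def maybe_snapshot(weight_shard_bytes: bytes, step_probability: float,
                   log_path: str = "snapshot_log.jsonl") -> None:
    """Illustrative stand-in for the firmware's randomized snapshot routine.

    With small probability on each call, hash the weight-shard memory region and
    append the digest (with a timestamp) to an append-only log. The raw shard
    itself would be handed to the host CPU for Prover-controlled long-term storage.
    """
    if int.from_bytes(os.urandom(8), "big") / 2**64 >= step_probability:
        return  # no snapshot this time
    digest = hashlib.sha256(weight_shard_bytes).hexdigest()
    record = {"time": time.time(), "shard_sha256": digest}
    with open(log_path, "a") as handle:
        handle.write(json.dumps(record) + "\n")
```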
Table 1: Example numbers of required total _annual_ samples \(365\cdot s/T_{m}\) to catch a chip from every large-scale training run within \(T=30\) days, given \(a=10^{15}\cdot 24\cdot 3600\) (the daily 16-bit Tensor Core FLOPs of an NVIDIA H100 SXM GPU [42]), \(f=0.1\) weight snapshots per day (see Section 4), and the Verifier's desired probability of catching a rule-violating training run \(p=0.9\). Models marked with \({\dagger}\) are projections for future training requirements [21].

| Model | Training FLOPs \(H\) | H100-days \(H/a\) | H100s to train in 1 yr | Samples/yr if \(C=10^{3}\) | Samples/yr if \(C=10^{5}\) | Samples/yr if \(C=10^{7}\) |
|---|---|---|---|---|---|---|
| GPT-3 [6] | 3.14e+23 | 3.64e+3 | 10 | 243 | 2.43e+4 | 2.43e+6 |
| Chinchilla [21] | 5.76e+23 | 6.67e+3 | 19 | 132 | 1.33e+4 | 1.33e+6 |
| PaLM [10] | 2.56e+24 | 2.96e+4 | 82 | 29 | 2.98e+3 | 2.99e+5 |
| Chinchilla-280B \({\dagger}\) | 9.9e+24 | 1.15e+5 | 314 | 7 | 771 | 7.72e+4 |
| Chinchilla-1T \({\dagger}\) | 1.27e+26 | 1.47e+6 | 4.03e+3 | — | 60 | 6.02e+3 |
| Chinchilla-10T \({\dagger}\) | 1.3e+28 | 1.5e+8 | 4.12e+5 | — | — | 58 |

This hash \(h\) is then saved for the Verifier to find later, either by having the firmware store it on-chip in nonvolatile memory writable only by the firmware, or by having the chip sign the hash with its private key and then send it to a Verifier-trusted central server (which must return an acknowledgement of receipt before the chip can resume operating). Later, a Verifier can use \(h\) to ask the Prover to provide them with the original weight shard \(\omega\), as this is the only input to \(\mathcal{H}\) whose hash matches \(\mathcal{H}(\omega)=h\).

Footnote 8: Keeping the hash in local NVRAM is useful if the ML chip is in a secure data-center and is restricted from communication with the outside world, such as via air-gapping.

More frequent snapshots mean more performance overhead for the Prover. This freezing may become expensive if all the other synchronized chips in the training run must hang until the single chip's snapshotting concludes, especially if snapshotting is done by many chips. The frequency of snapshots should therefore be set as low as possible while still ensuring that used chips reliably contain a relevant weight snapshot if they were employed in a large-scale training run (see Eq. (1)).

Footnote 9: A possible alternative would be to orchestrate less-frequent snapshots via a Verifier-trusted central server, though this would require a solution to the problem of the Prover swapping out the chip's weights before propagating the Verifier's signal.

Though the Prover may claim to have used firmware that logged snapshots as described above, the Verifier may not trust the Prover to have indeed used the appropriate logging firmware. For this reason, ML chips also need hardware mechanisms for enforcing that firmware be signed, and in turn, chipmakers will need to only sign firmware if it implements appropriate logging mechanisms.

Footnote 10: A similar firmware mechanism was used by NVIDIA to enforce cryptocurrency mining limits on its GPUs [17].

Footnote 11: Assuming that logging-free versions have been signed in the past, ML chips may need to include anti-rollback functionality [28]. The Prover might also delay updating the chip's firmware until shortly before inspection.
To avoid this, ML chips may need to be made to remotely attest at an earlier time that their firmware has been updated.

An obstacle to logging all weight-shards stored in ML device memory is that different ML training code will store a model's weights in different regions of memory. The chip's firmware must be able to determine which region of memory the weights are stored in. It may be possible to determine the weight-regions retroactively, by logging the regions of memory claimed to correspond to the weights, along with a copy of the compiled on-device code, which can then be matched to Prover-provided source code and its memory allocation pattern analyzed. As a more invasive alternative, the Prover could proactively certify that its chip-level ML training code stores the weights in a specific memory region, by having its chip-code verified and then signed by a Verifier-trusted server before it can be executed by the firmware.

Footnote 12: Additional challenges arise if the Verifier suspects that the chipmaker (who authors the firmware) and Prover have colluded to create firmware with backdoors that disable logging. Increasing Verifiers' confidence in the firmware may be an important consideration when verifying the operations of powerful nation-state Provers.

Footnote 13: It may even be possible to modify standard libraries for generating chip-level ML training code (e.g., PyTorch-generated CUDA) to make their memory allocation processes more easily checkable by a subsequent Verifier.

Footnote 14: Revealing the Prover's source code to the Verifier directly may be unacceptable, demanding a more complicated verification procedure like that described in Section 5.2.

Footnote 15: The iOS App Store uses a similar method to ensure Apple devices only run signed programs [1].

A more challenging problem is that "ML chips" are general-purpose hardware accelerators, and thus are frequently used for other high-performance computing workloads like climate modeling. There is no straightforward way to determine whether an ML chip is running a neural network training job (and therefore should log weight snapshots), or an unrelated type of job exempt from Verifier oversight. Relatedly, it would also be desirable to be able to exempt a fraction of chips from monitoring, for use in education or small-scale R&D where the overhead of compliance would be prohibitive. Until we find methods for reliably distinguishing large-scale ML training, some fraction of ML chips must be exempted from using weight-logging firmware. The larger system's guarantee would then be that _for those of the Prover's chips that the Prover allows the Verifier to monitor_, no rule-violating training occurred. The Verifier may then use their leverage to incentivize Provers to opt in a larger and larger fraction of their chips over time.

Footnote 17: This might be addressable by having these ML chips' interconnect topology restricted to prevent their use in large-scale training. However, methods for Verifiers to retroactively confirm the topology that ML chips were configured in at a data-center are beyond the scope of this work.

## 5 At the data-center

From Sections 3.2 and 4, we know how a Verifier can get access to at least one weight shard \(\omega\) snapshotted during a large-scale training run. In this section, we will show how the Verifier can use this weight shard to determine whether the associated training run followed the agreed-upon rules.
On its own, knowing \(\omega\) is useless to the Verifier, since the Verifier does not know the values of the other weight-shards comprising the NN, or even how many other weight shards there are. Yet \(\omega\) holds the key to the training run's history. Any given weight-shard vector is incredibly unlikely to be yielded by the training process (there are \(\gg 2^{10^{6}}\) possibilities even if we round to 1-bit precision). But this particular \(\omega\) is the nearly-deterministic result of the training run's hyperparameters \(\mathbb{M}\) (including its random seed) and sequence of training data \(\mathbb{D}\). We refer to this information, along with information yielded throughout the training run \(\mathbb{W}\) (such as intermediate weight snapshots), as the model's _training transcript_ \(\mathbb{T}=\{\mathbb{M},\mathbb{D},\mathbb{W}\}\).

It is plausible that it would be very difficult for the Prover to find a second "legitimate" training transcript \(\mathbb{T}^{\prime}=\{\mathbb{M}^{\prime},\mathbb{D}^{\prime},\mathbb{W}^{\prime}\}\neq\mathbb{T}\) that would exactly yield a full set of weights that included \(\omega\) as a contiguous weight-shard. It is also plausible that, if the Prover tried to lie and provide an \(\mathbb{M}^{\prime}\), \(\mathbb{D}^{\prime}\), and \(\mathbb{W}^{\prime}\) that would _not_ result in \(\omega\) but which they _claimed_ would result in \(\omega\), then the Verifier might find a suite of tests that would detect such a faked transcript reliably. If the Verifier could reliably detect such fakes, then by asking the Prover to provide a training transcript that would yield each logged snapshot \(\omega\), the Verifier could force the Prover to reveal the true data \(\mathbb{D}\) and hyperparameters \(\mathbb{M}\) used in every large-scale training run (along with the full weights of the resulting model, not just \(\omega\)). This would be sufficient information to check that the training run complied with any of the training rule types listed in Section 2.1.

Unfortunately, having the Verifier simply re-run the full training process to verify a transcript's correctness is unworkable for several reasons. First, the Prover would likely not be willing to reveal their training data, model weights, and hyperparameters, so the Verifier must do any verification without direct access to the data. (We address this in Section 5.2.) Second, the compute cost to fully re-run the training transcript would be massive, as large as every original training run. Third, the training run would likely not be perfectly reproducible: due to hardware-level noise, even two repetitions of the same sequence of training updates would gradually diverge. Fourth, the Prover _may_ be able to construct a second "spoof" training transcript that yields an exact match for \(\omega\) but differs from the original training run that yielded \(\omega\) in the first place.

Footnote 18: As a trivial example, the Prover could claim that \(\omega\) was simply the value of the random initialization, and no training had happened at the time of the snapshot.

Thankfully, a close variant of this problem has already been studied in the literature, known as "Proof-of-Learning" [24].
The goal of a Proof-of-Learning (PoL) schema is to establish proof of ownership over a model \(W_{t}\) (e.g., to corroborate IP claims) by having the model-trainer save the training transcript \(\mathbb{T}\) (including hyperparameters \(\mathbb{M}\), data sequence \(\mathbb{D}\), and a series of intermediate full-model weight checkpoints \(\mathbb{W}=\{W_{0},W_{k},W_{2k},\dots\}\)) which only the original model trainer would know. Jia et al. [24] propose a verification procedure that makes it difficult for any third party to construct a spoofed transcript \(\mathbb{T}^{\prime}\), if they only have access to \(W_{t}\) and the unordered dataset.

Footnote 19: We use "weight checkpoints" as shorthand, but if using an optimizer like Adam [26], the optimizer state should also be included.

The solution of [24] is as follows: once a Prover reports a training transcript \(\mathbb{T}\), the Verifier checks that the initialization appears random, and then chooses a number of pairs of adjacent weight snapshots that are \(k\) gradient steps apart \((W_{i},W_{i+k}),\dots,(W_{j},W_{j+k})\). Then, rather than re-running all of training, the Verifier only re-runs the training of these specific segments, starting at \(W_{i}\) and progressing through the appropriate data batches \(D_{i}\dots D_{i+k}\) to yield a \(W^{\prime}_{i+k}\). The Verifier then confirms that the resulting checkpoint is approximately correct: \(\|W_{i+k}-W^{\prime}_{i+k}\|_{2}<\epsilon\) for some small \(\epsilon\) that depends on the reproduction noise and the number of steps \(k\). The original paper proposed a heuristic for choosing which segments to verify, though subsequent papers have demonstrated spoofing attacks that evade the original verification approach [70, 15]. Thus far, the literature on PoL has been heuristic-based, and discovering PoL protocols with higher degrees of reliability is the subject of ongoing work [15].

### Proof-of-Training-Transcript

Our requirement, which we will define as "Proof-of-Training-Transcript" (PoTT), differs from Proof-of-Learning because our Prover knows a true training transcript that leads to the final weight-snapshot (since they executed the original training run), and the Verifier wants to verify that the Prover has reported the _original_ transcript that yielded \(\omega\) (or at least one very close to the original). Further, unlike PoL, we allow that an adversarial Prover might be willing to spend _more than the original training compute_ \(H\) in order to construct a spoof (e.g., an adversary may be willing to pay double the cost in order to execute a rule-violating training run without being detected). Thus, we must hope to push the cost to construct a spoof to be as high as possible, so as to make evasion impractical.

**Definition 1**.: _A PoTT protocol \((R,V)\) is defined by a Prover training and reporting protocol \(R\) and a Verifier verification protocol \(V\). A target weight vector \(W_{t}\) is produced as the true result of a training run which complies with a training and reporting protocol \(R\), and which can be summarized via a training transcript \(\mathbb{T}\in\mathcal{T}\), where \(\mathbb{T}=\{\mathbb{M},\mathbb{D},\mathbb{W}\}\), \(\mathbb{M}\) is the metadata required to reproduce the training run (including hyperparameters, random seed, and loss function), \(\mathbb{D}\) is an ordered sequence of sets of training data batches, and \(\mathbb{W}\) is a collection of intermediate weight checkpoints resulting from the training procedure.
The verification protocol \(V\) should accept any such true training transcript with high probability, \(\Pr[V(\mathbb{T},W_{t})=\text{accept}]>1-\delta_{1}\) for some small \(\delta_{1}\). A "spoofed" training transcript \(\mathbb{T}^{\prime}=\{\mathbb{M}^{\prime},\mathbb{D}^{\prime},\mathbb{W}^{\prime}\}\) is a transcript, which may not correspond to any valid training run, and which is substantially different from the original transcript \(\mathbb{T}\) in its data or hyperparameters: \(d_{1}(\mathbb{D},\mathbb{D}^{\prime})\geq\delta_{3}\) for some edit distance \(d_{1}\) quantifying the number of data point insertions/deletions, and/or \(d_{2}(\mathbb{M},\mathbb{M}^{\prime})\geq\delta_{4}\) for some hyperparameter distance \(d_{2}\). A reporting/verification protocol pair \((R,V)\) is \(J\)-efficient and \(F\)-hard if \(V\) runs in at most \(J\) time, and there does not exist any spoof-generation algorithm \(A\in\mathcal{A}:\mathcal{T}\rightarrow\mathcal{T}\) such that \(\Pr[V(A(\mathbb{T}),W_{t})=\text{accept}]>1-\delta_{2}\), where \(A\) runs in less than \(F\) time._

Colloquially, we want a Prover training and reporting protocol and a Verifier verification protocol such that the Verifier only accepts _original_ training transcripts that would result in a final weight checkpoint which contains a shard matching our on-chip weight-shard snapshot \(\omega\). We leave the problem of finding provably secure, efficient methods for PoTT as an important avenue for future work, but we discuss a few promising directions below.

PoTT appears to be strictly harder than PoL, as it requires robustness to a better-resourced adversary that has additional information (i.e., they know the true transcript \(\mathbb{T}\)) and more compute-time to construct a spoof. Given that existing PoL schemes are still heuristic-based and not yet provably secure, there may be a long way to go until we have PoTT methods that are both efficient and hard to spoof. Still, one dynamic favoring the Verifier is that the Prover must _commit_ to a training transcript without knowing the Verifier's verification strategies. Thus, Verifiers can amass secret collections of verification heuristics, much as the IAEA does not disclose all its methods for detecting nuclear activity. Even if PoTTs are only ever heuristic-based, the presence of this dynamic may dissuade Provers from taking the risk of being detected by an unexpected test.

Defining conditions on the types of legitimate training runs is another useful source of leverage. For example, one Prover cheating strategy could be for the Prover to report one long training run as many shorter training runs, each claimed to be "initialized" where the previous training run ended. A simple prevention would be for the training-and-reporting protocol \(R\) to require the Prover to initialize every training run's weights via a known pseudorandom generator and a short seed. This means that the randomness of the initial weights can later be confirmed by the Verifier.

Another promising strategy may be to require the Prover to _pre-commit_ to portions of its training transcript (e.g., the hyperparameters \(\mathbb{M}\) and training batches \(\mathbb{D}\)) at the start of training. This could be done by having the ML chip firmware log a hash of this precommitment, which would prove that the precommitment preceded the chip's snapshot \(\omega\).
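As an illustration of the precommitment idea, the sketch below hashes the hyperparameters and planned data order into a single digest that the firmware could log before training begins. The serialization and hash function are assumptions, not parts of a finished protocol: a real scheme would need a canonical encoding, and the ordered data would more plausibly be committed via content hashes or a Merkle root rather than raw batch identifiers.

```python
import hashlib
import json

def precommitment_digest(hyperparams: dict, planned_batch_ids: list) -> str:
    """Digest committing to hyperparameters M and the planned data order D.

    The Prover would have the chip firmware log this digest before training
    starts; the transcript later revealed to the Verifier must hash to the
    same value, proving the commitment preceded any logged weight-snapshot.
    """
    payload = json.dumps({"M": hyperparams, "D": planned_batch_ids}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Example: commit before training, then re-check once the transcript is revealed.
commit = precommitment_digest({"lr": 3e-4, "seed": 17}, ["batch-000", "batch-001"])
assert commit == precommitment_digest({"lr": 3e-4, "seed": 17}, ["batch-000", "batch-001"])
```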
At the time of precommitment, the Prover does not know what trajectory the training run will follow or at what time it will be snapshotted, as the training has not yet been performed. The Prover would be unable to construct a spoofed training transcript that would end at \(\omega\) and precommit to it, because \(\omega\) is not yet known. However, it is not obvious how to extend this approach to online learning settings like online RL, where the training data cannot be known ahead of time.

A final complication of our setting derives from the fact that the Verifier observes only a shard of the weights \(\omega\), and not the full weight vector \(W_{t}\). It could be easier to construct a spoofed training transcript for some \(\hat{W}\) which contains a shard matching \(\omega\), but which differs from the true original weights \(W_{t}\neq\hat{W}\) on the portion of the weight vector outside the shard. We briefly describe an intuition for why this is likely to be as hard as general PoTT-spoofing. Assuming \(\omega\) must contain weights from more than a single linear layer, any Prover must at minimum construct a valid PoTT for this smaller NN represented by \(\omega\), except without any original training transcript to start from (making it similarly hard to the original Proof-of-Learning problem). Alternatively, if the Prover tries to reuse the original training transcript, it will require them to synthesize fake data and labels to exactly reproduce the true inputs/gradients, and these must correspond to a reasonable loss function that is actually decreasing over time. If the Prover alternatively attempts to fake the surrounding layers' weights, then the weights in these surrounding layers must also have their own valid training trajectories as part of the training transcript _and_ must generate the appropriate inputs/gradients to yield \(\omega\), simultaneously.

### Real-world implementation

As the literature uncovers new methods for verifying training transcripts, the Verifier can run these protocols to check that, for each of the logged weight snapshots on the Prover's chips that the Verifier inspects, the Prover has provided a legitimate corresponding training transcript. The Verifier can then examine the data, hyperparameters, and resulting models as reported by the training transcript, and thereby detect whether the Prover complied with the training rules.

Footnote 20: Note that this requires the Prover to save and report training transcripts for all training runs corresponding to chips sampled by the Verifier, not just the largest-scale ones. This is because, without a matching training transcript, it is impossible for the Verifier to determine whether a given weight-shard was part of a large-scale training run or a small-scale one. Alternate methods for proving that a chip was only involved in a short training run are an important direction for future work.

A key design constraint is that the Prover does not disclose the raw transcripts (which likely contain confidential data that the Prover would prefer the Verifier did not see). We propose a method for the Verifier to verify the correctness of the Prover's previously-reported transcript without ever observing the raw data.
First, the Prover and Verifier utilize a "trusted cluster": a set of chips that both parties trust, which is air-gapped to prevent the Prover's sensitive data from leaking, and which is equipped to execute small amounts of ML inference and training (so as to segment-wise approximately reproduce the Prover's original training process). Second, instead of reporting the full transcript, the Prover initially only provides the Verifier with hashes of the training transcript \(\mathbb{T}\). Then, at a later verification stage, the Prover supplies the true \(\mathbb{M}\), \(\mathbb{D}\), and \(\mathbb{W}\) within the trusted cluster (which must match the reported hashes), executes the verification protocol, and reports whether both steps succeeded. Finally, the Verifier can use this same trusted cluster to run programs checking that the Prover's training transcript complies with all agreed ML training rules (e.g., measuring the performance of the final model weights on agreed benchmarks). We provide more details on a possible procedure in Appendix D.

Footnote 21: Maintaining such compatible training clusters may prove quite challenging.

One desirable direction for future work is verification methods by which the Verifier does not need to directly re-execute model training. For example, it may be possible for the Verifier to interactively query the Prover for additional information on segments of the training run beyond what was included in the training transcript. There may be queries that have valid answers _only if_ the original training transcript was authentic (e.g., a series of weight sub-checkpoints between two checkpoints, each with progressively lower loss), and the Prover could dynamically recompute answers to these queries using their own data-center. While some properties of the verification would still need to be confirmed using a neutral cluster to maintain the confidentiality of the query-responses, such clusters may not need to be equipped for large-scale training, and thus would be much easier to maintain.

When evaluating which training-transcript verification protocols to pursue, Verifiers should prioritize those verification strategies that yield the most detection-probability for the lowest cost. Beyond the upfront costs of building trusted clusters or modifying chip hardware, the system has three ongoing operating costs: the efficiency loss from pausing to save weight checkpoints and weight-shard snapshots (as described in Section 4), the storage costs for maintaining training transcripts (and in particular the weight checkpoints, each of which may require terabytes) until the Verifier inspects them, and the compute costs to execute the training-transcript verification protocols on the trusted clusters. These costs seem likely to scale linearly with the total compute used by the Prover, and will ultimately depend on the efficiency with which training transcripts can be verified. Even though governments could in principle pressure Provers into paying the costs of compliance, a 1% overhead for each dollar spent on training compute would be much easier for Provers to comply with than a 20% overhead. Indeed, for International Verifiers, the history of arms control suggests that maximally-stringent verification measures may have limited utility, as they may reduce the likelihood of compliance [46].
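To make the verification stage above more concrete, here is a minimal sketch of the kind of segment recheck a trusted cluster might run, loosely in the spirit of the Proof-of-Learning-style check described earlier: confirm the revealed material matches the Prover's earlier hash commitments, re-run one \(k\)-step segment, and compare endpoints in \(\ell_2\) norm. The serialization, the `train_k_steps` callable, and the tolerance `eps` are assumptions for illustration rather than a finished protocol.

```python
import hashlib
import pickle

import numpy as np

def transcript_hash(obj) -> str:
    """Hash one transcript component (assumes a canonical serialization)."""
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

def verify_segment(W_i, batches, W_i_plus_k, train_k_steps, committed_hashes, eps):
    """Re-run one k-step training segment inside the trusted cluster.

    W_i              : revealed weight checkpoint at step i
    batches          : revealed data batches D_i ... D_{i+k}
    W_i_plus_k       : revealed weight checkpoint at step i+k
    train_k_steps    : callable reproducing the Prover's update rule (from M)
    committed_hashes : hashes of (W_i, batches, W_i_plus_k) reported earlier
    eps              : tolerance for hardware-level reproduction noise
    """
    # 1. The revealed material must match the Prover's earlier hash commitments.
    revealed = [transcript_hash(W_i), transcript_hash(batches), transcript_hash(W_i_plus_k)]
    if revealed != list(committed_hashes):
        return False
    # 2. Recompute the segment and compare endpoints in L2 norm.
    W_reproduced = train_k_steps(W_i, batches)
    return float(np.linalg.norm(W_reproduced - W_i_plus_k)) < eps
```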
One important avenue for future work is finding cheaper, lower-fidelity alternatives to NN-retraining-based verification, which need only establish limited properties of the weight-shard's corresponding training run, and which could prompt more expensive verification methods if needed.

## 6 At the supply chain

We need supply-chain monitoring to accomplish two goals: to construct a "chip directory" of who owns each ML chip, for the purposes of sampling; and to ensure that each chip has the hardware features needed to provably log its training activity as in Section 4. Unlike the chip and data-center interventions (Sections 4 and 5), monitoring the international ML chip supply chain cannot be done by a single Verifier. Instead, an international consortium of governments may need to implement these interventions on behalf of other Verifiers (much as the IAEA runs inspections on behalf of member states).

### Creating a chip-owner directory

For a Verifier to be confident that a Prover is reporting the activity of all the Prover's ML chips, they need to know both which ML chips the Prover owns, and that there are no secret stockpiles of chips beyond the Verifier's knowledge. Such ownership monitoring would represent a natural extension of existing supply chain management practices, such as those used to enforce U.S. export controls on ML chips.

It may be relatively straightforward to reliably determine the total number of cutting-edge ML chips produced worldwide, by monitoring the production lines at high-end chip fabrication facilities. The modern high-end chip fabrication supply chain is extremely concentrated, and as of 2023 there are fewer than two dozen facilities worldwide capable of producing chips at a node size of 14nm or lower [32], the size used for efficient ML training chips. As [4] shows, the high-end chip production process may be monitorable using a similar approach to the oversight of nuclear fuel production (e.g., continuous video monitoring of key machines). As long as each country's new fabs can be detected by other countries (e.g., by monitoring the supply chain of lithography equipment), an international monitoring consortium can require the implementation of verification measures at each fab, to provide assurances for all Verifiers.

After processing, each wafer produced at a fab is then sent onward for dicing and packaging. Since the facilities required for postprocessing wafers are less concentrated, it is important for the wafers (and later the dies) to be securely and verifiably transported at each step. If these chip precursors ever go missing, responsibility for the violation would lie with the most recent holder. This chain of custody continues until the chip reaches its final owner, at which point the chip's unique ID is associated with that owner in a _chip-owner directory_ trusted by all potential Verifiers and Provers. This ownership directory must thereafter be kept up-to-date, e.g., when chips are resold or damaged. The continued accuracy of this registry can be validated as part of the same random sampling procedure discussed in Section 3.1. As a second layer of assurance, chips could also be discovered by inspecting data-centers, if those data-centers are detectable via other signals [4].

Footnote 22: In the rare scenario where a large number of chips owned by the same Prover are lost or destroyed beyond recognition, the Verifier or international consortium can launch an investigation to determine whether the Prover is lying to evade oversight.
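The bookkeeping needed for such a chip-owner directory is simple in principle. The sketch below is a toy, in-memory version intended only to illustrate the data that must be tracked; a real deployment would need authenticated, auditable updates from fabs, distributors, and owners, and the class and method names here are invented.

```python
class ChipOwnerDirectory:
    """Toy in-memory registry mapping chip serial IDs to registered owners."""

    def __init__(self):
        self._owner = {}        # chip_id -> current owner
        self._history = []      # append-only event log for later audits

    def register_production(self, chip_id: str, first_owner: str) -> None:
        assert chip_id not in self._owner, "chip IDs must be unique"
        self._owner[chip_id] = first_owner
        self._history.append(("produced", chip_id, first_owner))

    def record_transfer(self, chip_id: str, seller: str, buyer: str) -> None:
        assert self._owner.get(chip_id) == seller, "seller is not the registered owner"
        self._owner[chip_id] = buyer
        self._history.append(("transferred", chip_id, seller, buyer))

    def chips_owned_by(self, owner: str) -> list:
        """The population from which a Verifier draws its random inspection sample."""
        return [cid for cid, own in self._owner.items() if own == owner]
```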
Given the high prices and large power and cooling requirements of these ML chips, they are largely purchased by data-center operators. These organizations are well-suited to tracking and reporting transfers of their ML chips, and complying with occasional inspections. A small fraction of data-center ML chip purchases are made by individuals; so long as these remain a small share of all chips, they may be exempted from the overall monitoring framework.

### Trusting secure hardware

We require in Section 4 that each ML chip produced by the semiconductor supply chain is able to provably log traces of its usage. The second goal of supply-chain monitoring is to provide Verifiers with high confidence in the reliability of these on-chip activity-logging mechanisms. This requires ML chip designers to integrate security features into their hardware and firmware designs, especially in ways that make them externally legible to Verifiers that may not trust the chip-designer. Key priorities include the immutability of the chip's burned-in ID, the integrity of the hardware-backed mechanism for only booting signed firmware, and the resilience of the on-chip hardware roots-of-trust to side-channel attacks that could steal the chip's encryption keys [27, 9] and thus fake its logs.

A concern for Verifiers checking the conduct of powerful Provers (e.g., states verifying each other's ML training runs) is the possibility of supply-chain attacks [48], which could enable a Prover to undetectably disable or spoof the ML chips' logging functionality. Fully mitigating the threat of supply-chain attacks is a major global issue and beyond the scope of this paper. However, one particularly useful step for building trust in the integrity of ML chip mechanisms would be for ML chip designers to use open-source Hardware Roots-of-Trust (HRoTs). This transparency means that chips' designs can be validated by untrusting Verifiers to confirm there are no backdoors. For example, Google's Project OpenTitan has produced such an HRoT [31], and many major ML chip designers (Google, Microsoft, NVIDIA, and AMD) have agreed to integrate the Open Compute Project's "Caliptra" Root of Trust [45].

## 7 Discussion

The described additions to the production and operation of ML training chips, if successfully implemented, would enable untrusting parties (like a government and its domestic companies, or the US and Chinese governments) to verify rules and commitments on advanced ML development using these chips. There are many useful measures that governments and companies could begin taking today to enable future implementation of such a framework if it proved necessary, and that would simultaneously further businesses' and regulators' other objectives.

* Chipmakers can include improved hardware security features in their data-center ML chips, as many of these are already hardware security best practices (and may already be present in some ML chips [42]). These features are likely to be independently in-demand as the costs of model training increase, and the risk of model theft becomes a major consideration for companies or governments debating whether to train an expensive model that might simply be stolen.
* Similarly, many of the security measures required for this system (firmware and code attestation, encryption/decryption modules, verification of produced models without disclosing training code) would also be useful for "cloud ML training providers", who wish to prove to security-conscious clients that the clients' data did not leave the chips, and that the clients' models did not have backdoors inserted by a third party [34]. Procurement programs like the US's FedRAMP could encourage such standards for government contracts, and thereby incentivize cloud providers and chipmakers to build out technical infrastructure that could later be repurposed for oversight. * Individual companies and governments can publicly commit to rules on ML development that they would like to abide by, if only they could have confidence that their competitors would follow suit. * Responsible companies can log and publicly disclose (hashed) training transcripts for their large training runs, and assist other companies in verifying these transcripts using simple heuristics. This would not prove the companies _hadn't also_ trained undisclosed models, but the process would prove technical feasibility and create momentum around an industry standard for (secure) training run disclosure. * Companies and governments can build trusted neutral clusters of the sort described in Section 5.2. These would be useful for many other regulatory priorities, such as enabling third-party auditors to analyze companies' models without leaking the model weights.23 Footnote 23: For similar reasons, the US Census Bureau operates secured Federal Statistical Research Data Centers to securely provide researchers access to sensitive data [8]. * Governments can improve tracking of ML chip flows via supply-chain monitoring, to identify end-users who own significant quantities of ML chips. In the West, such supply-chain oversight is already likely to be a necessary measure for enforcing US-allied export controls. * Responsible companies can work with nonprofits and government bodies to practice the physical inspection of ML chips in datacenters. This could help stakeholders create best practices for inspections and gain experience implementing them, while improving estimates of implementation costs. * Researchers can investigate more efficient and robust methods for detecting spoofed training transcripts, which may be useful in proving that no backdoors were inserted into ML models. For the hardware interventions, the sooner such measures are put in place, the more ML chips they can apply to, and the more useful any verification framework will be. Starting on these measures early will also allow more cycles to catch any security vulnerabilities in the software and hardware, which often require multiple iterations to get right. ### Politics of Implementation Given the substantial complexity and cost of a monitoring and verification regime for large-scale ML training runs, it will only become a reality if it benefits the key stakeholders required to implement it. In this last section, we discuss the benefits of this proposal among each of the required stakeholders. * _The global public_: Ordinary citizens should worry about the concentration of power associated with private companies possessing large quantities of ML chips, without any meaningful oversight by the public. Training run monitoring is a way to make powerful companies' advanced ML development accountable to the public, and not just the free market. 
Most importantly, ordinary people benefit from the security and stability enabled by laws and agreements that limit the most harmful applications of large-scale ML systems. * _Chipmakers and cloud providers_: Absent mechanisms for verifying whether ML chips are used for rule-violating training runs, governments may increasingly resort to banning the sale of chips (or even cloud-computing access to those chips) to untrusted actors [5]. By enabling provable monitoring of large-scale ML training runs, chipmakers may reverse this trend and may even be able to resume sales to affected markets. * _AI companies_: Responsible AI companies may themselves prefer not to develop a particular capability into their products, but may feel they have no choice due to competitive pressure exerted by less-scrupulous rivals. Verifying training runs would allow responsible AI companies to be recognized for the limits they impose on themselves, and would facilitate industry-wide enforcement of best practices on responsible ML development. * _Governments and militaries_: Governments' and militaries' overarching objective is to ensure the security and prosperity of their country. The inability to coordinate with rivals on limits to the development of highly-capable ML systems is a threat to their own national security. There would be massive benefit to a system that enabled (even a subset of) countries to verify each other's adherence to ML training agreements, and thus to maintain an equilibrium of responsible ML development. Even if only a subset of responsible companies and governments comply with the framework, they still benefit from verifiably demonstrating their compliance with self-imposed rules by increasing their rivals' and allies' confidence in their behavior [22] (and thus reducing their rivals' uncertainty and incentive towards recklessness). Finally, we highlight that the discussed verification framework requires continuous participation and consent by the Prover. This makes the framework fundamentally non-coercive, and respects national sovereignty much as nuclear nonproliferation and arms control agreements respect national sovereignty. Indeed, the ongoing success of such a system relies on all parties' self-interest in continuing to live in a world where no one - neither they, nor their rivals - violates agreed guardrails on advanced ML development. ## Acknowledgements The author would like to thank Tim Fist, Miles Brundage, William Moses, Gabriel Kaptchuk, Cynthia Dwork, Lennart Heim, Shahar Avin, Mauricio Baker, Jacob Austin, Lucy Lim, Andy Jones, Cullen O'Keefe, Helen Toner, Julian Hazell, Richard Ngo, Jade Leung, Jess Whittestone, Ariel Procaccia, Jordan Schneider, and Rachel Cummings Shavit for their helpful feedback and advice in the writing of this work.
2302.06514
Multiple Appropriate Facial Reaction Generation in Dyadic Interaction Settings: What, Why and How?
According to the Stimulus Organism Response (SOR) theory, all human behavioral reactions are stimulated by context, where people will process the received stimulus and produce an appropriate reaction. This implies that in a specific context for a given input stimulus, a person can react differently according to their internal state and other contextual factors. Analogously, in dyadic interactions, humans communicate using verbal and nonverbal cues, where a broad spectrum of listeners' non-verbal reactions might be appropriate for responding to a specific speaker behaviour. There already exists a body of work that investigated the problem of automatically generating an appropriate reaction for a given input. However, none attempted to automatically generate multiple appropriate reactions in the context of dyadic interactions and evaluate the appropriateness of those reactions using objective measures. This paper starts by defining the facial Multiple Appropriate Reaction Generation (fMARG) task for the first time in the literature and proposes a new set of objective evaluation metrics to evaluate the appropriateness of the generated reactions. The paper subsequently introduces a framework to predict, generate, and evaluate multiple appropriate facial reactions.
Siyang Song, Micol Spitale, Yiming Luo, Batuhan Bal, Hatice Gunes
2023-02-13T16:49:27Z
http://arxiv.org/abs/2302.06514v4
# Multiple Appropriate Facial Reaction Generation in Dyadic Interaction Settings: What, Why and How? ###### Abstract According to the Stimulus Organism Response (SOR) theory, all human behavioral reactions are stimulated by context, where people will process the received stimulus and produce an _appropriate_ reaction. This implies that in a specific context for a given input stimulus, a person can react differently according to their internal state and other contextual factors. Analogously, in dyadic interactions, humans communicate using verbal and nonverbal cues, where a broad spectrum of listeners' nonverbal reactions might be _appropriate_ for responding to a specific speaker behaviour. There already exists a body of work that investigated the problem of automatically generating an appropriate reaction for a given input. However, none attempted to automatically generate multiple appropriate reactions in the context of dyadic interactions and evaluate the appropriateness of those reactions using objective measures. This paper starts by defining the facial Multiple Appropriate Reaction Generation (fMARG) task for the first time in the literature and proposes a new set of objective evaluation metrics to evaluate the appropriateness of the generated reactions. The paper subsequently introduces a framework to predict, generate, and evaluate multiple appropriate facial reactions. Dyadic interaction, Multiple appropriate reactions generation, Facial expression, Action Units, Deep learning ## 1 Introduction According to the Stimulus Organism Response (SOR) model - proposed by Mehrabian and Russel (2013) - all behavioural responses or psychological changes in people are stimulated by their environment (or context), and people will inductively process the stimulus and modify their interactions to produce an appropriate response (Han et al., 2018; Soh et al., 2020), specific to each individual. The SOR model explains the relationship between stimuli (e.g., a speaker behavior) that have an impact on organisms' internal states (e.g., listener cognitive and affective states) and the reaction that people generate to the stimuli (e.g., listener non-verbal behavior). Analogously, in dyadic interactions when humans communicate with each other using verbal and non-verbal cues, for a given input stimulus (i.e., from the speaker), a broad spectrum of responses (verbal) and reactions (non-verbal) might be _appropriate_ for an individual (i.e., listener) to generate according to their internal state. For example (see Figure 1), during a conversation between an employee (Listener 1) and an employer (Speaker 1) who displays a specific behavior (or input stimuli, \(b(S_{1})^{t}\)) to compliment their employee in a given context (\(C_{1}\)), the employee (Listener 1) might respond and react with much excitement (\(f_{1}\)). If the same information was communicated with the same behavior (\(b(S_{1})^{t}\)) but in a different context (\(C_{m}\), where \((b(S_{1})^{t})_{1}=(b(S_{1})^{t})_{m}\)), Listener 1's response and reaction (\(f_{1}\)) could differ significantly (e.g., with less excitement) depending on their contextual circumstances (e.g., having a particularly bad day), resulting in \((f_{1})_{1}\neq(f_{1})_{m}\). Analogously, another employee (Listener 2) would respond and react differently (\(f_{2}\), e.g., with more excitement) than Listener 1 to the same stimuli produced by Speaker 1, resulting in \(f_{1}\neq f_{2}\). 
As this example illustrates and the SOR theory suggests [13], multiple responses and reactions are appropriate for a given input. Recent years have witnessed an increasing number of studies targeting human-human dyadic interaction analysis, thanks to the wide application scenarios - such as, among others, surveillance, transportation, health, information security, and intelligent human agent interaction - and the advancement of pattern recognition, cognitive science, and neural networks [17]. Past works [7; 20; 22; 23; 26] have investigated the problem of automatically generating an appropriate response or reaction for a given input. Most of those studies focused on the generation of appropriate responses (e.g., using chat-bots [22]) without considering the non-verbal reactions that enrich the message conveyed. Very few explored the generation of appropriate reactions via non-verbal behaviors [26], limiting their evaluation to a _single appropriate_ generated reaction - specifically hand gestures - via _subjective measures_. As previously discussed, multiple reactions can be appropriate for the same context [13]. However, none of the existing works attempted to automatically generate _multiple appropriate reactions_ in dyadic interaction settings and evaluate the appropriateness of those reactions using objective measures. This paper defines the **Multiple Appropriate Reaction Generation** (MARG) task and proposes a new set of objective evaluation metrics to evaluate the appropriateness of the generated reactions for the first time in the literature. As a first step towards addressing the MARG research problem, it introduces a framework to predict, generate, and evaluate multiple appropriate _facial_ reactions in the same context. The facial MARG (fMARG) task specifically is a very challenging problem that has not been investigated yet due to multiple open research questions. First, compared to standard machine learning and facial behavior analysis tasks [11], where each input only corresponds to one solution (e.g., a specific combination of action units corresponding to the emotion label of joy/happiness, a 1-to-1 problem), the fMARG task is more difficult as it is a 1-to-N problem where each input stimulus may correspond to multiple appropriate facial reactions (research challenge 1, **RC1**). Second, most of the past works [8] explored the generation of an appropriate facial reaction by proposing frameworks that only generate a single appropriate reaction. To the best of our knowledge, a framework to simulate and analyze the fMARG problem has not been defined yet, given the complexity and uncertainty of generating multiple appropriate facial reactions (research challenge 2, **RC2**). Figure 1: Multiple appropriate reaction generation in dyadic interaction settings. The same input stimuli (\((b(S_{1})^{t})_{1}=(b(S_{1})^{t})_{m}\)) under different contexts (\(C_{1}\neq C_{m}\)) of the speaker \(S_{1}\) can elicit different reactions for the same listener (\((f_{1})_{1}\neq(f_{1})_{m}\)) and also for different listeners (\(f_{1}\neq f_{2}\neq f_{3}\)). Third, the currently available human-human dyadic interaction datasets [3; 15; 19] are designed to address various facial analysis tasks (e.g., facial affect recognition, personality recognition, etc.), but they are not labeled to provide a ground truth on appropriate facial reactions. 
Lastly, previous studies [8] measured and evaluated the appropriateness of generated reactions using subjective measures (i.e., running a user study, where participants evaluate the appropriateness of a specific generated reaction), which limits the reproducibility of the results. Currently, there is a lack of objective measures to evaluate the MARG tasks (research challenge 3, **RC3**). This paper presents the first framework to predict, evaluate, and generate multiple appropriate facial reactions to address the above-mentioned open challenges. We present a model for generating multiple appropriate facial reactions in a specific context given an input (solution **S1**, addressing RC1). We create and design a theoretical framework to ground the modelling of facial MARG (**S2**, addressing RC2). Finally, we define a set of objective measures to evaluate the fMARG problem and to gain insights on how the proposed solutions can be improved (**S3**, addressing RC3). ## 2 Hypotheses and task definition This section formulates the hypotheses and formally defines the facial Multiple Appropriate Reactions Generation (fMARG) task. ### Hypotheses **Hypothesis 1:** According to the SOR theory [16; 28] and given the fuzzy nature of the fMARG problem, the same/similar behaviour expressed by a speaker could trigger different facial reactions expressed not only by various subjects but also by the same subject under different contexts. Specifically, given a spatio-temporal behaviour \(b_{S_{n}}^{t_{1},t_{2}}\) expressed by a speaker \(S_{n}\) at the time \([t_{1},t_{2}]\), a set of (multiple) appropriate facial reactions \(F_{L}(b_{S_{n}}^{t_{1},t_{2}})\) could be expressed by different listeners, which can be represented as: \[F_{L}(b_{S_{n}}^{t_{1},t_{2}})=\left[\begin{array}{ccc}f_{L_{1}}(b_{S_{n}}^{t_{1},t_{2}})_{1}&\cdots&f_{L_{1}}(b_{S_{n}}^{t_{1},t_{2}})_{I_{1}}\\ f_{L_{2}}(b_{S_{n}}^{t_{1},t_{2}})_{1}&\cdots&f_{L_{2}}(b_{S_{n}}^{t_{1},t_{2}})_{I_{2}}\\ \cdots&\cdots&\cdots\\ f_{L_{N}}(b_{S_{n}}^{t_{1},t_{2}})_{1}&\cdots&f_{L_{N}}(b_{S_{n}}^{t_{1},t_{2}})_{I_{N}}\\ \end{array}\right] \tag{1}\] where \(f_{L_{\eta}}(b_{S_{n}}^{t_{1},t_{2}})_{i}\) (\(i=1,2,\cdots,I_{\eta}\)) denotes the appropriate facial reaction that could be expressed by the \(\eta_{th}\) listener in response to \(b_{S_{n}}^{t_{1},t_{2}}\) under the \(i_{th}\) context. Here, the number of possible appropriate facial reactions expressed by different listeners may not be the same (i.e., \(I_{1}\neq I_{2}\cdots\neq I_{N}\)). More importantly, the spatio-temporal patterns of these appropriate facial reactions are not guaranteed to be similar (illustrated in Figure 1). **Hypothesis 2:** Listeners may express similar facial reactions in response to different speaker behaviours [4]. For example, listeners may display similar positive facial expressions when a speaker tells either a joke or some good news. This means the \(n_{th}\) listener's real facial reaction \(f_{L_{n}}(b_{S_{n}}^{t_{1},t_{2}})\) triggered by \(b_{S_{n}}^{t_{1},t_{2}}\) can also be an appropriate facial reaction in response to other speaker behaviours (e.g., the behaviour \(b_{S_{m}}^{t_{3},t_{4}}\) expressed by the \(m_{th}\) speaker at the period \([t_{3},t_{4}]\)). 
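As a purely illustrative aside, the ragged structure of Eq. (1) - a possibly different number of appropriate reactions per listener - could be held in code as a mapping from listeners to variable-length lists of reaction sequences; the array shapes and names below are assumptions for illustration, not part of the formal definition.

```python
import numpy as np

# Hypothetical container for F_L(b) in Eq. (1): each listener maps to a
# variable-length list of appropriate facial reactions, each stored as a
# (num_frames, num_channels) array of frame-level facial attributes.
appropriate_reactions: dict[str, list[np.ndarray]] = {
    "listener_1": [np.zeros((120, 25)), np.zeros((98, 25))],  # I_1 = 2 contexts
    "listener_2": [np.zeros((120, 25))],                      # I_2 = 1 context
}

def flatten_reaction_set(reactions: dict[str, list[np.ndarray]]) -> list[np.ndarray]:
    """Pool every appropriate reaction across listeners, mirroring how F_L(b)
    is later used as a single set in the evaluation protocol."""
    return [r for per_listener in reactions.values() for r in per_listener]
```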
### fMARG task definition: Given the spatio-temporal behaviours \(b_{S_{n}}^{t_{1},t_{2}}\) expressed by the speaker \(S_{n}\) at the period \([t_{1},t_{2}]\), we define two types of fMARG tasks as follows: * **(i) Offline fMARG task:** this task aims to learn an ML model \(\mathcal{H}\) that takes the entire speaker behaviour sequence \(b_{S_{n}}^{t_{1},t_{2}}\) as the input, and generates multiple appropriate spatio-temporal listener facial reaction sequences \(P_{f}(b_{S_{n}}^{t_{1},t_{2}})=\{p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{1},\cdots,p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{M}\}\) as: \[\begin{split}& P_{f}(b_{S_{n}}^{t_{1},t_{2}})=\mathcal{H}(b_{S_{n}}^{t_{1},t_{2}})\\ & p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{1}\neq p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{2}\neq\cdots\neq p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{M}\\ & p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{i}\sim f_{L_{\eta}}(b_{S_{n}}^{t_{1},t_{2}})_{j}\in F_{L}(b_{S_{n}}^{t_{1},t_{2}})\end{split}\] (2) where each \(p_{f}(b_{S_{n}}^{t_{1},t_{2}})_{i}\in P_{f}(b_{S_{n}}^{t_{1},t_{2}})\) represents a generated facial reaction in response to \(b_{S_{n}}^{t_{1},t_{2}}\), which should be similar to at least one of the appropriate real facial reactions in \(F_{L}(b_{S_{n}}^{t_{1},t_{2}})\) (defined in Eq. 1) expressed by human listeners. * **(ii) Online fMARG task:** this task aims to learn an ML model \(\mathcal{H}\) that estimates each appropriate facial reaction frame (i.e., the \(\gamma_{\text{th}}\) frame, \(\gamma\in[t_{1},t_{2}]\)) by only considering the \(\gamma_{\text{th}}\) frame and its previous frames expressed by the corresponding speaker (i.e., \(b^{t_{1},\gamma}_{S_{n}}\)), rather than taking all \(t_{1}\)th to \(t_{2}\)th frames into consideration. This can be formulated as: \[p_{f}(b^{t_{1},t_{2}}_{S_{n}})^{\gamma}_{i}=\mathcal{H}(b^{t_{1},\gamma}_{S_{n}})\] (3) where \(p_{f}(b^{t_{1},t_{2}}_{S_{n}})^{\gamma}_{i}\) denotes the \(\gamma_{\text{th}}\) predicted facial reaction frame of the \(i_{\text{th}}\) generated appropriate facial reaction in response to \(b^{t_{1},t_{2}}_{S_{n}}\); and \(b^{t_{1},\gamma}_{S_{n}}\) denotes the speaker behaviour segment at the period \([t_{1},\gamma]\). In summary, this task aims to gradually generate all facial reaction frames to form multiple appropriate spatio-temporal facial reactions as defined in Eq. 2. ## 3 Evaluation protocol As discussed in Sec. 2, similar speaker behaviours may trigger listeners to express different facial reactions. Thus, given a speaker behaviour \(b^{t_{1},t_{2}}_{S_{n}}\), instead of only assessing the similarity between the generated facial reaction \(p_{f}(b^{t_{1},t_{2}}_{S_{n}})\) and the corresponding ground-truth (GT) real facial reaction \(f^{t_{1},t_{2}}_{L_{n}}\), we propose a set of objective evaluation metrics to evaluate the _appropriateness_, _diversity_, _realism_, and _synchrony_ of the generated facial reactions \(P_{f}(b^{t_{1},t_{2}}_{S_{n}})\). To the best of our knowledge, this paper proposes the first set of objective evaluation metrics that assesses the appropriateness of the generated facial reactions. ### Automatic appropriate facial reaction labelling strategy Since there is no gold standard for evaluating whether a facial reaction is appropriate in response to a speaker behaviour, and manual labelling would be labour-intensive and subjective, we first propose an objective automatic fMARG labelling strategy. 
Given \(M\) spatio-temporal behaviours \(B(S)=\{b^{1},b^{2},\cdots,b^{M}\}\) expressed by a set of speakers, and the \(M\) GT real facial reactions \(F(L)=\{f^{1},f^{2},\cdots,f^{M}\}\) expressed by the corresponding listeners, we propose to obtain all appropriate real facial reactions \(F_{L}(b^{m})=\{f^{m_{1}}_{L},f^{m_{2}}_{L},\cdots,f^{m_{I_{m}}}_{L}\}\subseteq F(L)\) in response to each speaker behaviour \(b^{m}\in B(S)\) as follows: * **Step 1:** We compute the similarities between \(b^{m}\) and all speaker behaviours (including \(b^{m}\) itself) in \(B(S)\), resulting in \(M\) similarity scores \(\text{SIM}(b^{m},b^{1}),\text{SIM}(b^{m},b^{2}),\cdots,\text{SIM}(b^{m},b^{M})\). * **Step 2:** We choose the speaker behaviours \(B(S|b^{m})=\{b^{m_{1}},b^{m_{2}},\cdots,b^{m_{I_{m}}}\}\subseteq B(S)\) as the _similar speaker behaviours_ of \(b^{m}\), which are defined by: \[b^{\eta}\in B(S|b^{m})\quad\text{subject to}\quad\text{SIM}(b^{\eta},b^{m})>\mathcal{T}\] (4) where \(\eta=1,2,\cdots,M\) and \(\mathcal{T}\) is a threshold to decide whether the speaker behaviour \(b^{\eta}\) is similar to \(b^{m}\). * **Step 3:** We define the real facial reactions corresponding to all similar speaker behaviours of \(b^{m}\) as its appropriate facial reactions: \[F_{L}(b^{m})=\{f^{\eta}\ |\ b^{\eta}\in B(S|b^{m})\}\] (5) To compute the similarity function SIM, we first represent each speaker behaviour clip at the frame level using predicted facial attributes, including the occurrence of facial action units (AUs), facial affects - valence and arousal intensities - and the probabilities of eight categorical facial expressions (i.e., Neutral, Happy, Sad, Surprise, Fear, Disgust, Anger and Contempt), to represent frame-level human facial display. Specifically, all AUs' occurrences are predicted by the state-of-the-art GraphAU model [12; 24], while facial affects and facial expression probabilities are predicted by [25]. We also apply OpenSmile [5] to extract clip-level audio descriptors, including GEMAP and MFCC features. Consequently, each speaker behaviour is represented by a multi-channel audio-visual time-series behavioural signal obtained by concatenating all frame-level descriptors. Subsequently, we apply elastic similarity measurement [21] (i.e., Dynamic Time Warping for multi-channel time-series) to compute the similarity between each pair of multi-channel time-series speaker behaviours, i.e., the function SIM is defined as: \[\text{SIM}(b^{m},b^{\eta})=1-\frac{\text{DTW}(b^{m},b^{\eta})}{\text{Max}_{\text{DTW}}} \tag{6}\] where \(\text{DTW}(b^{m},b^{\eta})\) denotes the DTW distance between the two multi-channel time-series representations of speaker behaviours \(b^{m}\) and \(b^{\eta}\); and \(\text{Max}_{\text{DTW}}\) denotes the maximum DTW distance of all speaker behaviour pairs in the whole dataset. ### Evaluation metrics Given a well-trained ML model, we define \(P_{f}^{1},P_{f}^{2},\cdots,P_{f}^{M}\) as the \(M\) sets of facial reactions generated based on the \(M\) input speaker behaviours \(b^{1},b^{2},\cdots,b^{M}\), where \(P_{f}^{m}=\{p_{f}^{m_{1}}\neq p_{f}^{m_{2}}\neq\cdots\neq p_{f}^{m_{\alpha}}\}\) (i.e., we assume that the well-developed model can generate \(\alpha\) different facial reactions in response to each speaker behaviour), and each input speaker behaviour \(b^{m}\) corresponds to a set of appropriate real facial reactions \(F(L|b^{m})\). Then, we propose the following evaluation metrics to measure the performance of the developed model (i.e., the appropriateness, diversity, realism and synchrony of the generated facial reactions). In this paper, all speaker behaviours, real facial reactions and generated facial reactions are represented by multi-channel time-series facial attribute signals, as explained in Sec. 3.1. 
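A minimal sketch of this labelling strategy (and of the similarity function of Eq. 6) is given below; it assumes each behaviour clip has already been encoded as a (frames × channels) attribute array as described above, substitutes a plain multivariate DTW routine for the specific elastic-similarity implementation of [21], and leaves the threshold \(\mathcal{T}\) as a free parameter.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Plain multivariate DTW between two (frames, channels) arrays,
    using the Euclidean distance as the frame-level cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def label_appropriate_reactions(speaker_behaviours, threshold):
    """speaker_behaviours: list of (frames, channels) arrays, index-aligned with
    the GT listener reactions. Returns, for each behaviour b^m, the indices of
    the real reactions deemed appropriate (Steps 1-3 above)."""
    M = len(speaker_behaviours)
    dtw = np.array([[dtw_distance(speaker_behaviours[i], speaker_behaviours[j])
                     for j in range(M)] for i in range(M)])
    sim = 1.0 - dtw / max(dtw.max(), 1e-8)           # Eq. (6): SIM = 1 - DTW / Max_DTW
    appropriate = []
    for m in range(M):
        similar = np.where(sim[m] > threshold)[0]    # Step 2: similar speaker behaviours
        appropriate.append(similar.tolist())         # Step 3: their GT reactions label b^m
    return appropriate
```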
Figure 2: Illustration of the proposed automatic appropriate facial reaction labelling strategy (**Sec. 3.1**). It first represents each audio-facial speaker behaviour clip as a multi-channel time-series signal. Then, we compute the similarity between each pair of speaker behaviour time-series (**Step 1**). After that, we define a set of similar speaker behaviours for each speaker behaviour (**Step 2**). Finally, for each speaker behaviour, we define the real facial reactions corresponding to all of its similar speaker behaviours as its appropriate facial reactions (**Step 3**). #### 3.2.1 Appropriateness metrics We first propose three metrics for evaluating the **appropriateness** of the generated facial reactions, including: (1) the DTW distance between the generated facial reaction and its most similar appropriate real facial reaction; (2) the Concordance Correlation Coefficient (CCC) between the generated facial reaction and its most similar appropriate facial reaction; and (3) the appropriate facial reaction prediction accuracy. Since DTW has been widely used to measure the similarity between two temporal sequences that may vary in speed [9; 18], while the speeds for expressing similar facial behaviours may vary across different subjects due to person-specific factors (e.g., age [1]), we propose to apply this metric to measure the similarity between each generated facial reaction sequence and the real facial reaction sequence in our tasks. Meanwhile, the CCC has been frequently used to evaluate the correlation between a predicted human behaviour sequence and the ground-truth sequence (e.g., in dimensional affect recognition [19; 27]), and thus we also employ it as a key metric: * **(1) Facial reaction distance (FRDist):** we compute the DTW distance between each generated facial reaction and its most similar appropriate real facial reaction as: \[d_{m}=\sum_{i=1}^{\alpha}\min_{f^{\eta}\in F(L|b^{m})}\text{DTW}(p_{i}^{m},f^{\eta})\] (7) where the minimising \(f^{\eta}\in F(L|b^{m})\) is the most similar appropriate real facial reaction to the generated \(p_{i}^{m}\), and \(d_{m}\) denotes the sum of the distances corresponding to all facial reactions generated in response to \(b^{m}\). The final distance score for evaluation is obtained by averaging all obtained DTW distances: \[\text{FRDist}=\frac{\sum_{m=1}^{M}d_{m}}{M}\] (8) * **(2) Facial reaction correlation (FRCorr):** we compute the correlation between each generated facial reaction and its most similar appropriate real facial reaction as: \[c_{m}=\sum_{i=1}^{\alpha}\max_{f^{\eta}\in F(L|b^{m})}\text{CCC}(p_{i}^{m},f^{\eta})\] (9) where CCC denotes the Concordance Correlation Coefficient. The final correlation score for evaluation is obtained by averaging all obtained CCC values as: \[\text{FRCorr}=\frac{\sum_{m=1}^{M}c_{m}}{M}\] (10) * **(3) Appropriate facial reaction prediction accuracy (ACC):** \[\text{ACC}=\frac{\sum_{m=1}^{M}\sum_{i=1}^{\alpha}\mathbb{1}\{p_{i}^{m}\in F(L|b^{m})\}}{M\times\alpha}\] (11) Here, \(p_{i}^{m}\in F(L|b^{m})\) is conditioned on: \[\text{SIM}(p_{i}^{m},f^{\eta})>\mathcal{T},\quad f^{\eta}\in F(L|b^{m})\] (12) where \(\mathcal{T}\) is also employed as the threshold that decides whether the generated facial reaction \(p_{i}^{m}\) is an appropriate facial reaction in response to \(b^{m}\). Note that \(p_{i}^{m}\)'s most similar facial reaction in the entire dataset may not belong to \(F(L|b^{m})\); in other words, we only evaluate the appropriateness by considering whether \(p_{i}^{m}\) is similar to one of the appropriate real facial reactions defined by \(F(L|b^{m})\). 
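The three appropriateness metrics can be sketched as follows; the snippet reuses the DTW routine and the appropriate-reaction indices from the labelling sketch above, uses a standard CCC formula averaged over channels (naively truncating sequences to a common length), and is an illustration of the definitions rather than the authors' reference implementation.

```python
import numpy as np

# Reuses dtw_distance() from the labelling sketch above.

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance Correlation Coefficient between two equal-length 1-D series."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (x.var() + y.var() + (mx - my) ** 2 + 1e-8))

def multichannel_ccc(p: np.ndarray, f: np.ndarray) -> float:
    """Average per-channel CCC over a common truncated length (sketch convention)."""
    L = min(len(p), len(f))
    return float(np.mean([ccc(p[:L, ch], f[:L, ch]) for ch in range(p.shape[1])]))

def appropriateness_metrics(generated, appropriate_sets, real_reactions, sim_fn, threshold):
    """generated[m]: the alpha reactions produced for behaviour b^m, each (frames, channels);
    appropriate_sets[m]: indices of the appropriate real reactions F(L|b^m);
    real_reactions[k]: GT reaction of the k-th listener; sim_fn: similarity in [0, 1]."""
    dists, corrs, hits, total = [], [], 0, 0
    for m, preds in enumerate(generated):
        F_m = [real_reactions[k] for k in appropriate_sets[m]]
        d_m = sum(min(dtw_distance(p, f) for f in F_m) for p in preds)            # Eq. (7)
        c_m = sum(max(multichannel_ccc(p, f) for f in F_m) for p in preds)        # Eq. (9)
        hits += sum(any(sim_fn(p, f) > threshold for f in F_m) for p in preds)    # Eq. (11)-(12)
        total += len(preds)
        dists.append(d_m)
        corrs.append(c_m)
    return np.mean(dists), np.mean(corrs), hits / total   # FRDist, FRCorr, ACC
```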
#### 3.2.2 Diversity, Realism and Synchrony metrics We expect a well-developed model to generate multiple different and photo-realistic appropriate facial reactions from each input speaker behaviour, as human listeners can express different reactions in response to the same speaker behaviour under different situations. Subsequently, we follow [14] to compute variation among all frames for evaluating the variance of each generated facial reaction. We also propose a metric called sum of Mean Square Error (S-MSE) to evaluate the diverseness among multiple generated facial reactions in response to the same speaker behaviour. In addition, an inter-condition diversity metric is also introduced to evaluate the diversity of all generated facial reactions in response to different speaker behaviours. These **Diversity** metrics are explained as follows: * **(1) Facial reaction variance (FRVar):** this aims to evaluate the variance of each generated facial reaction, which is obtained by computing the variation across all of its frames. The final facial reaction diversity is obtained by averaging the variance values of all generated facial reactions: \[\text{FRVar}=\frac{\sum_{m=1}^{M}\sum_{i=1}^{\alpha}\text{var}(p_{i}^{m})}{M\times\alpha}\] (13) * **(2) Diverseness among generated facial reactions (FRDiv):** we evaluate the model's capability in generating multiple different facial reactions by calculating the sum of the MSE among every pair of the generated facial reactions in response to each input speaker behaviour as: \[\text{S-MSE}(m)=\sum_{i=1}^{\alpha-1}\sum_{j=i+1}^{\alpha}(p_{i}^{m}-p_{j}^{m})^{2}\] (14) The final diverseness score is obtained by averaging the S-MSE scores produced from all input speaker behaviours as: \[\text{FRDiv}=\frac{\sum_{m=1}^{M}\text{S-MSE}(m)}{M}\] (15) * **(3) Diversity among facial reactions generated from different speaker behaviours (FRDvs):** we finally evaluate the diversity of the generated facial reactions in response to different speaker behaviours as: \[\text{FRDvs}=\frac{\sum_{i=1}^{\alpha}\sum_{m=1}^{M-1}\sum_{a=m+1}^{M}\text{MSE}(p_{i}^{m},p_{i}^{a})}{\alpha M(M-1)}\] (16) Then, we employ the Frechet Inception Distance (FID) [6] to evaluate the **Realism** of the generated facial reactions, as it has been frequently employed for measuring the realism of generated human facial and body behaviours in previous studies [10; 14]: * **Facial reaction realism (FRRea):** the realism score is obtained by computing the Frechet Inception Distance (FID) between the distribution \(\text{Dis}(P_{f}^{m})\) of the generated facial reactions and the distribution \(\text{Dis}(F(L|b^{m}))\) of the corresponding appropriate real facial reactions as: \[\text{FRRea}=\text{FID}(\text{Dis}(P_{f}^{m}),\text{Dis}(F(L|b^{m})))\] (17) Finally, we compute the Time Lagged Cross Correlation (TLCC) to evaluate the synchrony between the input speaker behaviour and the corresponding generated facial reaction behaviour, as this metric has been frequently used to evaluate the leader-follower relationship [2; 14]. 
* **Synchrony (FRSyn):** we first compute TLCC scores between the input speaker behaviour and each of its generated facial reactions as: \[\text{Synchrony}(m)=\sum_{i=1}^{\alpha}\text{TLCC}(p_{i}^{m},f_{S}^{m})\] (18) where \(f_{S}^{m}\) denotes the multi-channel facial attribute time-series of the speaker behaviour \(b^{m}\). Then, the final synchrony score is obtained by averaging the synchrony scores obtained from all pairs of input speaker behaviours and generated facial reactions: \[\text{FRSyn}=\frac{\sum_{m=1}^{M}\text{Synchrony}(m)}{M}\] (19) ## 4 Conclusion In this paper, we define a new affective computing research direction: Multiple Appropriate Facial Reaction Generation. We specifically present its underlying theory and hypotheses, task definition, and automatic appropriateness labelling strategy, as well as a set of objective evaluation metrics.
2308.13624
Field Testing of Residential Bidirectional Electric Vehicle Charger for Power System Applications
Bidirectional electric vehicle (EV) charging is a technology that is gaining rapid popularity due to its ability to provide economic and environmental benefits to both EV owners and power system operators (PSOs). Using the EV as a flexible source of energy, an EV owner can provide power to homes/buildings, or even participate in grid services such as demand response and frequency regulation. However, there is a lack of real-world testing and validation for bidirectional charging technology, particularly in the residential segment. As such, this paper presents real-world field testing of a bidirectional EV charger deployed in a home. Control software is developed to dispatch the EV according to static setpoints, as well as automated load following, and its accuracy and responsiveness are reported on. The results of the testing with the charger and 2019 Nissan Leaf combination indicate a responsiveness of 6-8 seconds and accuracy of over 99%, which suggests feasible participation for applications such as load following, arbitrage, and demand response.
Shivam Saxena, Hany Farag, Khunsha Nasr, Leigh St. Hilaire
2023-08-25T18:41:01Z
http://arxiv.org/abs/2308.13624v1
# Field Testing of Residential Bidirectional Electric Vehicle Charger for Power System Applications ###### Abstract Bidirectional electric vehicle (EV) charging is a technology that is gaining rapid popularity due to its ability to provide economic and environmental benefits to both EV owners and power system operators (PSOs). Using the EV as a flexible source of energy, an EV owner can provide power to homes/buildings, or even participate in grid services such as demand response and frequency regulation. However, there is a lack of real-world testing and validation for bidirectional charging technology, particularly in the residential segment. As such, this paper presents real-world field testing of a bidirectional EV charger deployed in a home. Control software is developed to dispatch the EV according to static setpoints, as well as automated load following, and its accuracy and responsiveness are reported on. The results of the testing with the charger and 2019 Nissan Leaf combination indicate a responsiveness of 6-8 seconds and accuracy of over 99%, which suggests feasible participation for applications such as load following, arbitrage, and demand response. _bidirectional charging, electric vehicles, vehicle grid integration, vehicle to home, vehicle to grid, demand response_ ## I Introduction In an effort to reduce worldwide greenhouse gas emissions, there has been an acceleration of uptake with respect to electric vehicles (EVs), with over twenty countries mandating that all vehicle sales will be 100% EVs by 2030 [1]. This paradigm shift has motivated power system operators (PSOs), policy makers, and EV owners to propose new technologies to increase the usefulness of EVs. To address issues of power system congestion and associated degradation of power quality as a result of coincident EV charging, EV smart-charging programs have been successful, where PSOs can use EVs as a flexible load to reduce congestion at peak times [2]. However, studies show that even smart-charging programs may not be enough to handle the increase in peak demand caused by mass EV adoption, thus necessitating expensive grid capacity upgrades by PSOs [3]. More recently, there has been great interest in bidirectional EV charging, where the onboard battery of the EV can be used as both a flexible load and a generation source. As such, the EV can be used to provide power to homes and buildings (V2H and V2B), and also to export power back to the grid (V2G) [4]. This level of functionality lends flexibility to both EV owners and PSOs, and results in a multitude of power system applications where bidirectional EV charging can be useful. For example, V2H can be used to provide power from the EV to specific circuits within a home, or to the entire home itself, especially in the case of power outages [5]. Similarly, V2B is effective in reducing peak demand charges of commercial buildings [6]. On the other hand, V2G provides the opportunity for EV owners to provide grid services to PSOs, whether participating in demand response or frequency regulation [7]. With the advent of power system decentralization and new markets opening to "behind-the-meter" participants, bidirectional charging is opening up new avenues in research [8]. In [9], the authors utilize EVs to perform arbitrage by charging their EVs at home during the night when electricity tariffs are lower, and discharging at work during the day when the tariffs are higher. 
The work in [10] develops a scheduling algorithm to minimize the time a home spends without power by using V2H. Meanwhile, the works in [11] and [12] develop optimization models to procure power from EVs for the services of demand response and frequency regulation considering the effects of battery degradation and revenue. While the aforementioned works have contributed significantly to the state-of-the-art, it is worthwhile to note that these works have not used real-world data or experiences in their experimental results. As such, it is (a) difficult to determine the technical feasibility of bidirectional charging, especially pertaining to the speed and accuracy of power generation required by different power system applications; (b) unclear what methods are used to control the power output of the EV; and (c) challenging to ascertain what additional components are needed for bidirectional charging. These shortcomings prevent the mass uptake of bidirectional charging by the general public. Motivated by these reasons, this paper presents a field evaluation of a residential, bidirectional-enabled charger with respect to its speed of response, accuracy of power output, and subsequent suitability for power system applications, including demand response, frequency regulation, arbitrage, and load following. This paper contributes a novel attempt to disseminate real-world data and results from field tests, which, to the best of the authors' knowledge, have not been shared before. Specifically, the charger has been deployed inside a real-world home and tested against real household appliances, from which the results can inform future algorithms, policies, and programs pertaining to bidirectional EV research. The organization of the remainder of the paper is as follows. Section II provides a brief background of power system applications that bidirectional EVs can be utilized for, while Section III provides details of the interconnection and software-based control of the deployed charger. Section IV is devoted to discussing experimental results for the field testing of the charger, while Section V presents the conclusion of the paper. ### _Components of Residential Bidirectional EV Charging_ Fig. 1 illustrates the typical components used for residential Level 3 DC bidirectional EV charging, where the bidirectional charger houses a bidirectional inverter that couples with the EV battery to convert the DC energy of the battery to AC energy. It is worthwhile to mention that Level 2 AC bidirectional charging is possible if the EV itself is equipped with a bidirectional inverter [13]; however, this paper focuses on Level 3 bidirectional charging. Nevertheless, the AC energy flows from the inverter and towards the home, where metering infrastructure is located at the point of common coupling between the home and the PSO-controlled grid. Using this measurement, the bidirectional charger (or external software) is able to export power from the EV equal to the load of the home to prevent back feeding the grid, if so desired. Furthermore, an automatic transfer switch is used to isolate the home from the grid during power outages, ensuring that the home is being powered only by the EV. ### _Home Load Following_ Used in the capacity of V2H, the EV is used to provide power to the loads of the home, in both grid-connected and off-grid scenarios. 
In a grid-connected scenario, the main motivation would be to utilize the EV to reduce electricity bills, particularly when running energy-intensive loads, such as dryers, stovetops, and washing machines. In an off-grid scenario, or when there is a power outage, the EV would be used to maintain power to critical loads, such as the heating source, internet, and fridge. ### _Energy Arbitrage_ Energy arbitrage opportunities can be realized when electricity tariffs fluctuate, typically as a function of time. For retail electricity rates offered to residential customers, a common tariff structure is known as time of use (TOU), which defines blocks of time where tariffs are at their highest (on-peak), and also at their lowest (off-peak). Thus, an EV owner may charge at off-peak hours, and then discharge at on-peak hours to capture daily revenue. Operating the EV in this manner generally leads to greenhouse gas emissions savings as well, since during on-peak times, PSOs activate "peaking power plants" to satisfy increased demand, leading to higher emissions [14]. ### _Demand Response_ Demand response is typically used to reduce congestion within a power system during energy shortages. For the residential scenario, EVs can be used to generate power to reduce the load of the home, or export any excess power to the power grid to further ease grid congestion. Demand response does not require very fast response from the EV, with timeframes on the scale of minutes being acceptable [15]. ### _Frequency Regulation_ Frequency regulation is typically used by PSOs to maintain balance between the supply and demand of power, which stabilizes the grid frequency. Power can be injected or withdrawn to maintain the balance, where PSOs send regulation signals, or setpoints, to participating entities to modulate their power output accordingly, in the timeframe of 2-6 seconds [16]. Bidirectional EVs could thus be suitable candidates to provide frequency regulation services in aggregate if the responsiveness is indeed within 2-6 seconds. ## III Deployment of Residential Bidirectional Charger ### _Physical Interconnection With PSO Grid_ The single line diagram in Fig. 2 shows the physical interconnection and deployment of the bidirectional charger under test, within a home located in Ontario, Canada. The charger is rated for +/- 7.4 kW, and used with a 2019 Nissan Leaf SV that has 40 kWh battery capacity. In order to meet connection approval from the local PSO, a disconnect is deployed outside the home for the utility to turn off the charger during power outages. As seen in Fig. 2, a smart meter is deployed near the main electrical panel of the home, which uses current transformers within the panel to take net power measurements of the entire home. Also seen in Fig. 2 is an external software-based controller that is developed to take measurements from the smart meter and bidirectional charger in real-time, while also dispatching the charger with specific setpoints in units of kW. It is worth noting that the charger is not capable of islanding during a power outage, and as such, a transfer switch is not deployed within the home. ### _Software Control of Bidirectional Charger_ Both the smart meter, manufactured by Accuenergy (Acuvim-L series), and the charger allow external software to read/write values to these devices using the Modbus protocol. As such, both the meter and the charger expose specific registers, which the external software can use to toggle certain functionalities. 
For the smart meter, the registers of interest are the readings of the active power measurement and current flowing to/from the home. For the charger, the registers of interest are the ability to control the charger remotely (from external software), the ability to stop/start charging or discharging, the setting of a setpoint in kW, the reading of the power being delivered/received from the EV, and the state of charge (SOC) of the EV battery. To control the power flow of the EV, particularly in relation to the loads of the home, a distributed energy resources management system (DERMS) was used, developed by Hero Energy and Engineering, entitled Harmonize [17]. The DERMS software provides control of energy resources for power flow optimization; however, it was expanded to include libraries that facilitate the measurement and control of the bidirectional charger. Thus, the DERMS is responsible for reading relevant measurements from the smart meter and the charger at a configurable sampling rate, synchronizing the timing of the measurements, allowing both manual and automated control of the EV, and logging the data to disk. Fig. 1: Components of residential bidirectional EV charging. Fig. 2: Single line diagram of deployed charger within home. A snapshot of the charger and EV combination (Nissan LEAF 2019) is shown in Fig. 3, while a screenshot of the DERMS software is shown in Fig. 4, where functionalities to control the EV are provided in the Control Panel. For manual control, the user may select the "Set Power (kW)" button, and enter a value within the "Setpoint (kW)" textbox to send the setpoint to the charger. On the other hand, for automated control, the user may press the "Zero Export" button, which enables the EV to deliver just enough power to meet the loads of the home. As shown in Fig. 2, the DERMS receives the net power measurement of the home from the smart meter, as well as the EV power measurement from the charger, from which the setpoint is calculated and sent to the charger. The equation to realize the setpoint is as follows: \[EV_{SET}(t)=P_{NET}(t-1)-P_{EV}(t-1)-\alpha \tag{1}\] where \(EV_{SET}\) is the EV setpoint, \(P_{NET}\) is the net power of the home, \(P_{EV}\) is the measured power of the EV, \(t\) represents time, and \(\alpha\) represents a tolerance factor, in kW, used as an offset to ensure the EV does not back feed the grid. The realization of (1) can be seen in Fig. 4, where the net power of the home is 0.328 kW, which is quite close to the tolerance set of 0.3 kW, while the EV is set to discharge 5.86 kW from its battery to meet the loading of the home. ## IV Experimental Results This section presents experimental results to evaluate the speed, accuracy, and responsiveness of the power transfer of the deployed residential bidirectional charger, thereby enabling an evaluation with respect to its applicability to the power system applications discussed in Section II. ### _Step Test_ The step test involves setting setpoints on the charger in steps of 1 kW to evaluate its speed and accuracy. In this test, the sampling data acquisition rate from the meter and charger was set at 5 Hz, and setpoints were set in decrements of 1 kW every minute (or 300 timesteps). 
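For reference, a minimal sketch of the zero-export dispatch of Eq. (1) - the calculation the DERMS performs before writing each setpoint over Modbus - is shown below; the stubbed read/write helpers, the clamp to the nameplate rating, and the 0.2 s cycle time are illustrative assumptions rather than the deployed implementation or the actual register map.

```python
import time

# Illustrative stand-ins for the Modbus reads/writes described above; the real
# register addresses and client library are specific to the Acuvim-L meter and
# the charger and are deliberately omitted here.
def read_net_home_power_kw() -> float:
    return 0.0   # placeholder: net power at the point of common coupling

def read_ev_power_kw() -> float:
    return 0.0   # placeholder: power currently delivered/received by the EV

def write_ev_setpoint_kw(setpoint_kw: float) -> None:
    pass         # placeholder: write the kW setpoint register on the charger

def zero_export_setpoint(p_net_kw: float, p_ev_kw: float,
                         tolerance_kw: float = 0.1, limit_kw: float = 7.4) -> float:
    """Eq. (1): EV_SET(t) = P_NET(t-1) - P_EV(t-1) - alpha, clamped to the
    charger's +/- 7.4 kW nameplate rating."""
    setpoint = p_net_kw - p_ev_kw - tolerance_kw
    return max(-limit_kw, min(limit_kw, setpoint))

def run_zero_export(period_s: float = 0.2, cycles: int = 10) -> None:
    """Re-dispatch the EV each cycle so its output tracks the house load
    (0.2 s here mirrors the 5 Hz sampling rate used in the tests)."""
    for _ in range(cycles):
        sp = zero_export_setpoint(read_net_home_power_kw(), read_ev_power_kw())
        write_ev_setpoint_kw(sp)
        time.sleep(period_s)
```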
The response speed is calculated as the duration of time between the time the setpoint command was sent and the first occurrence of the EV power measurement reaching the setpoint, while the accuracy is calculated as the percentage difference between the setpoint and the average EV power once the setpoint arrives at steady state. Fig. 3: Snapshot of EV and deployed bidirectional charger. Fig. 4: Software controller dispatches EV to meet house loads in real-time. Fig. 5: Step test for bidirectional charging. Three trials of this test were performed, and the average set of results can be seen in Table I and Fig. 5. Three main observations can be made from Table I and Fig. 5. First, the starting setpoint takes a significant amount of time to reach when compared to other setpoints, with the starting setpoint being reached at 109 s versus an average of 6 s for the rest of the setpoints. This is presumably due to the initial negotiations between the charger and EV which occur on the first connection and subsequent charging requests [18]. Second, the charger has a maximum power transfer of +/- 6.3 kW when attempting to set setpoints at +/- 7 kW despite the nameplate claiming +/- 7.4 kW. This is because the charger is limited to a DC current input of 17 A, while the DC voltage of the EV battery measures at roughly 375 V, resulting in a maximum power flow of 6.3 kW. Third, the accuracy of the charger is quite high when removing the samples of the +/- 7 kW setpoint, with an average error of less than 0.15%. The step results indicate that the charger may work well for power system applications with less stringent timing requirements, such as demand response, load following, and arbitrage. ### _Sweep Test_ The sweep test is executed in a similar fashion to the step test, with setpoints being issued from +6 kW to -6 kW. As seen in the results shown in Table II and Fig. 6, the setpoint accuracy provided by the charger is extremely high with an average error of less than 0.1%; however, it takes an average of 8.74 s to move through the entire allowable range. Recalling that the average frequency regulation signal is between 2-6 s [16], improvements in the response time are required before this charger can provide the frequency regulation service. ### _Energy Arbitrage_ The arbitrage test involves charging the EV at off-peak hours, and discharging the EV at on-peak hours. In Ontario, the 2022 retail electricity tariffs are $0.082/kWh for off-peak, and $0.17/kWh for on-peak, where the summer off-peak hours are between 19:00 and 7:00, while the on-peak hours are between 11:00 and 17:00. The results of the arbitrage test can be seen in Fig. 7, where the EV shows steady charging and discharging during off-peak and on-peak periods, respectively, potentially earning a total of $3.61 for the day. Note that Ontario regulations do not currently allow incentives for EVs to export energy to the grid. Nevertheless, the steadiness of the power discharge during the on-peak hours suggests that the charger would be able to participate in applications involving demand response, especially since the responsiveness required is more on the scale of minutes rather than seconds. Fig. 6: Sweep test for bidirectional charging. Fig. 7: Arbitrage test based on electricity tariff time of use. ### _Load Following with Zero Export_ In this test, the EV is dispatched to follow the loads of the home. To this end, several common household appliances, such as the dishwasher, dryer, and washing machine, are turned on during a test duration of approximately five hours to test the responsiveness of the EV and charger in load following. 
To minimize back feeding the grid, a tolerance of 0.1 kW is set as per (1), while all other loads within the home are turned off, leaving a base load of approximately 0.13 kW throughout the test. The results from the test can be seen in Table III, as well as Figs. 8 and 9, where the EV power mirrors the overall house load as per (1). In addition, as seen in Fig. 8, the net power of the home hovers close to the 0.1 kW mark throughout the majority of the test, except for periods when the dryer is on. This is due to the fact that the dryer's power consumption changes very quickly, within 1-2 seconds, while the average response time of the EV is around 6 seconds (as found in the step test). Thus, Fig. 8 shows that the plot of the EV lags behind the plot of the net house power, thereby leading to periods of time where the EV cannot account for all house loads, resulting in the net house power being greater than 0.1 kW. Nevertheless, accounting for the tolerance of 0.1 kW, the EV is able to supply 4.37 kWh of the 6.41 kWh of shiftable house load, or 68%. A plot of the SOC of the EV is also shown in Fig. 9, where the SOC drops from 85% to 66% during the test, with most of the decline occurring when the dryer is on. ## V Conclusion This paper presents the real-world field testing of a residential bidirectional EV charger with a view towards evaluating its suitability for applications of load following, arbitrage, demand response, and frequency regulation. The charger was deployed to a home in Ontario, Canada, and outfitted with a smart meter to provide measurements of the net energy flow. DERMS software was enhanced to monitor and control an EV in real-time in response to house loads. Field tests with the charger and 2019 Nissan LEAF found that the charger/EV combination responds to setpoints within 6-8 seconds, thereby being appropriate for applications of arbitrage, demand response, and home backup, while requiring improvement before being provisioned for frequency response.
2310.09436
Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks
Continual learning (CL) has two main objectives: preventing catastrophic forgetting (CF) and encouraging knowledge transfer (KT). The existing literature mainly focused on overcoming CF. Some work has also been done on KT when the tasks are similar. To our knowledge, only one method has been proposed to learn a sequence of mixed tasks. However, these techniques still suffer from CF and/or limited KT. This paper proposes a new CL method to achieve both. It overcomes CF by isolating the knowledge of each task via discovering a subnetwork for it. A soft-masking mechanism is also proposed to preserve the previous knowledge and to enable the new task to leverage the past knowledge to achieve KT. Experiments using classification, generation, information extraction, and their mixture (i.e., heterogeneous tasks) show that the proposed method consistently outperforms strong baselines.
Zixuan Ke, Bing Liu, Wenhan Xiong, Asli Celikyilmaz, Haoran Li
2023-10-13T23:00:39Z
http://arxiv.org/abs/2310.09436v1
# Sub-network Discovery and Soft-masking for Continual Learning ###### Abstract Continual learning (CL) has two main objectives: preventing _catastrophic forgetting_ (CF) and encouraging _knowledge transfer_ (KT). The existing literature mainly focused on overcoming CF. Some work has also been done on KT when the tasks are similar. To our knowledge, only one method has been proposed to learn a sequence of mixed tasks. However, these techniques still suffer from CF and/or limited KT. This paper proposes a new CL method to achieve both. It overcomes CF by isolating the knowledge of each task via discovering a sub-network for it. A soft-masking mechanism is also proposed to preserve the previous knowledge and to enable the new task to leverage the past knowledge to achieve KT. Experiments using classification, generation, information extraction, and their mixture (i.e., heterogeneous tasks) show that the proposed method consistently outperforms strong baselines.1 Footnote 1: The code of TSS can be found at [https://github.com/ZixuanKe/PyContinual](https://github.com/ZixuanKe/PyContinual). ## 1 Introduction One of the overarching goals of AI is to develop agents that can continually learn diverse tasks. Toward this goal, continual learning (CL) has been proposed, which incrementally learns a sequence of tasks, 1, ..., \(T\) [15]. Once a task \(t\) is learned, its training data \(D_{t}\) (at least a majority of it) is no longer accessible. This paper studies CL of NLP tasks [11], where a _task_ is an _end-task_, e.g., text classification, summarization or information extraction, in the popular setting of _task-incremental learning_ (TIL), in which the task-id is given in both training and testing.2 Footnote 2: Another popular CL setting is _class-incremental learning_ (CIL), which provides no task-id in testing and solves a different type of problem [11]. Ideally, CL should (1) overcome _catastrophic forgetting_ (**CF**), i.e., it should not degrade the performance of previously learned tasks; (2) encourage _knowledge transfer_ (**KT**) across tasks, i.e., the learning of a task should be able to leverage the knowledge learned from previous tasks3; and (3) be able to learn a mixed sequence of similar and dissimilar tasks and achieve both (1) and (2). Footnote 3: This is called _forward transfer_ in the literature. There is also the _backward transfer_, where the performance of previous tasks may be improved after learning a similar new task. Most existing approaches for TIL address one of the three to the detriment of the others. For example, HAT [1] and SupSup [16] can overcome CF by isolating the parameters of each task, which makes KT difficult. CUBER [13] and DAS [11] can achieve KT by allowing some updates to existing knowledge, which causes CF. ProgressiveNet [12] assures no CF by fixing all the learned parameters and expanding the network, but it limits KT. To our knowledge, CAT [11] is the only system that tries to have all three capabilities. However, it still suffers from CF due to its weak task similarity detection (for KT) and parameter sharing across dissimilar tasks. This paper takes a major step to achieve all three objectives without suffering from the above drawbacks. The proposed system is called **TSS** (TIL based on **S**ub-network discovery and **S**oft-masking). 
TSS performs two functions, **(A)** sub-network discovery with soft-masking and **(B)** importance computation, to learn a mixed sequence, where some tasks may be _similar_ (which enable KT) and some may be _dissimilar_. Given a fixed and randomly initialized backbone network \(N\), function **(A)** finds a separate sub-network as the model _for each task_. A sub-network is indicated by a set of _binary gates_, one for each parameter of \(N\), specifying whether the corresponding parameter in \(N\) should be in the sub-network for the task. In training, \(N\) is fixed and the binary gates are obtained by learning a set of positive and real-valued _popup scores_ (one for each parameter of \(N\)) and then applying a threshold on the popup scores.4 In other words, _we convert the network training into the popup scores training_.5 Only the binary gates (1's) obtained from the trained popup scores are stored for each task as its model. Thus, there is no interference across tasks to cause CF as the backbone network is fixed. Footnote 4: The popup score means that the score is used to select a parameter or popup the edge from the backbone network (Ramanujan et al., 2020). Footnote 5: Therefore, training task \(t\) is the same as training the popup scores for task \(t\). We thus use the two terms interchangeably. This sub-network discovery is effective for overcoming CF, but it prevents KT if the popup scores of each task are independently trained with no sharing across tasks. A naive way to share knowledge is to initialize the popup scores of the new task \(t\) with the trained popup scores of the previous task \(t-1\). However, this is problematic for a mixed sequence of similar and dissimilar tasks because the training of task \(t\) can only leverage the knowledge from the \((t-1)\)th task. This can cause negative transfer if task \(t-1\) is dissimilar to task \(t\). To address this, we propose a **soft-masking** mechanism to make the initialization of popup scores for the new task contain the learned knowledge from _all_ previous tasks so that the new task training can leverage the knowledge of all previous similar tasks. This is a **key novelty** of our approach. It is achieved by reducing (called "soft-masking") the gradient corresponding to the subset of popup scores that are **important** to previous tasks (\(1...t-1\)) in training the current task \(t\) so that the previously learned knowledge is well protected. In this way, the trained popup scores for task \(t\), which will be used as the initialization for the next task, contain not only the knowledge for task \(t\) but also the knowledge from all previous tasks. This is **critical** for learning a mixed sequence of similar and dissimilar tasks because the system does not know _a priori_ which tasks' knowledge can be transferred to the current task. Note that soft-masking is only applied in backward propagation to preserve old knowledge. In the forward pass, the training can still use all popup scores corresponding to all parameters of the backbone network. Additionally, soft-masking reduces the gradient instead of fully blocking the gradient, which gives the model the flexibility to _adjust_ any popup scores when training the current task \(t\), which further encourages KT. The question is how to decide the important popup scores for each task. This is done by **(B)**, which computes the importance of the popup scores _after_ training a task. The importance is a number between 0 and 1. 
After training popup scores for a task, we input the training data _again_ to compute the importance of each popup score for the task based on its gradient. We save the _accumulated_ importance over all previous tasks to keep track of the important popup scores so far. The accumulated importance (within 0 and 1; thus "soft") is used to guide soft-masking in **(A)** in training a new task. This paper makes three key contributions. 1. It studies an important but under-studied problem of _continually learning_ a mixed sequence, where some tasks may be similar and some may be dissimilar. Existing CL methods suffer from either CF or limited KT in this scenario. 2. It proposes a novel method TSS consisting of sub-network discovery with soft-masking and importance computation to prevent CF and to encourage KT for a mixed sequence of tasks. 3. It evaluates TSS using datasets consisting of classification, generation and information extraction tasks and their mixture. The results demonstrate the effectiveness of TSS. ## 2 Related Work **Forgetting prevention in continual learning (CL).** There are four main families of approaches: (1) _Regularization-based approaches_(Kirkpatrick et al., 2016; Lee et al., 2017; Seff et al., 2017; Zenke et al., 2017; Rusu et al., 2016) add a regularization in the loss to penalize changes to parameters that are important to previous tasks. _Gradient projection_(Zeng et al., 2019) ensures the gradient updates occur in the orthogonal direction to the input of old tasks and in trust region (Lin et al., 2022, 2020) TSS uses no regularization. (2) _Replay-based approaches_(Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019; Wang et al., 2020) retain some training data of old tasks and use them in learning a new task. The methods in (Shin et al., 2017; Kamra et al., 2017; Rostami et al., 2019; He and Jaeger, 2018) learn data generators and generate old task data for learning a new task. These are clearly different from TSS as it uses no any replay data. (3) _Parameter isolation_(Serra et al., 2018; Ke et al., 2020, 2021; Mallya and Lazebnik, 2018; Fernando et al., 2017; Wortsman et al., 2020) learns and masks a dedicated sub-network for each task, but has limited KT. TSS leverages this approach to isolate knowledge for different tasks, but enables KT by using a soft-mask mechanism to preserve the learned knowledge and uses the old knowledge as the initialization for learning the new task. (4) _Parameter isolation plus out-of-distribution (OOD) detection_ is a new and theoretical grounded approach (Kim et al., 2022, 2023). It is mainly used for class-incremental learning (CIL). The main idea is to use a parameter isolation approach for task incremental learning (TIL) to overcome CF and the model for each task is an OOD detection model. During inference, the system performs both task-id prediction and within-task prediction for CIL classification. However, our work is about TIL. **Knowledge transfer in CL.** There are two main approaches for KT: (1) _replay-based_(Huang et al., 2021; Wang et al., 2021, 2020; de Masson d'Autume et al., 2019; Zhu et al., 2022; Yin et al., 2022), which use replay data to help the transfer. TSS is replay-free. (2) _similarity-based_, which uses task similarities using features (Ke et al., 2021, 2020; Wang et al., 2022; Zhang et al., 2022) or gradients (Lin et al., 2022). However, these methods are not always reliable, which results in CF. 
CAT (Ke et al., 2020) considers a mixed sequence of tasks by first detecting previous tasks that are similar to the current task and then opening the masks of these tasks so that the new task learning can modify their parameters to achieve KT. However, this causes CF for dissimilar tasks that share parameters with those similar tasks. DAS (Ke et al., 2023) encourages KT by allowing previous knowledge updates based on importance, but this also causes CF because previous parameters is changeable and there is no easy way to separate the parameters that are used for different tasks. In contrast, TSS does not require any explicit similarity detection but finds sub-network for each task to guarantee no CF. Its soft-masks allow the initialization contains all previously learned knowledge and thus encourage KT to the new task. Konishi et al. (2023) recently proposed a parameter-level soft-masking method using AlexNet as the backbone. The method has difficulties working with more complex architectures. Our soft-masks are set on the popup scores rather than network parameters and our approach is based on sub-network discovery and knowledge sharing. **Network pruning as importance computation.** It is known that many parameters in a neural network are redundant and can be pruned (Li et al., 2021; Lai et al., 2021; Chen et al., 2020; Lin et al., 2020; Gao et al., 2021; Voita et al., 2019). For Transformer, one can prune the attention head (Michel et al., 2019; Voita et al., 2019; McCarley et al., 2019) and sub-layers (Fan et al., 2020; Sajjad et al., 2020). However, these methods are not directly applicable to us as we need to compute the importance of each popup score instead of each parameter in the network. We use the importance as soft-masks to leverage all existing sub-networks for KT rather than to compress the LM. ## 3 Proposed TSS Technique TSS is designed for both CF prevention and forward KT. Figure 1 gives an overview of TSS. The backbone network consists of the transformer and adapters, which are indicated by the transformer parameters \(\mathbf{w}_{l}^{\text{transformer}}\) of layer \(l\) and by the parameters of adapter \(\mathbf{w}_{l}^{\text{adapter}}\) of layer \(l\) respectively (see the grey boxes). Note again that these are fixed in the entire learning process. On the left, we show the forward pass and backward propagation of **(A) sub-network discovery with soft-masking** (Sec. 3.1). In the forward pass, a set of learnable popup scores \(\mathbf{s}_{l}^{(t)}\) are initialized (Sec. 3.1.2) and fed into a step function. The output of the step function is a set of binary gates \(\mathbf{g}_{l}^{(t)}\) that element-wise multiply (\(\otimes\)) with the parameters of the adapter of the backbone network (\(\mathbf{w}_{l}^{\text{adapter}}\)). As a result, the binary gates indicate a sub-network for task \(t\) within the \(\mathbf{w}_{l}^{\text{adapter}}\) (Sec. 3.1.1). In the backward propagation (Sec. 3.1.2), we first accumulate the importance of popup scores for all previous tasks by normalization and element-wise maximum ("EMax"). The accumulated importance \(\mathbf{I}_{l}^{(\leq t-1)}\) is converted into soft-mask by the operation \(1-\mathbf{I}_{l}^{(\leq t-1)}\). This soft-mask is then element-wise multiplied with the original gradient of the popup scores \(\mathbf{s}_{l}^{(t)}\) in layer \(l\), \(\nabla_{l}\) (computed by straight-through estimator in Eq. 3). 
The soft-masked gradient, \(\hat{\nabla}_{l}\), is the final gradient that is used to optimize the popup scores \(\mathbf{s}_{l}^{(t)}\). Since the important popup scores have lower gradients in \(\hat{\nabla}_{l}\), the popup scores that are important to previous tasks are preserved and are used to initialize the new task to encourage KT (not shown in Figure 1). After **(A)**, we show **(B) importance computation** (Sec. 3.2). The forward pass is the same as **(A)**. However, in backward propagation, the gradient of popup scores, \(\nabla_{l}\), is not used to update anything (red cross in the arrow) but computes the importance using its absolute average value, \(\frac{1}{N}\sum_{n=1}^{N}\lvert\nabla_{l}\rvert\). The resulting importance of task \(t\) in layer \(l\), \(\mathbf{I}_{l}^{(t)}\), is saved for **(A)** in learning the next task. The following sub-sections present each function and step in detail. ### Sub-network Discovery and Soft-masking Sub-network discovery finds a sub-network (represented as binary gates) for the current task, which is saved to overcome CF. It relies on the training of popup scores and its training also involves soft-masking to encourage knowledge transfer. In this section, we will present how we discover the sub-network via learning popup scores and how we encourage knowledge transfer (KT) via soft-masking. #### 3.1.1 From Popup Scores to Sub-network Since the pre-trained language model (LM) contains useful general knowledge, we fix and use the full LM. The sub-network discovery is only done on adapters (Houlsby et al., 2019), which are inserted into each layer of the LM.6 Note again that throughout the training process, both the adapters (randomly initialized) and backbone LM are _fixed_. Footnote 6: See Appendix A for additional details about adapters. We now define a set of learnable **popup scores**\(\mathbf{s}_{l}^{(t)}\) for layer \(l\) for task \(t\). It has the same size as the parameters of the adapters \(w_{l}^{\text{adapter}}\) (which is fixed throughout learning). In the forward pass, \(\mathbf{s}_{l}^{(t)}\) is constantly _thresholded_, \[\mathbf{g}_{l}^{(t)}=\mathbb{1}_{\epsilon}(\mathbf{s}_{l}^{(t)}), \tag{1}\] where \(\mathbb{1}\) is a step function that outputs 1 for the scores that are larger than the threshold \(\epsilon\) (a hyperparameter). Since the gate \(\mathbf{g}_{l}^{(t)}\) is always binary, it can naturally indicate a **sub-network** within the parameters of the fixed adapter, \(\mathbf{w}_{l}^{\text{adapter}}\), by element-wise multiplication, \[\hat{\mathbf{w}}_{l}^{\text{adapter}}=\mathbf{w}_{l}^{\text{adapter}}\otimes\mathbf{g}_ {l}^{(t)}, \tag{2}\] where \(\otimes\) is element-wise multiplication. \(\hat{\mathbf{w}}_{l}^{adapter}\) indicates the _selected_ parameters in the adapter that form the discovered sub-network. **Training the popup scores.**\(\mathbf{s}_{l}^{(t)}\) cannot be directly optimized because the derivative of its threshold-based step functions is zero. Then nothing can be learned. To solve the problem we use straight-through estimators (Bengio et al., 2013). Specifically, we ignore the derivative of the threshold function and pass on the incoming gradient as if the threshold function was an identity function. Figure 1: Illustration of TSS in training task \(t\). The detailed description is in Sec. 3. The grey boxes indicate that the parameters of the adapters and backbone language model (LM) are all frozen. The only trainable parameters are the popup scores \(\mathbf{s}_{l}^{(t)}\). 
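
To make the mechanism concrete, the following is a minimal PyTorch-style sketch of the gating in Eqs. (1)-(2) together with the straight-through estimator just described. It is an illustration only, not the released implementation: the class names (`StepSTE`, `GatedAdapterLayer`), the fixed random adapter initialization, and the threshold handling are our own assumptions.

```python
import torch
import torch.nn as nn


class StepSTE(torch.autograd.Function):
    """Binary step with a straight-through gradient (Eq. 1, trained as in Eq. 3)."""

    @staticmethod
    def forward(ctx, scores, threshold):
        # g = 1 where the popup score exceeds the threshold, else 0
        return (scores > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through as if the step function were the identity
        return grad_output, None


class GatedAdapterLayer(nn.Module):
    """A fixed adapter whose parameters are selected by trainable popup scores (Eq. 2)."""

    def __init__(self, d_model, bottleneck, threshold=0.5):
        super().__init__()
        self.threshold = threshold
        # Fixed (randomly initialized) adapter weights: never updated
        self.w_down = nn.Parameter(0.02 * torch.randn(d_model, bottleneck), requires_grad=False)
        self.w_up = nn.Parameter(0.02 * torch.randn(bottleneck, d_model), requires_grad=False)
        # Trainable positive popup scores, one per adapter parameter (init is an assumption)
        self.s_down = nn.Parameter(torch.rand(d_model, bottleneck))
        self.s_up = nn.Parameter(torch.rand(bottleneck, d_model))

    def forward(self, h):
        g_down = StepSTE.apply(self.s_down, self.threshold)   # binary gates (Eq. 1)
        g_up = StepSTE.apply(self.s_up, self.threshold)
        w_down = self.w_down * g_down                          # selected sub-network (Eq. 2)
        w_up = self.w_up * g_up
        return h + torch.relu(h @ w_down) @ w_up               # residual adapter
```

After training a task, only the binary gates produced by `StepSTE` would need to be stored; the adapter and LM parameters themselves are never modified, which is what rules out forgetting.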
In **(A) sub-network discovery with soft-masking** (Sec. 3.1.1), we remove the step function in the backward propagation to reflect the straight-through estimator in Eq. 3, where the gradient “skips” the step function as if it were an identity function. The original gradient for popup scores, \(\nabla_{l}\), is not directly used to update the popup scores but soft-masked based on the accumulated importance (Sec. 3.1.2). In **(B) Importance Computation** (Sec. 3.2), the red cross in backward propagation indicates that the gradient is not used to update the popup scores \(\mathbf{s}_{l}^{(t)}\) but only to compute the importance. Consequently, for each parameter \(s_{i,j}^{(t)}\) in \(\mathbf{s}_{l}^{(t)}\), it is updated as follows, \[\begin{split}&\mathbf{s}_{l}^{(t)}=\mathbf{s}_{l}^{(t)}-\alpha\nabla_{l};\\ &\nabla_{l}=\frac{\partial\mathcal{L}}{\partial s_{i,j}^{(t)}}= \frac{\partial\mathcal{L}}{\partial\mathcal{I}_{j}}\frac{\partial\mathcal{I} _{j}}{\partial s_{i,j}^{(t)}}=\frac{\partial\mathcal{L}}{\partial\mathcal{I} _{j}}w_{i,j}\mathcal{O}_{i},\end{split} \tag{3}\] where \(\mathcal{I}_{j}\) and \(\mathcal{O}_{i}\) are the input and output of neurons \(i\) and \(j\). \(\nabla_{l}\) is the gradient on the popup scores \(\mathbf{s}_{l}^{(t)}\) and \(\alpha\) is the learning rate. After training, the binary gate, \(\mathbf{g}_{l}^{(t)}\), is saved (taking only 1-bit per element). Since \(\mathbf{g}_{l}^{(t)}\) is saved, and all the backbone parameters are fixed (including the adapter's and backbone LM's), TSS does not suffer from forgetting of the previously learned knowledge. #### 3.1.2 Preserving Previous Knowledge for New Task Initialization While the sub-network discovery in Sec. 3.1.1 can help prevent forgetting (CF), it does not enable knowledge transfer (KT). As discussed earlier, the naive way for KT is to initialize the popup scores \(\mathbf{s}_{l}^{(t)}\) for the new task \(t\) with the learned scores \(\mathbf{s}_{l}^{(t-1)}\) for task \(t-1\). However, \(\mathbf{s}_{l}^{(t-1)}\) contains only the knowledge from task \(t-1\), and we have no idea whether any knowledge from task \(t-1\) can be transferred to task \(t\) because tasks \(t\) and \(t-1\) may be entirely different. To address this, we preserve the learned knowledge from all previous tasks from 1 to \(t-1\) in \(\mathbf{s}_{l}^{(t-1)}\). In this way, task \(t\) can leverage all previously learned knowledge for KT. To make the preservation possible, inspired by [10], we propose a soft-masking mechanism based on the importance of each popup score to all previous tasks. We compute the importance in Sec. 3.2. For now, we assume that we already have the set of importance values \(\{\mathbf{I}_{l}^{(k)}\}_{k=1}^{t-1}\) for \(\mathbf{s}_{l}^{(k)}\) for each previously learned task \(k\). The preservation is achieved by soft-masking the learning based on the accumulated importance as follows:7 Footnote 7: Before accumulation, we normalize the importance values in each layer \(l\) for a task \(k\) so that the importance values for all popup scores in the layer have a mean of 0 and a standard deviation of 1. To further facilitate soft-masking, the normalized importance values are passed through a \(\tanh\) activation and the absolute operation so that the values lie in the interval [0,1]. To simplify the notation, we still use \(\mathbf{I}_{l}^{(k)}\) to represent the resulting importance.
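
The bookkeeping around these popup scores, i.e., the per-layer normalization of footnote 7, the element-wise-max accumulation, and the gradient soft-masking formalized in Eqs. (4)-(6) below, can be sketched as follows. The helper names and the dictionary-of-tensors layout are illustrative assumptions, not the paper's actual code.

```python
import torch


def normalize_importance(raw_imp):
    """Footnote 7: standardize per layer, then tanh and absolute value -> values in [0, 1]."""
    z = (raw_imp - raw_imp.mean()) / (raw_imp.std() + 1e-8)
    return torch.tanh(z).abs()


def accumulate_importance(imp_prev_accum, imp_new):
    """Eq. 4: element-wise max of the new task's importance and the running accumulation."""
    if imp_prev_accum is None:          # I^(<=0) is all zeros before the first task
        return imp_new
    return torch.maximum(imp_prev_accum, imp_new)


def soft_mask_gradients(popup_scores, imp_accum):
    """Eq. 5: scale each popup-score gradient by (1 - accumulated importance)."""
    for name, s in popup_scores.items():
        if s.grad is not None:
            s.grad.mul_(1.0 - imp_accum[name])


def compute_importance(popup_scores, loss_fn, data_loader):
    """Eq. 6 (sketch): average absolute gradient of the loss w.r.t. each popup score."""
    raw = {name: torch.zeros_like(s) for name, s in popup_scores.items()}
    n_batches = 0
    for x, y in data_loader:
        # loss_fn is assumed to run the gated model so the popup scores are in the graph
        loss = loss_fn(x, y)
        grads = torch.autograd.grad(loss, list(popup_scores.values()))
        for (name, _), g in zip(popup_scores.items(), grads):
            raw[name] += g.abs()
        n_batches += 1
    return {name: normalize_importance(v / n_batches) for name, v in raw.items()}
```

In training task \(t\), `soft_mask_gradients` would be called between the backward pass and the optimizer step, and `compute_importance` once after the task has finished training.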
**Accumulating importance.** We accumulate the importance after task \(t-1\) is learned via element-wise max (EMax), \[\mathbf{I}_{l}^{(\leq t-1)}=\text{EMax}(\{\mathbf{I}_{l}^{(t-1)},\mathbf{I}_{l}^{(\leq t- 2)}\}), \tag{4}\] where \(\mathbf{I}_{l}^{(\leq t-2)}\) refers to the previously accumulated importance at task \(t-2\). We do not need to save all \(\{\mathbf{I}_{l}^{(k)}\}_{k=1}^{t-1}\) for Eq. 4, but only the incrementally accumulated importance after training each task. **Soft-masking the popup scores.** Given the accumulated importance \(\mathbf{I}_{l}^{(\leq t-1)}\) of layer \(l\) and the cross-entropy loss \(\mathcal{L}_{\text{CE}}\), we reduce (or soft-mask) \(\mathbf{s}_{l}^{(t)}\)'s gradient (\(\nabla_{l}\) in Eq. 3) flow as follows, \[\hat{\nabla}_{l}=(1-\mathbf{I}_{l}^{(\leq t-1)})\otimes\nabla_{l}. \tag{5}\] The importance \(\mathbf{I}_{l}^{(\leq t-1)}\) has the same size as \(\nabla_{l}\) and the associated popup scores. This is _soft-masking_ as each element in \(\mathbf{I}_{l}^{(\leq t-1)}\) is a real number in \([0,1]\) (not binary {0, 1}), which gives the model the flexibility to adjust any popup scores. We note that the above soft-masks are only applied in the backward pass, but not in the forward pass, which encourages knowledge transfer because each task training can leverage all popup scores corresponding to all parameters of the backbone network and the popup scores contain the knowledge from all previously learned tasks. **Popup scores initialization.** Thanks to the soft-masking mechanism, the trained \(\mathbf{s}_{l}^{(t-1)}\) contains knowledge from all previous tasks. We use it to initialize the popup scores for the new task \(\mathbf{s}_{l}^{(t)}\) to encourage knowledge transfer. Note that we apply Kaiming Initialization for \(\mathbf{s}_{l}^{(0)}\). ### Computing the Importance of Popup Scores to the Current Task To apply soft-masking (Sec. 3.1.2), we need to determine the importance of popup scores for the previous tasks. This is done after training a task. We adapt the gradient-based importance detection method in [10] for our purpose. Given a dataset \(D=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\) of \(N\) samples (\(y_{n}\) is the class label of \(\mathbf{x}_{n}\)) and the trained popup scores \(\mathbf{s}_{l}^{(t)}\), we input the data _again_ and the importance of popup scores in layer \(l\) is estimated with a gradient-based proxy score \[\mathbf{I}_{l}^{(t)}=\frac{1}{N}\sum_{n=1}^{N}\left|\frac{\partial\mathcal{L}_{\text{CE}}(\mathbf{x}_{n},y_{n})}{\partial\mathbf{s}_{l}^{(t)}}\right|. \tag{6}\] Note that the average absolute gradient computed in Eq. 6 over all the data is only used to compute the importance and will not be used to optimize the popup scores. The resulting \(\mathbf{I}_{l}^{(t)}\) is of the same size as \(\mathbf{s}_{l}^{(t)}\), each entry corresponding to the importance of a popup score. It is used in the next task by accumulating with the previously accumulated importance (Eq. 4) and soft-masking the learning (Eq. 5). Note that \(\mathbf{I}_{l}^{(0)}\) is all 0 as we do not know which popup scores are important before the training of the first task. ## 4 Experiments We now evaluate the proposed system TSS. We first learn all tasks sequentially. After that, their task models are tested using their respective test sets. TSS does not use any replay data. ### Datasets and Baselines **Datasets:** We use five datasets covering a wide range of NLP problems, including classification, generation, and information extraction.
For detailed datasets statistics, please see Appendix B. **(1) ASC** (_Aspect Sentiment Classification_) is from Ke et al. (2021) and has 19 tasks. Each task classifies the opinion (_positive_, _negative_, or _neutral_) in a review sentence at the aspect-level of a product. **(2) CCD** (_Continual Classification Dataset_) is a popular continual text classification dataset de Masson d'Autume et al. (2019).8**(3) SUM** (_ConvoSum_) is a conversational abstractive summarization dataset with 6 tasks/domains Fabbri et al. (2021). Given conversations from a domain, the system generates its summary. **(4) DRG** (_Dialogue Response Generation_) is a popular task-oriented dialogue response dataset Multi-WoZ2.0) Ramadan et al. (2018) with 5 tasks/domains. Given the intent and dialogue state (slot-value pairs containing messages to express), the system generates a response. **(5) NER** (_Named Entity Recognition_)9 classifies mentions into pre-defined entity types in each task. Footnote 8: It contains 5 tasks: AGNews (news classification), Yelp (sentiment analysis), Amazon (sentiment analysis), DBpedia (Wikipedia article classification) and Yahoo (questions and answers categorization). Since each of these datasets is quite large, we randomly sampled 500 samples from each class for each task due to our resource limitations. Footnote 9: This data consists of 5 tasks, including **conll03**Sang and Meulder (2003), **wikigold**Balasuriya et al. (2009), **btc**Derczynski et al. (2016), **re3d**Laboratory (2017), and **gum**Zeldes (2017). Due to resource limitations, we randomly sampled 200 samples for each task. Since TSS aims at (1) preventing forgetting, (2) encouraging knowledge transfer and (3) learning a mixed sequence of similar and dissimilar tasks, we consider two types of sequences. **Homogeneous tasks sequences.** In each such task sequence, all tasks are from the same dataset. Among the aforementioned 5 datasets, two of them (ASC and NER) are datasets consisting of **similar tasks** as the tasks share similar task labels (with some exceptions, see the statistic in Appendix B) but from different domains. Our goal is to achieve both CF prevention and KT. Three of them (SUM, CCD, and DRG) are **dissimilar tasks** as the distribution shift across tasks is relatively large. They have little shared knowledge to transfer and the main goal is to ensure there is little or no CF. **Heterogeneous tasks sequence.** This sequence is constructed by mixing all the above similar and dissimilar tasks of different types from all the 5 datasets in random order. It has a total of **40** tasks. This is a challenging and more realistic setting where the system needs to prevent CF and encourage KT dynamically. **Baselines.** We use **14** baselines with both _non-continual_ and _continual learning_ (CL) methods. _Non-CL baselines_: **MTL** and **MTL (Adapter)10 train tasks in a multi-task or data combined setting, where the former trains the whole LM and the latter trains only the adapter. These two are widely accepted as the _upper bounds_ of CL. **ONE** builds a separate model for each task by fine-tuning the LM, which has no KT or CF. **ONE (Adapter)**Madotto et al. (2020) trains an adapter for each task separately (called AdapterCL in its original paper). **ONE (Prompt)**Zhu et al. (2022) trains a prompt for each task (called C-PT in its original paper). Footnote 10: For classification datasets (ASC, CCD and NER), we conduct a multi-task learning (MTL) experiment. 
For generation datasets (SUM and DRG), MTL is not possible as the language modeling head on top of BART is a linear layer with weights tied to the input embeddings. We follow the standard practice (e.g., Qin and Joty (2022); Madotto et al. (2020)) and pool all data together to train a single shared head (we still call this MTL for simplicity). _CL baselines_. The CL baselines include a _naive continual learning_ (**NCL**) method, where the system learns the tasks one by one with no mechanism to deal with CF or to encourage KT, and **9** state-of-the-art TIL methods: **5** adapter-based methods: **CTR**Ke et al. (2021), **HAT**Serra et al. (2018), **SupSup**Wortsman et al. (2020), **CAT**Ke et al. (2020) and **CUBER**Lin et al. (2022). **1** prompt-based method: **L2P**Wang et al. (2022). **3** baselines that modify the Transformer: **LAMOL**Sun et al. (2020), **EWC**Kirkpatrick et al. (2016) and **DAS**Ke et al. (2023). Readers can refer to Appendix D for the details of these baselines. **LM and hyperparameters.** Since we need a backbone LM that can handle classification, generation, and information extraction, we adopt BART-Large (Lewis et al., 2020) as our LM. Fine-tuning of BART follows the standard practice.11 Detailed hyperparameters are given in Appendix C. Footnote 11: For ASC, we adopt the ASC formulation in (Xu et al., 2019), where the aspect term and sentence are concatenated via </s>. Opinion is predicted as the average over all tokens. ### Evaluation Results and Analysis Since **the order of the tasks** in a sequence may impact the final result, we ran 5 randomly sampled task sequences (results of individual sequences are given in Appendix I). For different types of tasks, we use their standard evaluation metrics.12 Table 1 gives the average result of each system over 5 random task sequences _after_ continual learning of **heterogeneous** task sequences (left section) and **homogeneous** task sequences (right section). We report the **main** performance of each dataset, i.e., MF1 (Macro-F1) in ASC and CCD, F1 in NER, R1 in SUM and BLEU in DRG and their average in the **Average Main** column (the second from the right in each section). We also report the average forgetting rate (FR) on the same metrics in the **Average FR** column to evaluate the forgetting prevention.13 LAMOL is not included in heterogeneous tasks as it is not obvious how to adapt LAMOL for NER. Footnote 12: We use Macro-F1 and accuracy for the sequence-level classification tasks (ASC and CCD), where Macro-F1 (MF1) is the primary metric because highly imbalanced classes in ASC introduce biases in accuracy. We use Rouge score (R1, R2 and RL) for SUM, BLEU score for DRG and F1 for NER. **Heterogeneous Tasks.** The left section in Table 1 shows that TSS outperforms all CL baselines for the mixed sequence of 40 heterogeneous tasks, with similar tasks (from ASC, NER) and dissimilar tasks (from SUM, CCD and DRG). Note that although we have only one task sequence, we report the results separately for each dataset. Other observations are: (1). **TSS is more effective than CL baselines that only deal with CF** (EWC, HAT, SupSup). In the two similar datasets (ASC and NER), TSS clearly wins because regularization-based EWC sacrifices accuracy for overcoming CF and parameter-isolation based SupSup prevents any possible KT. In the 3 dissimilar datasets which have little shared knowledge to transfer, TSS is similar to the baseline that can prevent CF (like SupSup).
This confirms TSS can achieve both CF prevention and KT in the challenging heterogeneous setting.14 Footnote 14: Other baselines perform poorly: HAT has little KT in classification tasks, which makes ASC poorer. It has forgetting in generation tasks as it cannot isolate parameters in the shared LM head. (2). **TSS is more effective than CL baselines that deal with both CF and KT** (CAT, CTR, DAS, CUBER, and L2P). Among these systems, CAT performs the worst due to its inaccurate task similarity detection and its difficulty to deal with CF. L2P performs better, but still has a large gap to TSS due to the poor prompt selection (we can see it is \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline & \multicolumn{3}{c|}{**Similar**} & \multicolumn{3}{c|}{**Dissimilar**} & \multicolumn{3}{c|}{**Similar**} & \multicolumn{3}{c|}{**Dissimilar**} & \multicolumn{3}{c}{**Average**} & \multicolumn{3}{c|}{**Average**} & \multicolumn{3}{c|}{**Similar**} & \multicolumn{3}{c}{**Dissimilar**} \\ Dataset & **ASC** & **NER** & **SUM** & **CCD** & **DRG** & **ASC** & **NER** & **SUM** & **CCD** & **DRG** & **ACC** & **DRG** & **ACC** \\ Model & MF1 & F1 & R1 & MF1 & BLEU & Main & FR & MF1 & F1 & R1 & MF1 & BLEU & Main & FR \\ \hline \multicolumn{11}{c}{**Non-continual learning**} \\ \hline MTL & 92.28 & 63.33 & 39.39 & 90.57 & 25.29 & 62.17 & — & 92.28 & 63.33 & 39.39 & 90.57 & 25.29 & 62.17 & — \\ MTL (Adapter) & 92.17 & 60.61 & 38.84 & 91.09 & 24.50 & 61.44 & — & 92.28 & 63.33 & 39.39 & 90.57 & 25.29 & 62.17 & — \\ \hline ONE & 85.55 & 59.33 & 39.07 & 91.07 & 24.14 & 59.83 & — & 85.55 & 59.33 & 39.97 & 91.07 & 24.14 & 59.83 & — \\ ONE (Adapter) & 83.95 & 56.69 & 38.90 & 90.78 & 23.42 & 58.75 & — & 83.95 & 56.69 & 38.90 & 90.78 & 23.42 & 58.75 & — \\ ONE (Prompt) & 76.46 & 45.90 & 30.67 & 86.23 & 12.67 & 50.39 & — & 76.46 & 45.90 & 30.67 & 86.23 & 12.67 & 50.39 & — \\ \hline \multicolumn{11}{c}{**Contual learning of heterogeneous tasks**} \\ \hline NCL & 88.05 & 34.56 & 22.86 & 81.86 & 8.68 & 47.20 & 13.94 & 89.25 & 49.19 & 32.68 & 85.08 & 22.31 & 55.70 & 6.42 \\ EWC & 88.73 & 28.75 & 18.65 & 83.24 & 7.99 & 45.47 & 13.43 & 88.36 & 51.76 & 32.64 & 87.27 & 18.30 & 55.66 & 5.53 \\ HAT & 87.45 & 50.16 & 35.21 & 89.85 & 20.98 & 56.73 & 0.97 & 89.33 & 52.31 & 37.11 & 90.21 & 21.47 & 58.08 & 0.49 \\ DAS & 90.62 & 43.38 & 25.45 & 82.66 & 15.21 & 51.47 & 10.49 & 90.94 & 42.12 & 31.31 & 87.97 & 22.25 & 54.92 & 9.26 \\ SupSup & 85.83 & 59.38 & 32.83 & 90.66 & 24.71 & 59.67 & 0.00 & 85.83 & 58.93 & 38.23 & 90.66 & 24.71 & 59.67 & 0.00 \\ LAMOL & — & — & — & — & — & — & — & 84.62 & — & 108.8 & 54.44 & 19.96 & — & — \\ CAT & 45.79 & 35.24 & 17.44 & 36.62 & 10.21 & 29.06 & 30.27 & 84.31 & 50.73 & 37.24 & 90.82 & 21.72 & 56.96 & 1.37 \\ CTR & 89.37 & 50.16 & 33.98 & 89.96 & 17.63 & 56.22 & 0.95 & 88.86 & 51.85 & 37.34 & 90.54 & 21.39 & 58.00 & 0.65 \\ UUBER & 90.85 & 46.67 & 30.94 & 81.54 & 15.68 & 53.08 & 7.27 & 91.25 & 47.19 & 30.44 & 89.19 & 21.66 & 55.95 & 5.82 \\ L2P & 78.10 & 39.03 & 28.34 & 72.35 & 45.47 & 44.64 & 74.81 & 44.22 & 26.65 & 83.55 & 8.52 & 47.91 & 2.58 \\ \hline TSS & **90.61** & **62.13** & **38.29** & **90.70** & **24.56** & **61.26** & **0.00** & **91.28** & **63.96** & **38.39** & **90.89** & **24.75** & **61.85** & **0.00** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance for **heterogeneous** tasks (a sequence of 40 tasks from all 5 datasets) and **homogeneous** tasks (each sequence consisting of all tasks from one dataset), averaged over 5 random sequences (the 
**standard deviations** are given in Appendix H due to space limits). “—” means not applicable. We bold the best results within CL baselines. The smaller forgetting rate (FR) means the system can deal with forgetting better. Note, the results for **non-continual learning** are the same in both settings. **Execution time** and **memory need** are given in Appendix J. even poorer than ONE (prompt), indicating that its selection causes CF). DAS performs well on ASC, but poorly on other datasets, indicating it cannot effectively prevent CF. CUBER has strong performance on similar tasks (e.g., ASC) but performs poorly on dissimilar tasks. CTR is the best among the three, but it is still worse than TSS due to its inaccurate instance-level similarity detection. **Knowledge transfer (KT) and CF prevention.** To validate TSS's effectiveness in dealing with CF with a sequence of dissimilar tasks, we can first compare TSS with ONE. We can see TSS achieves similar results to ONE in the three **dissimilar tasks** datasets, indicating effective CF prevention. Additionally, we can evaluate the continual learning process to see whether forgetting occurs during the training of each system. To this end, we compute **Forgetting Rate15** in Table 1 (the right-most column in the CL of heterogeneous tasks section). Clearly, TSS has a 0 forgetting. SupSup also has 0 forgetting because it also tries to find a sub-network for each task. While this is certainly good for CF prevention but makes KT impossible. We can see other baselines all suffer from forgetting (positive forgetting rate) on average. Footnote 15: The forgetting rate (Liu et al., 2020) is defined as \(\text{FR}=\frac{1}{\tau-1}\sum_{i=1}^{t-1}A_{i,i}-A_{t,i}\), where \(A_{i,i}\) is the test performance of each task when it was first learned and \(A_{t,i}\) is the performance of task \(i\) after training the last task \(t\). We average over all tasks except the last one as the last task obviously has no forgetting. The detailed forgetting rate for each dataset is given in Appendix G. (Mehta et al., 2021) defined a different forgetting rate. Appendix F will argue that ours is more effective. Regarding **KT**, we can again use ONE as the control and see whether there is effective KT (learned tasks help the new task). Clearly, we can see TSS outperforms ONE in two datasets (ASC and NER) with **similar tasks**, indicating effective KT. Thanks to the sub-network discovery and soft-masking, we can also see TSS is very similar to MTL/Comb, which again shows the effectiveness of TSS. **Ablation study.** We want to know whether (1) the sub-network discovery and (2) the soft-masking mechanism are effective. For (1), we conduct the ablation experiment **TSS (w/o SD)**, where we remove the sub-network discovery and directly train the parameters of adapters with soft-masking. For (2), we conduct the experiment **TSS (w/o SM)** and **TSS (w/o SM; Naive)**. They both do not use the soft-masking mechanism. The former initializes the popup scores with Kaiming Initialization for all tasks while the latter initializes the scores with only those of the last task. The right section (heterogeneous) in Table 2 shows the ablation results and the corresponding forgetting rates. The full TSS gives the best average result, indicating that every component helps. 
We further observe: (1) TSS's gain is partially from the sub-network discovery as TSS (w/o SD) is poorer on average, particularly for those datasets having little shared knowledge; (2) soft-masking helps as TSS (w/o SM) gives a worse performance; (3) soft-masking can help make the initialization for the new task better as TSS (w/o SM; Naive) is clearly worse than TSS. Note that both TSS (w/o SM) and TSS (w/o SM; Naive) have 0 forgetting rate, due to the effectiveness of sub-network discovery. **Homogeneous Tasks.** For continual learning of homogeneous tasks, we conducted 5 experiments, one for each for the 5 datasets, i.e., each task sequence contains only the same type of tasks (see Sec. 4.1). The right section of Table 1 shows the average results for each system. The observations are similar to those for heterogeneous tasks, i.e., TSS outperforms all CL baselines. We further observe that TSS has larger improvements on similar tasks (ASC and NER), comparing to the improvements in the mixed sequence. This is expected. The right section of Table 2 shows the ablation study. The observations are also similar to those for heterogeneous tasks, i.e., every component helps. We also observe that the naive transfer (TSS (w/o SM; Naive)) works similarly to TSS. This indicates that the proposed initialization is more important \begin{table} \begin{tabular}{c||c|c|c|c|c|c||c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Dataset Model} & \multicolumn{2}{c|}{**Similar**} & \multicolumn{3}{c|}{**Disimilar**} & \multicolumn{3}{c|}{**Similar**} & \multicolumn{3}{c|}{**Dissimilar**} & \multicolumn{3}{c|}{**Dissimilar**} & \multicolumn{3}{c|}{**Average**} & \multicolumn{3}{c|}{**Average**} \\ & **ASC** & **NER** & **SUM** & **CCD** & **DRG** & \multicolumn{1}{c|}{**ASC**} & \multicolumn{1}{c|}{**NER**} & \multicolumn{1}{c|}{**SUM**} & \multicolumn{1}{c|}{**CCD**} & \multicolumn{1}{c|}{**DRG**} & \multicolumn{1}{c|}{**FR**} & \multicolumn{1}{c|}{**MFI**} & \multicolumn{1}{c|}{**BELU**} & \multicolumn{1}{c|}{**Main**} & \multicolumn{1}{c|}{**FR**} \\ \hline \multicolumn{11}{c}{**Non-continual learning**} \\ \hline ONE & 85.55 & 59.33 & 39.07 & 91.07 & 24.14 & 59.83 & — & 85.55 & 59.33 & 39.07 & 91.07 & 24.14 & 59.83 & — \\ \hline \multicolumn{11}{c}{**Contential learning of homogeneous tasks**} & \multicolumn{1}{c}{**Contential learning of homogeneous tasks**} \\ \hline TSS (w/o SD) & 86.54 & 37.03 & 28.04 & 78.29 & 14.85 & 48.95 & 9.02 & 91.06 & 40.47 & 30.42 & 28.83 & 22.77 & 54.61 & 9.61 \\ TSS (w/o SM) & 85.83 & 58.93 & 38.23 & 90.66 & 24.71 & 59.67 & 0.00 & 85.83 & 58.93 & 38.23 & 90.66 & 24.71 & 59.67 & 0.00 \\ TSS (w/o SM; Naive) & 88.89 & 34.79 & 36.11 & 89.99 & 24.19 & 54.80 & 0.00 & 90.38 & 62.42 & 38.08 & 90.63 & 24.59 & 61.22 & 0.00 \\ \hline TSS & **90.61** & **62.13** & **38.29** & **90.70** & **24.56** & **61.26** & **0.00** & **91.28** & **63.56** & **38.39** & **90.89** & **24.75** & **61.85** & **0.00** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation experiment results for **heterogeneous** and **homogeneous** tasks - averages over 5 random sequences (the standard deviations are reported in Appendix H due to space limits). in the heterogeneous tasks setting because two adjacent tasks can be very dissimilar (e.g., the current task is NER but its previous/last task may be summarization) and causes negative transfer. In summary, we can say that TSS works well for both the homogeneous tasks and the challenging heterogeneous tasks scenarios. 
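
For reference, the forgetting-rate metric of footnote 15 used throughout Tables 1 and 2 can be computed from the matrix of per-task test scores as in the small sketch below; the accuracy values in the toy example are made up purely for illustration.

```python
import numpy as np


def forgetting_rate(acc):
    """FR = 1/(t-1) * sum_i (A[i, i] - A[t, i]) over tasks i = 1..t-1 (footnote 15).

    acc[i, j] is the test performance on task j right after training task i
    (0-indexed; only the lower triangle is needed).
    """
    t = acc.shape[0]
    just_learned = np.diag(acc)[: t - 1]   # A_{i,i}: performance when task i was first learned
    after_last = acc[t - 1, : t - 1]       # A_{t,i}: performance after training the last task
    return float(np.mean(just_learned - after_last))


# Toy example with 3 tasks: tasks 1 and 2 degrade slightly after later training.
A = np.array([[0.90, 0.00, 0.00],
              [0.85, 0.88, 0.00],
              [0.80, 0.86, 0.91]])
print(forgetting_rate(A))  # ((0.90 - 0.80) + (0.88 - 0.86)) / 2 = 0.06
```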
## 5 Conclusion This paper studied _task incremental learning_ (TIL) on a range of NLP problems. We first presented the three desired capabilities: no forgetting, knowledge transfer and learning a mixed sequence of similar and dissimilar tasks. To our knowledge, only one system (CAT) aims to achieve all these objectives, but it suffers from forgetting. We then proposed a novel method, TSS, to achieve all three. Experimental results showed that TSS achieves KT and no forgetting even for the challenging mixed (or heterogeneous) task sequence setting. ## 6 Limitation While effective, TSS has some limitations. First, although TSS shows strong performance in forward transfer (old knowledge helps the new task), how to enable backward transfer (new knowledge improves the trained tasks) is left to future work. Second, while empirically soft-masking the gradient does not harm task learning, there is no theoretical guarantee that a task trained with soft-masking learns as well as one trained without it. It would be interesting to investigate whether this holds in all scenarios. ## 7 Ethics Statement This paper proposes a novel task-incremental learning model that can achieve both forgetting prevention and knowledge transfer for a mixed sequence of heterogeneous tasks. We do not foresee any negative consequences for any individual as a result of this research. The consequence of the failure of the system is that the system makes some incorrect predictions, which, we believe, do not have any ethical implications. All the data and pre-trained models are publicly available and our method does not leverage biases in the data. ## Acknowledgements The work of Bing Liu was supported in part by four National Science Foundation (NSF) grants (1910424, 1838770, 2225427, and 2229876) and a research contract from KDDI.
2301.10626
Predicting chaotic statistics with unstable invariant tori
It has recently been speculated that statistical properties of chaos may be captured by weighted sums over unstable invariant tori embedded in the chaotic attractor of hyperchaotic dissipative systems; analogous to sums over periodic orbits formalized within periodic orbit theory. Using a novel numerical method for converging unstable invariant 2-tori in a chaotic PDE, we identify many quasiperiodic, unstable, invariant 2-torus solutions of a modified Kuramoto-Sivashinsky equation exhibiting hyperchaotic dynamics with two positive Lyapunov exponents. The set of tori covers significant parts of the chaotic attractor and weighted averages of the properties of the tori -- with weights computed based on their respective stability eigenvalues -- approximate statistics for the chaotic dynamics. These results are a step towards including higher-dimensional invariant sets in a generalized periodic orbit theory for hyperchaotic systems including spatio-temporally chaotic PDEs.
Jeremy P. Parker, Omid Ashtari, Tobias M. Schneider
2023-01-25T15:08:58Z
http://arxiv.org/abs/2301.10626v1
# Predicting chaotic statistics with unstable invariant tori ###### Abstract It has recently been speculated that statistical properties of chaos may be captured by weighted sums over unstable invariant tori embedded in the chaotic attractor of hyperchaotic dissipative systems; analogous to sums over periodic orbits formalized within periodic orbit theory. Using a novel numerical method for converging unstable invariant 2-tori in a chaotic PDE, we identify many quasiperiodic, unstable, invariant 2-torus solutions of a modified Kuramoto-Sivashinsky equation exhibiting hyperchaotic dynamics with two positive Lyapunov exponents. The set of tori covers significant parts of the chaotic attractor and weighted averages of the properties of the tori - with weights computed based on their respective stability eigenvalues - approximate statistics for the chaotic dynamics. These results are a step towards including higher-dimensional invariant sets in a generalized periodic orbit theory for hyperchaotic systems including spatio-temporally chaotic PDEs. **Periodic orbit theory formalizes the idea that, if one can identify a large number of non-chaotic unstable periodic orbits embedded within a chaotic attractor, the properties of each of these non-chaotic time-invariant solutions can be summed with suitable weights to predict statistical properties of the chaotic dynamics itself. However, it has been conjectured that in so-called 'hyperchaotic' systems, such as turbulent fluid flows and other spatio-temporally chaotic problems, it may be advantageous to consider the higher-dimensional invariant structures called invariant tori, instead of unstable periodic orbits. Considering a particular hyperchaotic partial differential equation, we here show that one can indeed successfully identify many unstable invariant tori and describe chaotic statistics as sums over the tori.** ## I Introduction Chaotic dynamics arise naturally from simple interactions in many physical systems, from fluid dynamics to electrical circuits and nonlinear optics. Studying the chaotic dynamics in terms of unstable non-chaotic _invariant_ solutions to the underlying evolution equations, which are embedded within the stable chaotic attractor, provides key insights into the observed physics. In the absence of special symmetries, two types of unstable invariant solutions are generally studied: equilibria, zero-dimensional unstable fixed points in the state space of the system; and periodic orbits, non-chaotic time-periodic solutions corresponding to one-dimensional closed loops in state space. Though equilibria are usually rare, in a general continuous dynamical system, periodic orbits are expected to be dense within the attractor [1]. Trajectories within chaotic attractors closely shadow unstable periodic orbits, and consequently, periodic orbits are often described as the 'backbone' of chaos. For systems with dense periodic orbits, the construction of dynamical zeta functions [2] allows statistical quantities to be evaluated as sums over the collection of all periodic orbits [3; 4; 5]. However, for dissipative systems the 'closing lemmas' [6] which imply density of periodic orbits have not been extended to include the very smooth dynamical systems which arise from physical laws. Indeed, the formal lemmas certainly do not apply in the non-hyperbolic case where a system has multiple zero Lyapunov exponents in addition to a positive one, for example because of a continuous symmetry. 
In this case, instead of periodic orbits we find _relative_ periodic orbits [7], which are in fact a special case of invariant 2-tori. Invariant 2-tori are, after equilibria and periodic orbits, the next simplest invariant solutions in continuous dynamical systems. They represent quasiperiodic behaviour, in which two different fundamental frequencies interact. In a previous paper [8], it was shown that for a particular dissipative system of ordinary differential equations which exhibits hyperchaos with three positive Lyapunov exponents, invariant 2-tori are generically found embedded within the attractor. This led to the conjecture that unstable invariant tori (UITs), as well as unstable periodic orbits (UPOs), should be taken into account to describe dissipative hyperchaotic systems, and that UITs could be used to predict statistics of interest in such systems. One important class of hyperchaotic dissipative systems is the spatio-temporally chaotic partial differential equations (PDEs) which arise in fluid dynamics and related fields. Generic invariant tori could allow the study of key phenomenology within these systems, for which periodic orbits are rare or at least computationally problematic to detect. For example, in wall-bounded turbulence, it has proven difficult to find periodic orbit solutions which capture the interaction between different processes at different length-scales [9]. In the absence of phase locking, we expect the different temporal frequencies associated with these length-scales to lead to the dynamics manifesting as invariant tori rather than periodic orbits. It is conjectured that as the complexity of spatio-temporal chaos increases, which is associated with a greater number of positive Lyapunov exponents, the likelihood of finding and relative importance of UITs increases. However, a method to find such tori generically remains elusive. This paper aims at advancing the methodology for generically finding and exploiting UITs in the description of chaos. We consider the PDE system of a forced generalised Kuramoto-Sivashinsky equation (gKSE). Here, we can find invariant tori through their connection to relative periodic orbits in the unforced case. We show that UITs are plentiful and can be summed over to predict various average quantities of the chaotic dynamics. The paper is laid out as follows: in section II we present both the unforced and forced gKSE, and the Lyapunov exponents associated with our chosen parameters; in section III we present and apply methods to find UITs, which are used in section IV to predict statistics for the system; and the results are discussed in section V. ## II The forced generalised KSE The one-dimensional generalised Kuramoto-Sivashinsky equation (gKSE) [10; 11; 12; 13] for a real-valued scalar field \(u(x,t)\) defined on a periodic spatial domain of size \(L\) can be written as \[\partial_{t}u+u\partial_{x}u+\partial_{x}^{2}u+\beta\partial_{x}^{3}u+\partial _{x}^{4}u=0. \tag{1}\] For \(\beta=0\), the gKSE reduces to the classic Kuramoto-Sivashinsky equation (KSE), which has many applications in physics [14; 15; 16]. A non-zero value of \(\beta\) acts to break the discrete left-right antisymmetry of the KSE, but for \(\beta=0.01\), the observed dynamics is not altered significantly. The significance of breaking the discrete symmetry will be discussed in section II.1. The complexity of the dynamics in general increases as the domain size \(L\) is increased. We here consider \(L=22\), for which the dynamics are chaotic.
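
For concreteness, a straightforward pseudo-spectral evaluation of the right-hand side of (1) can be sketched as below. This is only an illustration of the equation being solved: it omits the 2/3 dealiasing and the exponential time-differencing treatment of the stiff linear terms that are actually used (described below), and the function name and defaults are ours.

```python
import numpy as np


def gkse_rhs(u, L=22.0, beta=0.01):
    """du/dt for Eq. (1): -(u u_x + u_xx + beta*u_xxx + u_xxxx), evaluated spectrally.

    u is a real array of samples on an evenly spaced, L-periodic grid.
    """
    n = u.size
    k = 2.0 * np.pi / L * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))            # u_x
    # u_xx + beta*u_xxx + u_xxxx assembled in Fourier space
    lin_hat = ((1j * k) ** 2 + beta * (1j * k) ** 3 + (1j * k) ** 4) * u_hat
    return -(u * ux + np.real(np.fft.ifft(lin_hat)))
```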
Typical timeseries of the classic KSE and the gKSE for the considered values of the control parameters are shown in figure 1. The mean flow \(\int_{0}^{L}u\,\mathrm{d}x\) is a conserved quantity of (1), and without loss of generality can be taken to be zero. Equation (1) is invariant under the continuous family of transformations \(u(x+l,t)\mapsto u(x,t)\), parameterised by \(l\in\mathbb{R}\). This has the consequence that periodic orbits are unlikely in the chaotic attractor, and instead _relative_ periodic orbits (RPOs), which have a non-zero phase velocity, are readily found. To break the continuous symmetry of (1) in a controlled way, we add a forcing term and instead considered the forced gKSE, \[\partial_{t}u+u\partial_{x}u+\partial_{x}^{2}u+\beta\partial_{x}^{3}u+ \partial_{x}^{4}u=\epsilon\sin{(2\pi x/L)}, \tag{2}\] where \(\epsilon\) is a small parameter, which controls the breaking of the continuous shift symmetry. Beyond the control of the continuous symmetry, there is no physical motivation for studying this augmented system; the choice will become clear below. We will use \(\epsilon\) on the order of \(10^{-3}\), which is sufficiently large that the system is detectably different from the unforced equation, without being so large that the qualitative dynamics significantly change. See figure 1 for a visual comparison of the dynamics with and without forcing. Figure 2 shows a projection of the chaotic attractors at the discussed parameter values yielding the classical KSE, the unforced gKSE and the forced gKSE. The choice of projection is important here - since \(\epsilon\neq 0\) breaks the continuous phase-shift symmetry we choose a projection which factors out this phase shift, namely the absolute value of the complex Fourier coefficients. We time-march these PDEs using an exponential time-differencing fourth-order Runge-Kutta scheme, following Kassam and Trefethen [17]. This is coded in Julia so that we may find Jacobians through automatic differentiation using the package Zygote [18]. Some additional calculations were performed using an identical algorithm in MATLAB. Throughout, we discretise the domain with 24 points, or \(N=16\) Fourier modes after 2/3 dealiasing, which is sufficient for this relatively low value of \(L\). From (2) we define the instantaneous energy production \[P:=\int_{0}^{L}\left[\left(\partial_{x}u\right)^{2}+\epsilon u\sin\left(2\pi x /L\right)\right]\mathrm{d}x, \tag{3}\] and dissipation \[D:=\int_{0}^{L}\left(\partial_{x}^{2}u\right)^{2}\mathrm{d}x, \tag{4}\] which we will use as key statistics of the flow. The long-time averages of these should be equal for a statistically stationary trajectory, and they will be equal when averaged over any invariant solution. ### Lyapunov exponents The average rate of growth or decay of perturbations to a trajectory within a chaotic attractor is measured by the Lyapunov exponents. The presence of at least one positive Lyapunov exponent is a necessary and sufficient condition for an attractor to be chaotic. A chaotic attractor is called hyperchaotic if its Lyapunov spectrum contains at least two positive exponents. We compute the leading Lyapunov exponents using algorithm 1, which is inspired by an approach initially proposed by Benettin _et al._[19]. 
In this algorithm, the state \(u\) and the orthonormal basis \(\mathbf{Q}\) are mapped to \(f^{\tau}(u)\) and \(\mathbb{J}_{u}^{\tau}\mathbf{Q}\), respectively, where \(f^{\tau}\) is the time evolution operator for time \(\tau\) and \(\mathbb{J}_{u}^{\tau}=\nabla_{u}f^{\tau}(u)\) is the Jacobian of the flow. The deformed basis is reorthonormalized via QR decomposition, which at the same time allows us to extract the finite-time evaluation of the Lyapunov exponents \(\chi_{i}\) from the diagonal elements of the right triangular matrix [20]. This process is repeated for \(n\gg 1\) times, and the Lyapunov exponents are averaged. Prior to this loop, the dynamics is integrated for a sufficiently long time \(\tau_{0}\) to ensure that the trajectory is confined to the chaotic attractor. ``` Input : Forced gKSE parameters \(L\), \(\beta\) and \(\epsilon\) Initial condition IC (\(N\)-dim. state) Transition time \(\tau_{0}\) Number of Lyapunov exponents \(p\) Reorthonormalization time \(\tau\) Number of reorthonormalizations \(n\) Output :\(p\) leading Lyapunov exponents \(\mathbf{X}=[\chi_{1}\;\dots\;\chi_{p}]^{\top}\) \(u\gets f^{\tau_{0}}(\mathrm{IC})\); // time marching from IC \(\tilde{\mathbf{Q}}\leftarrow\left[\mathbf{I}_{p}\mid\mathbf{0}_{p\times(N-p)} \right]^{\top}\); \(\tilde{\mathbf{Q}}_{:,1}\leftarrow\partial_{t}u\); if\(\epsilon=0\)then \(\tilde{\mathbf{Q}}_{:,2}\leftarrow\partial_{x}u\); end QR decompose \(\tilde{\mathbf{Q}}=\mathbf{QR}\); \(\mathbf{X}\leftarrow\mathbf{0}_{p\times 1}\); for\(j=1\) to \(n\)do \(\mathbf{W}\leftarrow\mathbb{J}_{u}^{\tau}\mathbf{Q}\) ; \(u\gets f^{\tau}(u)\); \(\mathbf{W}_{:,1}\leftarrow\partial_{t}u\); if\(\epsilon=0\)then \(\mathbf{W}_{:,2}\leftarrow\partial_{x}u\); end QR decompose \(\mathbf{W}=\mathbf{QR}\); \(\mathbf{X}\leftarrow\mathbf{X}+\ln\left(\mathrm{diag}(\mathbf{R})\right)/\tau\); end for return\(\mathbf{X}/n\) ``` **Algorithm 1**Computation of the leading Lyapunov exponents. The four leading Lyapunov exponents of the chaotic attractor over \(2^{-7}<10^{3}\epsilon<2^{4}\) (with \(\beta=0.01\) and \(L=22\) being fixed) are shown in Figure 3. The parameters \(\tau_{0}=2.5\times 10^{3}\), \(\tau=2.0\) and \(n=7.5\times 10^{5}\) are fixed in the calculations. Outside the plotted range of \(\epsilon\), the attractor behaves as follows: for large values of \(\epsilon\), the global attractor is a stable fixed point; as \(\epsilon\) decreases, the system becomes chaotic via the Ruelle-Takens-Newhouse route to chaos: at \(\epsilon\approx 0.1202\) the fixed point loses its stability and the attractor becomes a limit cycle; at \(\epsilon\approx 0.1021\) the attractor becomes a stable torus; and at \(\epsilon\approx 0.0998\) the attractor turns chaotic and remains chaotic until \(\epsilon\approx 0.0819\). Below this value the attractor consists of (possibly coexisting) stable periodic orbits or stable tori. At \(\epsilon\approx 0.0190\) the attractor again becomes chaotic and remains so for all smaller values of \(\epsilon\). Due to the presence of an additional zero Lyapunov exponent associated with the continuous translational symmetry in the case of unforced \(\epsilon=0\) system, by definition, the chaotic attractor is non-hyperbolic, and we therefore do not expect to find many periodic orbits, but rather relative periodic orbits (RPOs). 
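
For illustration, the reorthonormalization loop of algorithm 1 can be condensed into a few lines as below. This sketch is not the implementation used here: it approximates the Jacobian action \(\mathbb{J}_{u}^{\tau}\mathbf{Q}\) by finite differences rather than automatic differentiation, and it omits the replacement of the leading basis vectors by the known neutral directions \(\partial_{t}u\) (and \(\partial_{x}u\) when \(\epsilon=0\)); `flow(u, tau)` is an assumed generic time-stepper.

```python
import numpy as np


def lyapunov_spectrum(flow, u0, p=4, tau=2.0, n=1000, eps=1e-7):
    """Estimate the p leading Lyapunov exponents by repeated QR reorthonormalization.

    flow(u, tau) advances the state vector u by time tau; the Jacobian action
    J q is approximated here by finite differences of the flow.
    """
    u = u0.copy()
    Q = np.eye(u.size)[:, :p]                 # initial orthonormal basis
    chi = np.zeros(p)
    for _ in range(n):
        u_next = flow(u, tau)
        W = np.empty_like(Q)
        for j in range(p):                    # deform the basis: W = J_u^tau Q
            W[:, j] = (flow(u + eps * Q[:, j], tau) - u_next) / eps
        Q, R = np.linalg.qr(W)                # reorthonormalize
        chi += np.log(np.abs(np.diag(R))) / tau
        u = u_next
    return chi / n
```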
For \(\beta=0\), the combination of the continuous and discrete symmetries leads to a dense class of periodic orbits called _preperiodic_ orbits by Cvitanovic, Davidchack, and Siminos [21], but these are not present when \(\beta\neq 0\). As the continuous symmetry is broken by \(\epsilon>0\), the remaining RPOs should generically become invariant 2-tori, as discussed in section III.2. Intuitively, for 2-tori to be embedded within a hyperbolic chaotic attractor, we require more than one positive Lyapunov exponent - with only one, the chaotic attractor locally looks like a very thin sheet which cannot geometrically include an invariant 2-dimensional non-chaotic manifold. Indeed, we see two positive Lyapunov exponents for \(\epsilon>0\), so this is consistent. Note that the Kaplan-Yorke dimension \(D_{KY}\) of the attractor slightly _decreases_ with increasing \(\epsilon\), while the first and fourth Lyapunov exponents become more negative (see Table 1). In all cases, \(D_{KY}\approx 4.2\) - the system we are studying exhibits low-dimensional chaos, in contrast with the spatio-temporal chaos seen at higher \(L\) in the KSE, for example. Figure 1: Typical chaotic timeseries at \(L=22\) of length \(t=400\). Top: the classic KSE \(\beta=0\), \(\epsilon=0\). Middle: The gKSE with \(\beta=0.01\) but \(\epsilon=0\). Bottom: Our forced gKSE with \(\beta=0.01\) and \(\epsilon=0.002\). The behaviour appears very similar, despite the fact that the first system has two symmetries which the last does not. Figure 2: The chaotic attractor at \(L=22\), depicted as a timeseries of length \(t=10^{6}\). Left: the classic KSE \(\beta=0\), \(\epsilon=0\). Middle: The gKSE with \(\beta=0.01\) but \(\epsilon=0\). Right: Our forced gKSE with \(\beta=0.01\) and \(\epsilon=0.002\). The projection shows, in each case, the magnitude of the first, second and third Fourier coefficients on the \(x\), \(y\) and \(z\) axes respectively. Notice the subtle effect of nonzero \(\beta\) and \(\epsilon\) to ‘blur’ the attractor. ## III Unstable invariant tori ### Torus convergence algorithm Rather than discretising a loop on a Poincare section which slices the torus, as in Parker and Schneider (2010), the full surface of the 2-torus is parameterised by coordinates \((\rho,\sigma)\in[0,2\pi)\times[0,2\pi)\). The local dynamics on the torus is assumed to be a rotation with a fixed velocity \((R,S)\in\mathbb{R}^{2}\), so that \[\partial_{t}u=R\partial_{\rho}u+S\partial_{\sigma}u, \tag{5}\] which states that the flow of the dynamical system lies in the tangent space of the torus at that point, as shown in figure 4. For \(u(x,\rho,\sigma)\) to describe an invariant torus of the system (2), we therefore require that \[R\partial_{\rho}u+S\partial_{\sigma}u+u\partial_{x}u+\partial_{x}^{2}u+\beta \partial_{x}^{3}u+\partial_{x}^{4}u=\epsilon\sin\frac{2\pi x}{L}. \tag{6}\] The fact that \(R\) and \(S\) are independent of \(\rho\) and \(\sigma\) partially constrains the choice of parameterisation, which improves numerical convergence, but also assumes quasiperiodic dynamics. This is a strong assumption which is certainly not valid in Arnold tongues, regions of parameter space in which phase-locking implies the existence of periodic orbits on an invariant torus. In such regions, we may instead directly converge the periodic orbits (Dubrovich, 2010).
In practice, when the phase locking of a UIT is such that the periodic orbits on it are of very long period, it is indistinguishable numerically from a quasiperiodic invariant torus, and our algorithm will converge. \begin{table} \begin{tabular}{l l l l} \hline & \(\beta=0\) & \(\beta=0.01\) & \(\beta=0.01\) \\ & \(\epsilon=0\) & \(\epsilon=0\) & \(\epsilon=0.001\) \\ \hline \(\chi_{1}\) & 0.0484 & 0.0485 & 0.0476 \\ \(\chi_{2}\) & 0 & 0 & 0.0036 \\ \(\chi_{3}\) & 0 & 0 & 0 \\ \(\chi_{4}\) & -0.0028 & -0.0061 & -0.0098 \\ \(\chi_{5}\) & -0.1884 & -0.1805 & -0.1815 \\ \(\chi_{6}\) & -0.2562 & -0.2608 & -0.2600 \\ \(\chi_{7}\) & -0.2902 & -0.2918 & -0.2903 \\ \(\chi_{8}\) & -0.3102 & -0.3089 & -0.3083 \\ \hline \(D_{\rm KY}\) & 4.2425 & 4.2348 & 4.2279 \\ \hline \end{tabular} \end{table} Table 1: The eight most positive Lyapunov exponents of the chaotic attractor of the forced gKSE for domain length \(L=22\) and different parameter settings. Figure 3: The three most positive Lyapunov exponents of the forced gKSE with \(L=22\) and \(\beta=0.01\), as \(\epsilon\) is varied. The vertical line marks \(\epsilon=0.001\). Figure 4: The flow \(\partial_{t}u\) at any point on the surface of the 2-torus \(\mathcal{M}\) can be expressed as a linear combination of the tangent vectors along the coordinates \(\sigma\) and \(\rho\) which span the local tangent space \(\mathrm{T}\mathcal{M}\). See equation (5). We start with an initial guess which geometrically describes a 2-torus in state space, but not an invariant manifold. We then iteratively deform this torus until (6) is satisfied at every point on the surface. The torus is discretised with \(N=16\) Fourier modes in \(x\), as for (2), so the linear terms are entirely local differential operators in Fourier space. In \(\rho\) and \(\sigma\), we discretise on an evenly-spaced grid of \(M\times M=64\times 64\) points, and the derivatives are calculated with dense trigonometric differentiation matrices. This means that we can compute a sparse Jacobian for the left hand side of (6), of size \(NM^{2}\times(NM^{2}+2)\), where the two additional columns come from derivatives with respect to \(R\) and \(S\). The matrix consists of dense \(N\times N\) blocks on the main diagonal, corresponding to the nonlinear terms, and diagonal \(N\times N\) blocks elsewhere, corresponding to the derivatives with respect to \(\rho\) and \(\sigma\). This gives a total of \(M^{2}N^{2}+M^{4}N+M^{2}N\) nonzero entries. Following Cvitanovic, Davidchack, and Siminos [21], we can solve this system using a Levenberg-Marquardt algorithm. As a consequence, we do not need to introduce additional constraints for this underconstrained optimisation problem. At each iteration of Levenberg-Marquardt, the sparse linear system is solved using the conjugate-gradient-like LSMR [22]. Since this requires only sparse matrix multiplications, this can be performed on a GPU in Julia at great speed despite the size of the system. As a consequence of using the Levenberg-Marquardt algorithm, our method is able to converge from relatively poor initial guesses, since it is effectively gradient-descent when far from a solution, and so shares the convergence properties of recent improvements in methods for the computation of periodic orbits [23; 24; 25]. Figure 5 shows the torus-shaped but not invariant initial guess at \(\epsilon=0.002\) (generated by continuation, see section III.2), and the drastically different converged invariant solution. 
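To make the construction concrete, the residual of the invariance condition (6) can be evaluated with FFT-based derivatives on the \(N\times M\times M\) grid. The sketch below (Python/NumPy, written for clarity rather than speed, and not the Julia/GPU code used for the results) assumes a real array `U` of samples \(u(x_{i},\rho_{j},\sigma_{k})\); a generic sparse least-squares routine such as `scipy.optimize.least_squares` with `method='trf'` and `tr_solver='lsmr'` could stand in for the Levenberg-Marquardt/LSMR iteration described above.

```python
import numpy as np

def torus_residual(U, R, S, L=22.0, beta=0.01, eps=0.002):
    """Residual of the invariance condition (6) on a grid.

    U    : real array of shape (Nx, M, M), samples of u(x, rho, sigma)
    R, S : the two rotation rates on the torus
    Returns an array of the same shape; an invariant torus makes it (numerically) zero.
    """
    Nx, M, _ = U.shape
    x = np.linspace(0.0, L, Nx, endpoint=False)
    kx = 2j * np.pi * np.fft.fftfreq(Nx, d=L / Nx)   # spectral wavenumbers in x
    kr = 1j * np.fft.fftfreq(M, d=1.0 / M)           # integer wavenumbers in rho
    ks = 1j * np.fft.fftfreq(M, d=1.0 / M)           # integer wavenumbers in sigma

    def dx(V, order):
        # order-th x-derivative via the Fourier transform along axis 0
        return np.real(np.fft.ifft(kx[:, None, None] ** order * np.fft.fft(V, axis=0), axis=0))

    def drho(V):
        return np.real(np.fft.ifft(kr[None, :, None] * np.fft.fft(V, axis=1), axis=1))

    def dsig(V):
        return np.real(np.fft.ifft(ks[None, None, :] * np.fft.fft(V, axis=2), axis=2))

    forcing = eps * np.sin(2.0 * np.pi * x / L)[:, None, None]
    return (R * drho(U) + S * dsig(U) + U * dx(U, 1)
            + dx(U, 2) + beta * dx(U, 3) + dx(U, 4) - forcing)
```

Note that \(R\) and \(S\) enter the residual linearly through \(\partial_{\rho}u\) and \(\partial_{\sigma}u\), which is why their derivatives contribute only the two extra columns of the Jacobian mentioned above.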
Our method is also able to converge some UPOs which exist when strong phase-locking is present, as UPOs are a degenerate case of the solution given by (6) in which either \(\partial_{\rho}u\equiv 0\) or \(\partial_{\sigma}u\equiv 0\). Indeed, if they were both zero, the algorithm would have converged to a steady state solution, but this never occurred. ### Continuation of UITs from RPOs A relative periodic orbit is a particular case of an invariant torus, where one of the two dimensions of the invariant manifold exactly corresponds to a continuous symmetry of the system. In the presence of such a symmetry, RPOs are expected to be the generic structure in chaotic attractors, with pure periodic orbits rare special cases, for example related to bifurcations of solutions outside the chaos. The unforced system with \(\epsilon=0\) has a continuous symmetry, and we can find RPOs in it using the now-routine method of recurrent flow analysis, followed by a Newton-Krylov based shooting method [5]. Our implementation of recurrent flow analysis exactly matches that of Cvitanovic, Davidchack, and Siminos [21]. Given an RPO with period \(T\) and phase velocity \(c\) so that \(u(x+cT,t+T)=u(x,t)\) for all \(x\) and \(t\), we can satisfy the form of the previous subsection by defining \[u(x,\rho,\sigma)=u\left(x+\frac{L\sigma}{2\pi},\frac{T\rho}{2\pi}\right),R= \frac{2\pi}{T},S=-\frac{2\pi c}{L}. \tag{7}\] Therefore, we can take RPOs found via recurrent flow analysis and shooting and reconverge them using the algorithm described in section III.1. Afterwards, we continue the invariant solution to non-zero \(\epsilon\), at which point it ceases to be an RPO and is instead a generic 2-torus. It was found to be sufficient to use the RPO as Figure 5: Initial guess (blue), the partially converged torus after 5 iterations (green), and the fully converged UIT (red) for \(T_{2}\) at \(\epsilon=0.003\). The projection is as per figure 2. an initial condition for the algorithm at \(\epsilon=0.001\), and then use this converged UIT and the \(\epsilon=0\) RPO to linearly extrapolate an initial guess for \(\epsilon=0.002\). Several of the UITs were continued to \(\epsilon=0.005\) and beyond, but for the more geometrically complex tori this proved difficult and higher resolution discretisations of \(\rho\) and \(\sigma\) are likely to be required. Indeed, we only searched for RPOs for periods up to \(T=100\) for this reason. Figure 7 shows two different projections of a simple RPO at \(\epsilon=0\) which is continued to a UIT at \(\epsilon=0.01\). With a projection which 'quotients out' the continuous symmetry, the RPO appears as a simple loop but is clearly a full torus at non-zero \(\epsilon\). A more naive projection shows that the RPO is indeed topologically a torus at \(\epsilon=0\), but then the effect of \(\epsilon\) is obscured. Twelve distinct UITs were successfully converged at \(\epsilon=0.001\) from the continuation of 27 distinct RPOs, and eight of these were successfully continued to \(\epsilon=0.002\). A projection of all eight of these is shown in figure 8. In addition to these, a common travelling wave solution at \(\epsilon=0\) was continued to give a periodic orbit, but this was omitted from our calculations in the following sections, as were any UPOs resulting from phase-lockings of the UITs. ### Stability of UITs Algorithms for finding the stability properties of invariant tori are notoriously difficult to use [26; 27]. 
Here we propose a simple iterative method which accurately finds the leading Lyapunov exponents for a UIT, i.e. the real parts of the stability eigenvalues. It is sufficient to know these to calculate the weights for the statistical averages discussed in section IV. Our method does not give the imaginary parts of the eigenvalues, nor the linear manifolds associated with the eigenvectors. The algorithm is an extension of that discussed in section II.1 to calculate Lyapunov exponents for the attractor. In theory, if we start with a point exactly on an invariant solution, the Lyapunov exponents calculated using the given algorithm should give us exactly the Lyapunov exponent of the invariant solution. However, as the solutions of interest are all unstable, a trajectory started from a calculated point of the invariant solution will quickly drift away. Consequently, we augment the algorithm and in each iteration, we ensure that the trajectory starts from the correct point on the invariant manifold, assuming the dynamics are given locally by (5), as described in algorithm 2. ``` Input : Parameters \(L\), \(\beta\) and \(\epsilon\) Invariant torus \(u(x,\rho,\sigma),R,S\). Number of Lyapunov exponents \(p\) Reorthonormalization time \(\tau\) Number of reorthonormalizations \(n\) Output :\(p\) leading Lyapunov exponents \(\mathbf{X}=[\chi_{1}\ \dots\ \chi_{p}]^{\top}\) \(\overline{\mathbf{Q}}\leftarrow\left[\mathbf{I}_{p}\mid\mathbf{0}_{p\times(N- p)}\right]^{\top}\); QR decompose \(\tilde{\mathbf{Q}}=\mathbf{Q}\mathbf{R}\); \(\mathbf{X}\leftarrow\mathbf{0}_{p\times 1}\); for\(j=1\) to \(n\)do \(u_{0}\gets u(x,jR\tau,jS\tau)\); \(\mathbf{W}\leftarrow\overline{\mathbf{J}}_{u_{0}}^{\top}\mathbf{Q}\) ; QR decompose \(\mathbf{W}=\mathbf{Q}\mathbf{R}\); \(\mathbf{X}\leftarrow\mathbf{X}+\ln\left(\text{diag}(\mathbf{R})\right)/\tau\); end for return\(\mathbf{X}/n\) ``` **Algorithm 2**Computation of the leading Lyapunov exponents of an invariant torus. Compare with algorithm 1. All the UITs found had either two unstable eigenvalues (distinct values or repeated values, the latter suggesting a complex conjugate pair), or only one unstable eigenvalue. Any invariant solution embedded within the chaotic attractor can have at most two unstable eigenvalues, since this is the number of positive Lyapunov exponents for the attractor itself, as discussed in section II. Any UIT should have, by definition, two zero eigenvalues, and in each case we numerically found two Lyapunov exponents with magnitude very close to zero. The same method applied to UPOs gives just one (approximately) zero eigenvalue, and one or two positive ones. The full results are listed in table 2. ## IV Statistical predictions Let \(\Gamma(u)\) be some real scalar measurable quantity for which we wish to know the long-time average for the system (2), for example the energy \(\int_{0}^{L}\frac{1}{2}u^{2}\mathrm{d}x\). 
For a trajectory confined to the \(i\)th UIT \(u(x,\rho,\sigma)\) we may write, via Fourier trans forms, \[\Gamma(u(\cdot,\rho,\sigma))=\sum_{k_{1}=-\infty}^{\infty}\sum_{k_{2}=-\infty}^{ \infty}\Gamma^{(k_{1},k_{2})}e^{i(k_{1}\rho+k_{2}\sigma)}+\text{c.c.}\] and thus for a trajectory starting at \(u(x,0,0)\), so that \(\rho=Rt\) and \(\sigma=St\), the long time average of \(\Gamma(u)\) is given by \[\Gamma_{i} \equiv\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\Gamma(u)\mathrm{d}t\] \[=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\sum_{k_{1}=-\infty}^{ \infty}\sum_{k_{2}=-\infty}^{\infty}\Gamma^{(k_{1},k_{2})}e^{i(k_{1}R+k_{2}S) t}\mathrm{d}t\] \[\qquad+\text{c.c.}\] \[=\Gamma^{(0,0)}\] \[=\frac{1}{4\pi^{2}}\int_{0}^{2\pi}\mathrm{d}\rho\int_{0}^{2\pi} \mathrm{d}\sigma\Gamma(u(x,\rho,\sigma)),\] so long as the torus is quasiperiodic, i.e. \(R\) and \(S\) are incommensurate, so that \(k_{1}R+k_{2}S=0\) only if \(k_{1}=k_{2}=0\). In other words, the average value of a quantity \(\Gamma(u)\) over a trajectory on a quasiperiodic invariant torus is the average \(\Gamma_{i}\) over the torus surface itself, calculated in the obvious way. We then wish to calculate weights \(w_{i}\) so that the average value for the full chaotic attractor can be approximated as \[\hat{\Gamma}=\frac{\sum_{i}w_{i}\Gamma_{i}}{\sum_{i}w_{i}}, \tag{8}\] where crucially the \(w_{i}\) are independent of the particular quantity of interest \(\Gamma\). In certain cases, weights for sums of UPOs can be rigorously derived [28]. When UITs are considered, it is not clear whether it is possible to derive an expression for weights analytically, though in our case it would be possible to use the weights for the corresponding RPOs in the unforced \(\epsilon=0\) system. Even when a large number of periodic orbits are available, ad-hoc choices of weights have Figure 6: Timeseries at \(L=22\), \(\beta=0.01\) of length \(t=200\) for the solution \(T_{10}\). Top: relative periodic orbit at \(\epsilon=0\). Bottom: full torus at \(\epsilon=0.01\). Note the modulation of the periodic behaviour in the forced, asymmetric case. Figure 7: Projections of the solutions of figure 6. Left to right: (a) RPO at \(\epsilon=0\), showing the _real part_ of the first three Fourier coefficients, (b) UIT at \(\epsilon=0.01\), real part of first three Fourier coefficients, (c) RPO at \(\epsilon=0\), _absolute value_ of first three Fourier coefficients, (d) UIT at \(\epsilon=0.01\), absolute values. The first projection shows that the RPO is indeed geometrically a two-dimensional invariant torus, but disguises the fact that this is due to the continuous symmetry and does not easily allow us to distinguish the two values of \(\epsilon\). The second projection, which is that used in the other figures, removes the continuous symmetry and we can clearly see the development from RPO to UIT. been found to give comparably good or even better results than these derivable weights [5]. The easiest choice of weights to adapt to UITs is that of Zoldi and Greenside [29], which is to assign the \(i\)th invariant solution a weight \[w_{i}=\frac{1}{\sum_{k:\chi_{k}^{(i)}>0}\chi_{k}^{(i)}}, \tag{9}\] based on its positive Lyapunov exponents. In the full periodic orbit theory, equilibria are excluded and only periodic orbits considered. An intuitive interpretation of this is that the UPOs have a greater 'presence' in state space. Extending this, we exclude the UPOs we have found from our calculations, and consider only UITs. 
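Before specialising these weights to the tori at hand (below), we record the elementary bookkeeping behind (8) and (9) as a short sketch; the inputs are hypothetical arrays of torus-averaged quantities and per-torus Lyapunov exponents.

```python
import numpy as np

def zoldi_greenside_weight(lyapunov_exponents):
    """Weight (9): reciprocal of the sum of the positive Lyapunov exponents."""
    chi = np.asarray(lyapunov_exponents, dtype=float)
    return 1.0 / chi[chi > 0.0].sum()   # every UIT here has at least one positive exponent

def weighted_prediction(torus_averages, torus_exponents):
    """Weighted average (8) over the converged UITs.

    torus_averages  : Gamma_i, the surface average of Gamma over the i-th torus
    torus_exponents : per-torus arrays of Lyapunov exponents (used only via (9))
    """
    gam = np.asarray(torus_averages, dtype=float)
    w = np.array([zoldi_greenside_weight(chi) for chi in torus_exponents])
    return float(np.sum(w * gam) / np.sum(w))

def torus_average(Gamma_on_grid):
    """Surface average over the (rho, sigma) grid, i.e. the Gamma^(0,0) coefficient."""
    return float(np.mean(Gamma_on_grid))
```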
Since all our UITs have either one or two unstable directions, (9) reduces to \[w_{i}=\frac{1}{\chi_{1}^{(i)}+\chi_{2}^{(i)}},\] where \(\chi_{2}^{(i)}\) may be either zero or positive. Tables 2 and 3 list the UITs that were successfully converged at \(\epsilon=0\) and continued to \(\epsilon=0.001\) and \(\epsilon=0.002\) respectively. The calculated Lyapunov exponents \(\chi_{1}\) and \(\chi_{2}\) for each of them are used to compute weights, and these weights are used to give predictions for the chaotic attractor for the energy production/dissipation, the Lyapunov exponents, and the first three Fourier coefficients. The values computed directly from a very long chaotic trajectory are also given, and the agreement is generally good. ## V Discussion We have demonstrated that, for this contrived system at carefully chosen parameter values, UITs are common and readily found. In addition to visually giving a good representation of the strange attractor, as shown in figure 8, we are able to accurately predict statistics of the dynamics using a weighted average of the tori, with weights derived from the stabilities of the UITs. The predictions of energy production and dissipation given in table 2 are particularly good, despite the fact that the values for the individual UITs vary drastically. The predicted values of the Lyapunov exponents of the chaotic attractor are significantly poorer, which is to be expected as these are notoriously difficult to calculate and sensitive to which parts of the attractor are accounted for. It is self-evident that if UITs exist within an attractor, chaotic trajectories must pass very close to them and therefore, given sufficiently many UITs, the properties of the UITs can predict properties of the attractor. This system was chosen to give a method to find large numbers of UITs. Undoubtedly, UPOs still exist in this system as phase-lockings on the tori, but in general these are of long period. What remains an open question is whether UITs exist in large numbers in realistic systems for which it has been difficult to detect UPOs, such as fluid turbulence. If they do, they could capture physical processes that are currently poorly understood. However, their detection, as well as the computational power needed to converge them, is a significant barrier to extend these ideas to such systems. ## Acknowledgements This work was supported by the European Research Council (ERC) under the European Figure 8: The twelve converged tori at \(\beta=0.01\), \(\epsilon=0.001\), overlaid on the chaotic attractor (rendered as a cloud). The UITs lie within and capture the fractal structure of the attractor. Projection as per figure 2. Union's Horizon 2020 research and innovation programme (grant no. 865677). ## Author declarations The authors have no conflicts to disclose. ## Data availability The data that supports the findings of this study are available from the authors on request.
2310.06705
An overdetermined eigenvalue problem and the Critical Catenoid conjecture
We consider the eigenvalue problem $\Delta^{\mathbb{S}^2} \xi + 2 \xi=0 $ in $ \Omega $ and $\xi = 0 $ along $ \partial \Omega $, being $\Omega$ the complement of a disjoint and finite union of smooth and bounded simply connected regions in the two-sphere $\mathbb{S}^2$. Imposing that $|\nabla \xi|$ is locally constant along $\partial \Omega$ and that $\xi$ has infinitely many maximum points, we are able to classify positive solutions as the rotationally symmetric ones. As a consequence, we obtain a characterization of the critical catenoid as the only embedded free boundary minimal annulus in the unit ball whose support function has infinitely many critical points.
José M. Espinar, Diego A. Marín
2023-10-10T15:27:53Z
http://arxiv.org/abs/2310.06705v1
###### Abstract ###### Abstract We consider the eigenvalue problem \(\Delta^{\mathbb{S}^{2}}\xi+2\xi=0\) in \(\Omega\) and \(\xi=0\) along \(\partial\Omega\), being \(\Omega\) the complement of a disjoint and finite union of smooth and bounded simply connected regions in the two-sphere \(\mathbb{S}^{2}\). Imposing that \(|\nabla\xi|\) is locally constant along \(\partial\Omega\) and that \(\xi\) has infinitely many maximum points, we are able to classify positive solutions as the rotationally symmetric ones. As a consequence, we obtain a characterization of the critical catenoid as the only embedded free boundary minimal annulus in the unit ball whose support function has infinitely many critical points. **An overdetermined eigenvalue problem** **and the Critical Catenoid conjecture** Jose M. Espinar\({}^{\dagger}\) and Diego A. Marin\({}^{\ddagger}\) \({}^{\dagger}\) Department of Geometry and Topology and Institute of Mathematics (IMAG), University of Granada, 18071, Granada, Spain; e-mail: [email protected] \({}^{\ddagger}\) Department of Geometry and Topology and Institute of Mathematics (IMAG), University of Granada, 18071, Granada, Spain; e-mail: [email protected] **Key Words:** Overdetermined problems, P-functions, moving planes, critical catenoid. ## 1 Introduction Consider a Riemannian manifold \((M,g)\) and let \(\Omega\subset M\) be a bounded domain. Additionally, let \(f\) be a positive real-valued Lipschitz function. Consider the following elliptic equation: \[\left\{\begin{aligned} \Delta\xi+f(\xi)&=0& \text{in}&\Omega,\\ \xi&>0&\text{in}&\Omega,\\ \xi&=0&\text{along}&\partial\Omega. \end{aligned}\right. \tag{1.1}\] This problem is commonly known as a Dirichlet problem, and it typically has a unique solution when one exists. Moreover, we introduce the following additional condition to the problem: \[\frac{\partial\xi}{\partial\nu}=c\quad\text{along}\quad\partial\Omega,\quad c \in\mathbb{R}^{+}, \tag{1.2}\] where \(\nu\) denotes the inner normal vector to \(\partial\Omega\), it transforms into an overdetermined elliptic problem (OEP) and solutions only exist for specific domains \(\Omega\). Consequently, a solution to the combined problem (1.1) and (1.2) is represented as a pair \((\Omega,\xi)\), emphasizing that not only the function \(\xi\) but also the domain \(\Omega\) are essential for solving overdetermined problems. For instance, if we consider \(\Omega\) as a geodesic ball in a space form and \(\xi\) as a function dependent solely on the distance from the center of the ball that satisfies (1.1), it becomes evident that \(\xi\) satisfies condition (1.2). This naturally raises the question of whether these are the only types of solutions to the OEP on bounded domains. The interest in such problems traces back to Serrin's seminal paper [52], where he established the rigidity of problem (1.1) and (1.2) when \(\Omega\) is a bounded domain in Euclidean space and \(f\equiv 1\). Serrin's work showed that if \((\Omega,\xi)\) is a \(\mathcal{C}^{2}\) solution (i.e., both the solution \(\xi\) and the domain \(\Omega\) exhibit \(\mathcal{C}^{2}\) regularity) to the OEP (1.1) and (1.2), then \(\Omega\) must be a round ball and \(\xi\) must be rotationally symmetric. Serrin employed the Alexandrov reflection method and a modified maximum principle for domains with corners to establish this result. Subsequently, Pucci and Serrin in [42] extended this result to include a general \(f\) with Lipschitz regularity. 
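For concreteness, the rotationally symmetric solutions alluded to above can be written down explicitly in the simplest Euclidean case \(f\equiv 1\): on the round ball \(B_{\rho}=\{x\in\mathbb{R}^{n}\,:\,|x|<\rho\}\) the function \[\xi(x)=\frac{\rho^{2}-|x|^{2}}{2n}\] satisfies \(\Delta\xi+1=0\) in \(B_{\rho}\) and \(\xi=0\) along \(\partial B_{\rho}\), while \(\nabla\xi=-x/n\) gives \(\frac{\partial\xi}{\partial\nu}=\rho/n\) along \(\partial B_{\rho}\) (with \(\nu\) the inner normal), so the overdetermined condition (1.2) holds with \(c=\rho/n\); by Serrin's theorem this is, up to translations, the only \(\mathcal{C}^{2}\) solution for \(f\equiv 1\).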
Serrin's paper marked the inception of OEPs in which the overdetermined condition (1.2) implies the symmetry of the domain \(\Omega\). However, beyond these well-known examples, there exist more solutions to OEPs, apart from the previously mentioned trivial ones. The first non-symmetric solutions to an OEP in the Euclidean space were constructed by P. Sicbaldi in [55]. He showed the existence of a family of solutions to (1.1) and (1.2) in \(\mathbb{R}^{n}\), \(n\geq 3\), with \(f(t)=\lambda t\) (a linear function). These solutions were perturbations of a straight cylinder \(\mathbb{R}^{n-1}\times\mathbb{R}\) obtained by reformulating the overdetermined problem in terms of a differential operator and applying the Crandall-Rabinowitz bifurcation theorem. Later, many non-trivial solutions have been discovered for various OEPs using bifurcation of symmetric domains in more general Riemannian manifolds (cf. [13, 17, 39, 48, 50] and references therein). Serrin's seminal work [52] holds significance, not only for the classification result itself, but for introducing the "moving plane method" in the study of partial differential equations (PDEs) and, since then, this method has been applied to prove similar results in other space forms, the closed hemisphere \(\mathrm{cl}(\mathbb{S}_{+}^{n})\) and hyperbolic space \(\mathbb{H}^{n}\), or extended to unbounded domains (cf. [3, 4, 14, 15, 16, 32, 36, 45] and references therein). Another widely-used method in the study of OEPs is the one introduced by Weinberger [58]. Weinberger employed the maximum principle with a specific function (a "P-function") and a Pohozaev identity to establish Serrin's classification in an elementary manner. This method, now known as the "P-function method", has been successfully adapted to different equations. For example, it has been employed to prove rigidity statements in space forms (cf. [8, 43]), and product manifolds (cf. [18]). While various analytical methods have been applied to study overdetermined problems (e.g., Morse theory [40, 41] and complex analysis [28, 16]), in recent years methods derived from differential geometry, particularly those related to the theory of minimal and constant mean curvature (CMC) hypersurfaces, have been successfully applied to OEPs in space forms (cf. [11, 16, 47]). Indeed, Serrin's theorem in [52] can be seen as the OEP version of Alexandrov's well-known theorem regarding the uniqueness of compact embedded CMC hypersurfaces in Euclidean space (cf. [2]). An interesting connection between OEPs and the theory of minimal surfaces can be found in the paper by Hélein et al. [23], where they established a link between a certain type of minimal surfaces and solutions to the problem (1.1) and (1.2) in \(\Omega\subset\mathbb{R}^{n}\) with \(f\equiv 0\), the "one phase problem". In particular, they constructed a Weierstrass-like representation of solutions to the previous problem, which Traizet subsequently used in [57] to classify solutions to the problem. Another connection, with greater relevance to the objective of this paper in which the theory of minimal surfaces is employed to classify solutions to an OEP, is presented in R.
Souam's paper [53] where he investigates the overdetermined eigenvalue problem: \[\begin{cases}\Delta^{\mathbb{S}^{2}}\xi+2\xi=0&\text{ in }\qquad\Omega,\\ \quad\xi=const.&\text{ along }\quad\partial\Omega,\\ \quad\frac{\partial\xi}{\partial\nu}=const.&\text{ along }\quad\partial\Omega, \end{cases} \tag{1.3}\] being \(\Omega\) a \(\mathcal{C}^{2}\) domain in \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\), the unit sphere with the round metric. It is a known fact (cf. [22, 46]) that if \(\xi\) solves the first equation of the previous system, then the map \[X(z):=\nabla^{\mathbb{S}^{2}}\xi(z)+\xi(z)\cdot z,\quad z\in\Omega \tag{1.4}\] produces a branched minimal surface. Conversely, the support function of any branched minimal surface parameterized by its Gauss map provides a solution to the equation \(\Delta^{\mathbb{S}^{2}}\xi+2\xi=0\). Using this correspondence and Nitsche Theorem [38], Souam established that if \((\Omega,\xi)\) is a solution to (1.3) and \(\Omega\) is simply connected, then \(\Omega\) must be a geodesic disk and \(\xi\) must exhibit rotational symmetry. Recall that, as said above, the moving plane method in the sphere can only be applied for domains contained in a closed hemisphere \(\text{cl}(\mathbb{S}^{n}_{+})\). Recently, Espinar-Mazet [16] extended the classification of simply connected domains (not necessarily contained in any closed hemisphere) in \(\mathbb{S}^{2}\) supporting an overdetermined solution to \(\Delta^{\mathbb{S}^{2}}\xi+f(\xi)=0\) as the rotationally symmetric ones for a wider class of functions \(f\), in particular, this method classify positive solutions to (1.3) in simply connected domains. In fact, Serrin's result is not fully generalizable in the sphere \(\mathbb{S}^{2}\), for there exist non-symmetric solutions to Serrin's overdetermined problem with equation \(-\epsilon\Delta^{\mathbb{S}^{2}}\xi+\xi-\xi^{p}\) (\(\epsilon>0,p>1\)) defined in simply connected domains (c.f. [49]). This paper focuses on positive solutions to the eigenvalue problem: \[\begin{cases}\Delta^{\mathbb{S}^{2}}\xi+2\xi=0&\text{ in }\qquad\Omega,\\ \quad\xi=0&\text{ along }\quad\partial\Omega.\end{cases} \tag{1.5}\] Here, we assume that \(\Omega\subset\mathbb{S}^{2}\) is a \(\mathcal{C}^{2}\)-domain and that \(\xi\) is of class \(\mathcal{C}^{2}\) up to the boundary, so it follows that \(\partial\Omega\) is analytic (and so is the function up to the boundary) because of the regularity results of [29]. Since overdetermined positive solutions to (1.5) on simply connected domains are rotationally symmetric (cf. [16, 53]), we consider "finite type domains", which are the complements of a finite number of disjoint simply connected domains within \(\mathbb{S}^{2}\). The concept at hand is to employ the correlation established in [53], albeit in a distinct fashion, and the methodologies recently advanced in [1] for the classification of solutions to the Serrin equation within annular domains in \(\mathbb{R}^{2}\). Such techniques (cf. also [1, 5, 6, 7]) are particularly interesting for solutions to OEPs in domains in the sphere where the moving plane method is not fully available. We aim to classify rotationally symmetric solutions to (1.5) under certain conditions, which, in turn, leads to the classification of minimal surfaces with specific properties (see Section 6). In particular, we focus on solutions to (1.5) that exhibit infinitely many maximum points. 
Our primary classification result is as follows: **Theorem A**: Let \((\Omega,\xi)\) be a positive solution to (1.5), with \(\Omega\) being a topological annulus with \(\mathcal{C}^{2}\) boundary. Suppose that \(\xi\) has infinitely many maximum points and that the norm of its gradient is locally constant along the boundary, i.e., \[|\nabla^{\mathbb{S}^{2}}\xi|=b_{\Gamma}>0,\quad\Gamma\in\pi_{0}(\partial \Omega), \tag{1.6}\] where \(b_{\Gamma}\) is a constant for each connected component \(\Gamma\) of \(\partial\Omega\). Then \(\Omega\) is a rotationally symmetric neighborhood of an equator, and \(\xi\) exhibits rotational symmetry with respect to the axis perpendicular to the plane defining this equator. **Remark 1.1**.: _It is worth noting that the constants considered in (1.6) could be distinct for different connected components._ Using the correspondence established in Souam's paper, we can derive an intriguing consequence of Theorem A. Let \(\Sigma\subset\mathbb{R}^{3}\) be an open embedded minimal annulus with boundary, each boundary component intersecting orthogonally a sphere centered at the origin (possibly of different radius). Such surfaces are referred to as "minimal surfaces with free boundaries" (see Definition 6.1). It is evident that if \(\xi\) is a solution to (1.3), and we consider the parametrization (1.4), we obtain a surface of this type (a priori, not necessarily embedded). We arrive at the following result: **Theorem B**: Let \(\Sigma\subset\mathbb{B}^{3}\) be an embedded minimal annulus with free boundaries, and suppose its support function (or distance function) has infinitely many critical points. Then \(\Sigma\) is a piece of a rotationally symmetric catenoid. **Remark 1.2**.: _It is not hard to see that, in the above situation, the set of critical points of the support function coincides with the set of critical points of the distance (to the origin) function._ Notably, if \(\Sigma\subset\mathbb{B}^{3}\) (where \(\mathbb{B}^{3}\) is the Euclidean unit ball centered at the origin) and \(\partial\Sigma\subset\mathbb{S}^{2}\), then \(\Sigma\) is a "free boundary surface" within the unit ball. Such surfaces have been extensively studied in recent years (see [9] or [33] for a survey of recent results). It is important to note that there exists a unique embedded catenoid that intersects the unit sphere \(\mathbb{S}^{2}\) orthogonally, known as the "critical catenoid" (see [12]). Consequently, an immediate consequence of the aforementioned result is: **Corollary C**: Let \(\Sigma\subset\mathbb{B}^{3}\) be an embedded free boundary minimal annulus, and suppose its support function has infinitely many critical points. Then \(\Sigma\) is the critical catenoid. **Remark 1.3**.: _To our knowledge, this is the first classification result of the critical catenoid that does not use Schoen-Fraser classification result by first Steklov eigenfunctions (c.f. [20, Theorem 1.2])._ The question of whether the critical catenoid is the only embedded free boundary minimal annulus within the unit ball remains open. In recent years, this question has garnered significant attention within the mathematical community. The prevailing belief is that the answer is affirmative, and numerous partial results have been obtained to substantiate this belief (see, e.g., [12, 19, 31, 35]). This corollary can be regarded as a small contribution to the proof of the so-called "critical catenoid conjecture," an outstanding open problem in the field. 
**Remark 1.4**.: _The tools employed in this paper have the potential for extension to positive solutions of the general eigenvalue problem, specifically, positive solutions to \(\Delta\xi+\lambda\xi=0\) within the domain \(\Omega\) and incorporating overdetermined boundary conditions. However, our primary focus lies in exploring the geometric applications associated with the particular case of \(\lambda=2\)._ ### Organization of the paper Our primary focus in Section 2 is to compute the rotationally symmetric solutions to (1.5). We present these solutions as a one-parameter family of normalized functions \((\Omega_{R},\xi_{R})\) that depends on the height relative to the horizontal equator of \(\mathbb{S}^{2}\), \(R\in[0,1]\). We also interpret this family of solutions as the support functions of a family of vertical catenoids. In Section 3, we narrow our focus to a region \(\mathcal{U}\) within \(\Omega\) without maximum points for a particular positive solution \((\Omega,\xi)\) to (1.5). We introduce a parameter \(R(\mathcal{U})\) in a similar manner to [1]. We then establish that if \(\Omega\) is not a topological disk, then \(R(\mathcal{U})=\bar{R}\) is well-defined and takes values in \([0,1)\) (Theorem 3.3). This allows us to associate a model solution \((\Omega_{\bar{R}},\xi_{\bar{R}})\) with a general solution within the region \(\mathcal{U}\). Section 4 introduces what is referred to as a "pseudo-radial function" (see Definition 4.3), which we use to derive estimates for the norm of the gradient of \(\xi\) along its level sets (see Theorem 4.1) and for the geodesic curvature of the level sets (see Theorem 4.2) within \(\mathcal{U}\). We also provide a relation between the length of the zero level set and the top level set (Proposition 4.21). In Section 5, we examine positive solutions to the OEP (1.5) and (1.6). We establish an estimate for the length of the zero level sets of the region \(\partial\mathcal{U}\) using the overdetermined condition (Proposition 5.1). This estimate, combined with those obtained in Section 4, enables us to conclude that either \(\mathcal{U}\subset\Omega\) or \(\Omega\setminus\mathcal{U}\) is contained within an open hemisphere. The statement of Theorem A follows from the moving plane method. In fact, we are able to establish a more general result, as indicated in Theorem 5.1. In Section 6, we delve into the geometric consequences of the classification results obtained in Section 5. Through the study of minimal surfaces with free boundaries, we ascertain that an embedded minimal annulus with each boundary component intersecting a sphere (centered at the origin) orthogonally, must possess an injective Gauss map and its support function must have a definite sign. This immediately leads to the statement of Theorem B and Corollary C using the correspondence from Souam's paper. ## 2 The model solutions In this section, we will describe the family of positive and bounded rotationally symmetric solutions \((\Omega_{R},\xi_{R})\) to the Dirichlet problem (1.5), depending on a parameter \(R\in[0,1]\). In this family, \(\Omega_{0}\) is a symmetric tubular neighborhood of an equator and \(\Omega_{1}\) is an open hemisphere. Let \(\mathbb{R}^{3}\) denote the usual Euclidean space, where \((x,y,z)\) represent cartesian coordinates and \(\langle\cdot,\cdot\rangle\) is the Euclidean scalar product. 
We parametrize the unit sphere in cylindrical coordinates: \[\mathbb{S}^{2}=\left\{(\sqrt{1-r^{2}}\cos\theta,\sqrt{1-r^{2}}\sin\theta,r) \,:\,r\in[-1,1],\,\theta\in[0,2\pi)\right\},\] hence the induced metric in \(\mathbb{S}^{2}\) in these coordinates is given by \[g_{\mathbb{S}^{2}}=\frac{1}{1-r^{2}}dr^{2}+(1-r^{2})d\theta^{2}.\] Define the orthonormal frame in \(\mathbb{S}^{2}\setminus\{\mathbf{s},\mathbf{n}\}\), where \(\mathbf{s}:=(0,0,-1)\) and \(\mathbf{n}:=(0,0,1)\) denote the south and north pole respectively, given by \[n=\sqrt{1-r^{2}}\partial_{r}\quad\text{and}\quad t=\frac{1}{\sqrt{1-r^{2}}} \partial_{\theta}.\] Then, the Christoffel symbols associated to this frame can be computed from \[\nabla_{n}n=0,\qquad\nabla_{n}t=0,\qquad\nabla_{t}n=-\frac{r}{\sqrt{1-r^{2}}} t,\qquad\nabla_{t}t=\frac{r}{\sqrt{1-r^{2}}}n.\] From now on, as we will be always considering differential equations on the sphere, we denote by \(\nabla\), \(\Delta\) and \(\nabla^{2}\) the gradient, laplacian and hessian operators in \(\mathbb{S}^{2}\). ### Rotationally symmetric solutions From the first equation of (1.5) using the cylindrical coordinates, assuming that \(\xi\) doesn't depend on \(\theta\), we obtain \[(1-r^{2})\frac{\partial^{2}\xi}{\partial r^{2}}-2r\frac{\partial\xi}{\partial r}+ 2\xi=0.\] This is an ordinary differential equation of second order, and it has a two-parameter family of solutions: \[\xi_{\alpha,\beta}(r)=\alpha(1-r\operatorname{arctanh}(r))+\beta r,\quad r\in( -1,1),\quad\alpha,\beta\in\mathbb{R}. \tag{2.1}\] Hence, a function \(\xi_{\alpha,\beta}\) will be a solution to (1.5) if there exist \(a,b\in[-1,1]\), \(a<b\), depending on \(\alpha\) and \(\beta\) such that \[\xi_{\alpha,\beta}(a)=\xi(b)=0\quad\text{and}\quad 0<(\xi_{\alpha,\beta})_{|(a,b )}<+\infty.\] * **Case 1: \(\alpha=0\) and \(\beta\neq 0\):** We obtain the _reescale height function_\(\xi_{0,\beta}(r)=\beta r\). This is a monotone function that takes the zero value only at the origin, and we obtain the solution \((\Omega_{1},\xi_{0,\beta})\), where \(\Omega_{1}\) is the open hemisphere centered either at the north pole \(\mathbf{n}\) if \(\beta>0\) or at the south pole \(\mathbf{s}\) if \(\beta<0\). * **Case 2: \(\alpha\neq 0\) and \(\beta=0\):** We obtain the function \[\xi_{\alpha,0}(r)=\alpha(1-r\operatorname{arctanh}(r)),\quad r\in(-1,1).\] If we assume that \(\alpha>0\), then \(\xi_{\alpha,0}(r)\) has a positive maximum at the origin since \(\xi_{\alpha,0}(-r)=\xi_{\alpha,0}(r)\), it is monotonic in both intervals \((-1,0)\) and \((0,1)\) and \[\lim_{r\to 1^{-}}\xi_{\alpha,0}(r)=\lim_{r\to-1^{+}}\xi_{\alpha,0}=-\infty.\] Hence, it is clear that there exists \(\bar{r}\in(0,1)\) such that \(\xi_{\alpha,0}(\bar{r})=\xi_{\alpha,0}(-\bar{r})=0\) and \((\xi_{\alpha,0})_{|(-\bar{r},\bar{r})}>0\). In this case, we can easily compute that \[\frac{\partial\xi_{\alpha,0}}{\partial r}(0)=0\text{ and }(\xi_{\alpha,0})_{\max}=\xi_{\alpha,0}(0)=\alpha.\] Therefore, \((\Omega_{0},\xi_{\alpha,0})\) is solution to (1.5) such that \(\Omega_{0}\) is a symmetric tubular neighborhood of the equator \(\mathbb{S}^{2}\cap\{z=0\}\). Finally, observe that \(\xi_{-\alpha,0}(r)=-\xi_{\alpha,0}(r)\) which means that it is negative in \((-\bar{r},\bar{r})\) and positive in \((-1,1)\setminus[-\bar{r},\bar{r}]\). However, \[\lim_{r\to 1^{-}}\xi_{-\alpha,0}(r)=\lim_{r\to-1^{+}}\xi_{-\alpha,0}=+\infty,\] that is, \(\xi_{-\alpha,0}\) is not bounded. * **Case 3: \(\alpha\neq 0\) and \(\beta\neq 0\):** Consider \(\alpha>0\). 
Since \[\xi_{\alpha,\beta}(r)=\alpha\,\xi_{1,\omega}(r)\text{ and }\xi_{1,\omega}(-r)=\xi_{1,-\omega}(r)\] where \(\omega=\beta/\alpha\neq 0\), we can write solutions to (2.1) in a more interesting way \[\xi_{\alpha,\beta}(r)=\alpha\,\xi_{1,\omega}(r),\text{ where }\xi_{1,\omega}(r)=1-r\,\text{arctanh}(r)+\omega r.\] We first compute the zeros of \(\xi_{\alpha,\beta}\). Since \(\xi_{1,\omega}(0)=1\) and \(\lim_{r\to 1^{-}}\xi_{1,\omega}(r)=\lim_{r\to-1^{+}}\xi_{1,\omega}=-\infty\), then \(\xi_{1,\omega}\) there exist uniques \(r_{-}\in(-1,0)\) and \(r_{+}\in(0,1)\) such that \(\xi_{1,\omega}(r_{-})=\xi_{1,\omega}(r_{+})=0\). Moreover, \[-\frac{\partial\xi_{1,\omega}}{\partial r}(r)=\text{arctanh}(r)+\frac{r}{1-r^ {2}}-\omega,\quad r\in(r_{-},r_{+})\] being \[\frac{\partial\xi_{1,\omega}}{\partial r}(r_{-})=-\frac{1}{r_{-}(1-r_{-}^{2}) }>0\text{ and }\frac{\partial\xi_{1,\omega}}{\partial r}(r_{+})=-\frac{1}{r_{+}(1-r_{+}^{2 })}<0,\] so we conclude that there exists a unique \(R\in(r_{-},r_{+})\) such that \(\frac{\partial\xi_{1,\omega}}{\partial r}(R)=0\), which is a maximum. We have \[\omega=\text{arctanh}(R)+\frac{R}{1-R^{2}},\] (2.2) and the value of the maximum of \(\xi_{\alpha,\beta}\) is given by \[(\xi_{\alpha,\beta})_{\text{max}}=\alpha\,\xi_{1,\omega}(R)=\frac{\alpha}{1-R ^{2}}.\] Finally, we note that if \(\alpha<0\) then \(\xi_{\alpha,\beta}\) has a negative sign in \((r_{-},r_{+})\) and a positive sign in \((-1,1)\setminus[r_{-},r_{+}]\); but \(\xi_{\alpha,\beta}\) is not bounded in \((-1,1)\setminus[r_{-},r_{+}]\). Summarizing, we have obtained a two-parameter family of solutions to the Dirichlet problem (1.5), where the parameters are a scale parameter \(\alpha\) and a height parameter \(R\), where \(R\) indicates the parallel in \(\mathbb{S}^{2}\) where the curve of maximum points is located. In fact, we can describe the solutions previously obtained as \[\xi_{\alpha,R}(r)=\alpha\left(1-r\text{arctanh}(r)+\left(\text{arctanh}(R)+ \frac{R}{1-R^{2}}\right)r\right),\quad\forall r\in[r_{-}(R),r_{+}(R)].\] However, note that the eigenvalue equation of the Laplace-Beltrami operator on a general Riemannian manifold is invariant under a change of scale. Thus, the parameter \(\alpha\) is not important and for each \(R\) we can fix a value of the scale to obtain a 1-parameter family of rotationally symmetric solutions to (1.5). Although the most natural way of obtaining the 1-parameter family is to fix \(\alpha=1\), we will use a different normalization for geometrical reasons. **Proposition 2.1**.: _Let \((\Omega,\xi)\) be a positive bounded rotationally symmetric solution to (1.5). 
Then, up to a rotation, a reflection with respect to the plane \(\{z=0\}\) and a dilation, \((\Omega,\xi)=(\Omega_{R},\xi_{R})\) for \(R\in[0,1]\) as described below:_ * _If_ \(R=1\)_,_ \((\Omega_{1},\xi_{1})\) _is a solution to (_1.5_) in a disk-type domain, where_ \(\Omega_{1}\) _is the open hemisphere centered at the north pole_ \(\mathbf{n}\) _and_ \(\xi_{1}\) _is the height function_ \(\xi_{1}(r)=r\)_._ * _If_ \(R\in[0,1)\)_, then_ \((\Omega_{R},\xi_{R})\) _is a solution to (_1.5_) in a annular-type domain given by_ \[\xi_{R}(r)=\alpha(R)\left(1-r\,\text{\rm arctanh}(r)+\left(\text{\rm arctanh} (R)+\frac{R}{1-R^{2}}\right)r\right),\quad\forall r\in[r_{-}(R),r_{+}(R)],\] _where_ \(-1<r_{-}(R)<0<r_{+}(R)<1\) _with_ \(-r_{-}(0)=r_{+}(0)=\bar{r}\)_, being_ \(\alpha(R)=r_{+}(R)\sqrt{1-r_{+}(R)^{2}}\) _and_ \[\Omega_{R}=\left\{\left(\sqrt{1-r^{2}}\cos(\theta),\sqrt{1-r^{2}}\sin(\theta),r\right)\in\mathbb{S}^{2}\,:\,\,r\in[r_{-}(R),r_{+}(R)],\,\theta\in[0,2\pi) \right\}.\] _Moreover,_ \[\max_{\text{\rm cl}(\Omega)}\xi_{R}:=(\xi_{R})_{\text{\rm max}}=\xi_{R}(R)= \frac{\alpha(R)}{1-R^{2}}=\frac{r_{+}(R)\sqrt{1-r_{+}(R)^{2}}}{1-R^{2}}\] Figure 1: The diagram shows the graphs of the functions that define the model solutions \(\xi_{R}\) with \(R\in\{0,0.25,0.5,0.75\}\). In the blue part, the function is increasing, while in the red part, the function is decreasing, and the function takes its maximum values at the point \(x=R\). With the normalization at hand, the value of \((\xi_{R})_{\text{\rm max}}\) goes to zero as \(R\) approaches \(1\), so \(\xi_{R}\to 0\) in compact sets away from the north pole. In order to embrace the positive solution in the disk-type domain \(\Omega_{1}\) we have to lose continuity at \(R=1\) in the family \(\xi_{R}\) (cf. Proposition 2.1). _and we denote the boundary components as_ \[\Gamma_{\pm}(R)=\left\{\left(\sqrt{1-r_{\pm}(R)^{2}}\cos(\theta),\sqrt{1-r_{\pm}(R )^{2}}\sin(\theta),r_{\pm}(R)\right)\in\mathbb{S}^{2}\,:\,\,\theta\in[0,2\pi) \right\}, \tag{2.3}\] _where \(\Gamma_{+}(R)\) and \(\Gamma_{-}(R)\) are called the upper and lower component of \(\partial\Omega_{R}\) respectively (with respect to the equator \(\mathbb{S}^{2}\cap\{z=0\}\))._ Observe that the constants \(r_{-}(R)\) and \(r_{+}(R)\) are smooth functions of \(R\) since they are implicit solutions to the equation \[1-r\operatorname{arctanh}(r)+\left(\operatorname{arctanh}(R)+\frac{R}{1-R^{2 }}\right)r=0. \tag{2.4}\] Taking into account that \(r_{\pm}(R)\neq 0\) for some \(R\in[0,1]\) by (2.4), we have that \[\frac{\partial r_{\pm}}{\partial R}(R)=\frac{2r_{\pm}^{2}(1-r_{\pm}^{2})}{(1-R ^{2})^{2}},\quad\forall R\in[0,1), \tag{2.5}\] so both \(r_{-}\) and \(r_{+}\) are increasing functions of \(R\). The following properties of our family of solutions are straightforward. **Proposition 2.2**.: _Let \((\Omega_{R},\xi_{R})\), \(R\in[0,1)\), be one of the model solutions described in Proposition 2.1. Set \(\alpha(R)=\alpha\). Then we have the following identities:_ Figure 2: The graphs of \(r_{+}\) (above) and \(r_{-}\) (below) as functions of the parameter \(R\). One can see that \(-r_{-}(0)=r_{+}(0)\) and \(r_{+}\to 1\) and \(r_{-}\to 0\) as \(R\) goes to \(1\). 
* _The derivatives of the function up to second order:_ \[\frac{\partial\xi_{R}}{\partial r}(r)=\frac{1}{r}\left(\xi_{R}-\frac{\alpha}{1-r ^{2}}\right)\quad\text{and}\quad\frac{\partial^{2}\xi_{R}}{\partial r^{2}}(r)=- \frac{2\alpha}{(1-r^{2})^{2}}.\] * _The gradient and Hessian :_ \[\nabla\xi_{R}(r)=\frac{\sqrt{1-r^{2}}}{r}\left(\xi_{R}-\frac{\alpha }{1-r^{2}}\right)n\quad\text{and}\] \[\nabla^{2}\xi_{R}(r)=\begin{pmatrix}-\xi_{R}-\frac{\alpha}{1-r^{2 }}&0\\ 0&-\xi_{R}+\frac{\alpha}{1-r^{2}}\end{pmatrix}.\] * _The norm of the gradient along the zero level sets:_ \[|\nabla\xi_{R}|=1\,\text{along}\,\,\Gamma_{+}(R)\quad\text{and}\quad\,| \nabla\xi_{R}|=\frac{r_{+}\sqrt{1-r_{+}^{2}}}{r_{-}\sqrt{1-r_{-}^{2}}}\,\,\text {along}\,\,\Gamma_{-}(R).\] _Moreover, the following identity follows away from the critical points of \(\xi_{R}\):_ \[\nabla^{2}\xi_{R}(r)=\left(\frac{r^{2}\alpha}{((1-r^{2})\xi_{R}-\alpha)^{2}} \,|\nabla\xi_{R}|-\xi\right)g_{\mathbb{S}^{2}}-\frac{2\alpha r^{2}}{((1-r^{2}) \xi_{R}-\alpha)^{2}}d\xi_{R}\otimes d\xi_{R}. \tag{2.6}\] ### Geometric interpretation Let us consider the minimal catenoid \(C_{\alpha,\omega}\), defined as the unique rotationally symmetric minimal surface around the \(z-\)axis, symmetric with respect to the plane \(\{z=-\omega\}\) and necksize \(\alpha\). This catenoid can be parameterized by \[\psi_{\alpha,\omega}(r,\theta)=\alpha\left(\frac{\cos\theta}{\sqrt{1-r^{2}}}, \frac{\sin\theta}{\sqrt{1-r^{2}}},\operatorname{arctanh}(r)-\omega\right),\, \,\,r\in(-1,1)\,\,\text{and}\,\,\theta\in[0,2\pi), \tag{2.7}\] with outward unit normal \[N(r,\theta)=\left(\sqrt{1-r^{2}}\,\cos\theta,\sqrt{1-r^{2}}\,\sin\theta,-r \right)\in\mathbb{S}^{2}\setminus\{\mathbf{s},\mathbf{n}\}\,.\] The support function \(\xi(r,\theta)=\langle\psi(r,\theta),N((r,\theta)\rangle\) is then given by \[\xi(r,\theta)=\alpha(1-r\operatorname{arctanh}(r)+\omega r).\] The coordinates \((r,\theta)\in(-1,1)\times[0,2\pi)\) can be seen as a parametrization of \(\mathbb{S}^{2}\setminus\{\mathbf{s},\mathbf{n}\}\) via the unit normal \(N\). As we noted in the introduction, the support function \(\xi\) satisfies \[\Delta\xi+2\xi=0\,\,\text{in}\,\,\mathbb{S}^{2}\setminus\{\mathbf{s},\mathbf{ n}\}\,.\] Thus, the rotationally symmetric solutions in annular domains described above are nothing but the support function of a given catenoid in \(\mathbb{R}^{3}\) (up to scaling and vertical translation) defined in the appropriate spherical domain by the Gauss map. Now we can use this correspondence to derive some geometric properties of the functions defined in the previous section. On the one hand, note that the zero level set of \(\xi\) gives us the points of \(C_{\alpha,\omega}\) where the position vector \(\psi(r,\theta)=|\psi(r,\theta)|\) is orthogonal to the normal vector \(N(r,\theta)\). Note also that \(\xi\) depends only on \(r\), so the image via \(\psi\) of the zero level set of \(\xi\) on \(C_{\alpha,\omega}\) are horizontal circles contained in spheres (of different radii except for \(\omega=0\)) centered at the origin, and \(C_{\alpha,\omega}\) intersects these spheres orthogonally. On the other hand, note that \[\lim_{r\to\pm 1}N(r,\theta)=\mp(0,0,1)\quad\text{and}\quad\lim_{r\to\pm 1} \sqrt{1-r^{2}}\psi(r,\theta)=\alpha(\cos\theta,\sin\theta,0),\] and it is clear that the function \(f(r):=\sqrt{1-r^{2}}\left(\operatorname{arctanh}(r)-\omega\right)\) satisfies \(f((-1,1))=(-1,1)\). 
Thus, we can deduce geometrically that there exist uniques \(-1<r_{-}<0<r_{+}<1\) such that \(\xi(r_{+})=\xi(r_{-})=0\). Let us denote the image in \(C_{\alpha,\omega}\) of the two components of the zero level set of \(\xi\) as \(\zeta_{\pm}=\{\psi(r_{\pm},\theta):\ \theta\in[0,2\pi)\}\). Now, observe that \[|\psi(r_{\pm},\theta)|^{2}=\alpha^{2}\left(\frac{1}{1-r_{\pm}^{2}}+( \text{artanh}(r_{\pm})-\omega)^{2}\right)=\frac{\alpha^{2}}{r_{\pm}^{2}\left(1 -r_{\pm}^{2}\right)}, \tag{2.8}\] so taking \(\alpha=r_{+}\sqrt{1-r_{+}^{2}}\) we obtain that \(\zeta_{+}\) is contained in the unit sphere \(\mathbb{S}^{2}=\mathbb{S}^{2}(0,1)\). On the other hand, it easy to check that the function \(f(R)=r_{+}^{2}(R)(1-r_{+}^{2}(R))/r_{-}^{2}(R)(1-r_{-}^{2}(R))\) with \(R\in[0,1)\) has derivative \[f^{\prime}(R)=\frac{4r_{+}^{2}(R)(1-r_{+}^{2}(R))r_{-}^{2}(R)(1-r_{-}^{2}(R))} {(1-R^{2})^{2}r_{-}^{4}(R)(1-r_{-}^{2}(R))^{2}}\left((r_{+}-2r_{+}^{3})-(r_{-} -2r_{-}^{3})\right),\] so taking into account that \(\bar{r}>0.8\) it can be easily checked that \(f^{\prime}(R)<0\) for all \(R\in[0,1)\). We deduce then that there exist \(\tilde{r}\leq 1\) such that \(\zeta_{-}\in\mathbb{S}^{2}(\tilde{r})\), where \(\mathbb{S}^{2}(\tilde{r})\) denotes the euclidean sphere centered at the origin and radius \(\tilde{r}\). Thus, we conclude that solutions \(\xi_{R}\) described in Proposition 2.1 correspond to support functions of pieces of catenoids \[C_{R}:=\left\{\psi_{\alpha(R),\omega(\alpha(R))}(r,\theta):\ r\in(r_{-}(R),r_{+ }(R)),\,\theta\in[0,2\pi)\right\}, \tag{2.9}\] where \[\alpha(R)=r_{+}(R)\sqrt{1-r_{+}^{2}(R)}\quad\text{and}\quad\omega(R)=\text{ arctanh}(R)+\frac{R}{1-R^{2}},\] and \(\psi_{\alpha(R),\omega(R)}\) is given by (2.7). It holds that \(C_{R}\subset\mathbb{B}^{3}\) for each \(R\in[0,1)\). Also, since \(-r_{-}(0)=r_{+}(0)=\bar{r}\), it is clear that \(C_{0}\) corresponds to the critical catenoid, i.e. the unique free boundary minimal catenoid in the unit sphere. Note that \(N(C_{R})=\Omega_{-R}\) is the reflection with respect to the plane \(\{z=0\}\) of \(\Omega_{R}\) for each \(R\in[0,1)\). ## 3 Normalized Wall Shear Stress From now on, \(\Omega\) will always denote a finite type region in the two-sphere. **Definition 3.1**.: _Let \(\Omega\subset\mathbb{S}^{2}\) be a domain with \(\mathcal{C}^{2}\) boundary. Then we say that \(\Omega\) is of finite type if there exist a finite number of disjoint simply connected domains \(D_{1},\ldots,D_{k}\subset\mathbb{S}^{2}\) such that_ \[\Omega=\mathbb{S}^{2}\setminus\left(\cup_{i=1}^{k}\mathsf{cl}(D_{i})\right). \tag{3.1}\] If \(\Omega\) is a finite type domain with \(k\geq 1\) boundary components, we will always write \(\partial\Omega=\Gamma_{1}\cup\cdots\cup\Gamma_{k}\), being \(\Gamma_{i}=\partial D_{i}\) for \(i\in\{1,\ldots,k\}\). Following the ideas in [1], in this section we will classify solutions \((\Omega,\xi)\) to the eigenvalue problem \[\left\{\begin{aligned} \Delta\xi+2\xi=0&\quad \text{in}&\quad\Omega,\\ \xi>0&\quad\text{in}&\quad\Omega,\\ \xi=0&\quad\text{along}&\quad\partial\Omega, \end{aligned}\right. \tag{3.2}\] in terms of its _normalized wall shear stress_, a scale-invariant quantity. By the maximum principle, a solution \((\Omega,\xi)\) to (3.2) does not have interior minimums. Hence, the set of interior critical points \[\operatorname{Crit}(\xi):=\{p\in\Omega\,:\,\,\nabla\xi(p)=0\}\] Figure 3: Here, two model catenoids defined in (2.9). Left: model catenoid with parameter \(R=1/2\). 
It can be seen that \(\zeta_{+}\) is contained in the unit sphere and \(\zeta_{-}\) is contained in a sphere of smaller radius. Right: critical catenoid, which corresponds to \(C_{0}\). contains only saddle and maximum points. Given a solution \((\Omega,\xi)\) to (3.2) we denote by \[\operatorname{Max}(\xi):=\left\{p\in\Omega\,:\,\,\xi(p)=\xi_{\text{max}}:=\max_{ \text{cl}(\Omega)}\xi\right\},\] and it is clear that this set must be non-empty. We will refer to \(\operatorname{Max}(\xi)\) as the _top level set_ of \(\xi\). **Remark 3.1**.: _If a solution \((\Omega,\xi)\) to (3.2) has a connected component \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) such that \(\operatorname{cl}(\mathcal{U})\cap\partial\Omega=\emptyset\), then either \(\xi=\xi_{\text{max}}\) in \(\mathcal{U}\) or there exists an interior minimum in \(\mathcal{U}\), a contradiction in any case by the maximum principle._ We also observe a structure result for the top level set: **Lemma 3.1**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\Omega\subset\mathbb{S}^{2}\) of finite type. Suppose that \(\xi\) has infinite points of maximum, i.e., \(\#\text{Max}(\xi)=+\infty\). Then_ \[\operatorname{Max}(\xi)=\gamma^{0}\cup\gamma^{1},\] _where \(\gamma^{0}\) is a finite set (could be empty) of points and \(\gamma^{1}\) is a finite set of disjoint analytic closed curves. Moreover, for each \(\gamma\in\gamma^{1}\) it holds that \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\) with \(\partial\Omega_{i}^{\gamma}\cap\partial\Omega\neq\emptyset\) for \(i=1,2\)._ Proof.: We use the same arguments as in the proof of [1, Theorem D]. Solutions to (3.2) are real analytic (cf. [29]), so by the Lojasiewicz structure theorem it follows that \(\text{Max}(\xi)=\gamma^{0}\cup\gamma^{1}\), where \(\gamma^{0}\) is a set of isolated points and \(\gamma^{1}\) is a set of analytic curves. As \(\gamma^{0}\subset\mathbb{S}^{2}\) must be compact, it follows that \(\gamma^{0}\) is a finite set of isolated points, and by [7, Corollary 3.4.] we get that \(\gamma^{1}\) must be a finite set of analytic closed curves, possibly intersecting at a finite set of points. Furthermore, the curves of \(\gamma^{1}\) are disjoint since, at the intersection points of two curves, the hessian of \(\xi\) must vanish, which contradicts (3.2). Finally, Remark 3.1 implies that \(\gamma\in\gamma^{1}\) is not contractible in \(\Omega\), that is, \(\Omega\setminus\gamma\) cannot contain a topological disk component \(D\) such that \(\partial D\cap\partial\Omega=\emptyset\). Hence, \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\) with \(\partial\Omega_{i}^{\gamma}\cap\partial\Omega\neq\emptyset\) for \(i=1,2\), as claimed. The above lemma motivates the following **Definition 3.2**.: _Let \((\Omega,\xi)\) be a solution to (3.2) and assume that \(\gamma\subset\operatorname{Max}(\xi)\) is a closed embedded curve. Then, \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\) and we will say that \(\Omega_{i}^{\gamma}\), \(i=1,2\), is the partition with respect to \(\gamma\)._ Now, we are ready to introduce the definition of normalised wall shear stress (NWSS): **Definition 3.3**.: _Let \((\Omega,\xi)\) be a solution to (3.2) and let \(\Gamma\in\pi_{0}(\partial\Omega)\) be a connected component of the boundary. We define the normalised wall shear stress (NWSS) of \(\Gamma\) as_ \[\overline{\tau}(\Gamma):=\frac{\max_{\Gamma}|\nabla\xi|}{\xi_{\text{max}}}. 
\tag{3.3}\] _If now \(\mathcal{U}\) is a connected component of \(\Omega\setminus\mathrm{Max}(\xi)\), \(\partial\Omega\cap\mathrm{cl}(\mathcal{U})\neq\emptyset\), we define the NWSS of \(\mathcal{U}\) as_ \[\overline{\tau}(\mathcal{U}):=\max\left\{\overline{\tau}(\Gamma)\,:\,\,\Gamma \in\pi_{0}(\partial\Omega\cap\mathrm{cl}(\mathcal{U}))\right\}. \tag{3.4}\] _Otherwise, we set \(\overline{\tau}(\mathcal{U})=0\)._ The rigidity results proved in the rest of the paper are derived by comparing geometric quantities of a fixed solution to the eigenvalue problem \((\Omega,\xi)\) with those of a certain model solution \((\Omega_{R},\xi_{R})\); which motivates the following: **Definition 3.4**.: _We say that a solution \((\Omega,\xi)\) to (3.2) is equivalent to a model solution \((\Omega_{R},\xi_{R})\), for some \(R\in[0,1]\), if they differ up to a rotation and a dilation. In such case, we will denote \((\Omega,\xi)\equiv(\Omega_{R},\xi_{R})\)._ In order to choose the appropriate model solution to compare with \((\Omega,\xi)\), we will use the NWSS defined previously. ### NWSS on the model solutions The NWSS in our model solutions at each of the components of the boundary can be computed in terms of the parameter \(R\). In fact, \[\overline{\tau}_{\pm}(R):=\overline{\tau}\left(\Gamma_{\pm}(R)\right)=\pm \frac{1-R^{2}}{r_{\pm}(R)\sqrt{1-r_{\pm}(R)^{2}}},\quad R\in[0,1) \tag{3.5}\] where \(\Gamma_{+}(R)\) and \(\Gamma_{-}(R)\) are defined in (2.3). Moreover, it can be shown that \(\overline{\tau}_{-}\) and \(\overline{\tau}_{+}\) are invertible functions. Set \[\overline{\tau}_{\pm}(0)=\frac{1}{\bar{r}\sqrt{1-\bar{r}^{2}}}=:\tau_{0}>1\quad \left(\text{where}\quad 1-\bar{r}\,\text{arctanh}(\bar{r})=0\right), \tag{3.6}\] where, recall, \(r_{+}(0)=-r_{-}(0)=\bar{r}>0\). **Lemma 3.2**.: _The functions \(\overline{\tau}_{+}:[0,1)\to[\tau_{0},+\infty)\) and \(\overline{\tau}_{-}:[0,1)\to(1,\tau_{0}]\) defined by (3.5) are increasing and decreasing respectively._ Proof.: First, remember that in the previous section we proved \[\text{arctanh}(r_{\pm})-\frac{1}{r_{\pm}}=\text{arctanh}(R)+\frac{R}{1-R^{2} },\quad\forall R\in[0,1), \tag{3.7}\] so we have that \(r_{+}(R)\to 1\) and \(r_{-}(R)\to 0\) as \(R\to 1\), and then by (2.5) we get \[r_{-}:[0,1)\to[-\bar{r},0)\quad\text{and}\quad r_{+}:[0,1)\to[\bar{r},1)\] are increasing functions of \(R\). **Claim A:**\(\overline{\tau}_{+}\) and \(\overline{\tau}_{-}\) are monotonic. Proof of Claim A.: We will see that the derivative of both functions cannot vanish. By contradiction, suppose there exists \(\bar{R}\in(0,1)\) such that \(\overline{\tau}^{\prime}_{\pm}(\bar{R})=0\). Then, by (3.5), we get \[2\bar{R}\,r_{\pm}(\bar{R})(1-r_{\pm}(\bar{R})^{2})+(1-2r_{\pm}(\bar{R})^{2})(1- \bar{R}^{2})r^{\prime}_{\pm}(\bar{R})=0\] and hence \[\frac{\partial r_{\pm}}{\partial R}(\bar{R})=\frac{2(1-r_{\pm}(\bar{R})^{2})r _{\pm}(\bar{R})\bar{R}}{(1-\bar{R}^{2})(2r_{\pm}(\bar{R})^{2}-1)}, \tag{3.8}\] so, comparing to (2.5), we get \[r_{\pm}(\bar{R})(2r_{\pm}(\bar{R})^{2}-1)+(1-\bar{R}^{2})\bar{R}=0. \tag{3.9}\] * Assume first that \(\bar{\tau}^{\prime}_{+}(\bar{R})=0\). Since \(r_{+}(R)\geq\bar{r}>0.8\) for all \(R\in[0,1]\), then \(2r_{+}^{2}(\bar{R})-1>0\), which contradicts (3.9). * Second, assume that \(\bar{\tau}^{\prime}_{-}(\bar{R})=0\). Then, (3.9) implies that \(2r_{-}(\bar{R})^{2}-1>0\) and, by (3.8), we get \(\frac{\partial r_{-}}{\partial R}(\bar{R})<0\) since \(r_{-}(\bar{R})<0\), which contradicts (2.5). This proves Claim A. 
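Although not needed for the argument, the monotone behaviour just established (and the limiting values computed next) are easy to confirm numerically. A minimal sketch, assuming SciPy's `brentq` root bracketing to solve the implicit equation (2.4):

```python
import numpy as np
from scipy.optimize import brentq

def xi_1_omega(r, omega):
    # xi_{1,omega}(r) = 1 - r*arctanh(r) + omega*r, the normalised profile in (2.1)
    return 1.0 - r * np.arctanh(r) + omega * r

def r_pm(R):
    """Zeros r_-(R) < 0 < r_+(R) of xi_{1,omega(R)}, i.e. solutions of (2.4)."""
    omega = np.arctanh(R) + R / (1.0 - R**2)          # relation (2.2)
    f = lambda r: xi_1_omega(r, omega)
    return brentq(f, -1.0 + 1e-12, -1e-12), brentq(f, 1e-12, 1.0 - 1e-12)

def tau_pm(R):
    """NWSS (3.5) of the lower and upper boundary circles of Omega_R."""
    rm, rp = r_pm(R)
    return (-(1.0 - R**2) / (rm * np.sqrt(1.0 - rm**2)),
            (1.0 - R**2) / (rp * np.sqrt(1.0 - rp**2)))

# r_bar = brentq(lambda r: 1.0 - r * np.arctanh(r), 0.5, 0.99)   # approx. 0.8336
# for R in (0.0, 0.3, 0.6, 0.9):
#     print(R, tau_pm(R))   # tau_- decreases towards 1, tau_+ increases from tau_0
```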
Next, we study the limit of \(\overline{\tau}_{\pm}\) as \(R\) goes to one. From (3.7) we obtain \[\frac{1+r_{\pm}(R)}{1+R}(1-R)=(1-r_{\pm}(R))\text{exp}\left(\frac{2}{r_{\pm}( R)}+\frac{2R}{1-R^{2}}\right),\quad\forall R\in[0,1)\] that yields \[\lim_{R\to 1^{-}}(1-r_{\pm}(R))\text{exp}\left(\frac{2}{r_{\pm}(R)}+\frac{2R}{1 -R^{2}}\right)=0.\] Since \(r_{+}(R)\to 1\) as \(R\to 1\) and \(e^{x}\geq x^{2}\) for all \(x\geq 0\), we get \[0=\lim_{R\to 1^{-}}\left|(1-r_{+}^{2}(R))\text{exp}\left(\frac{1}{1-R^{2}} \right)\right|\geq\lim_{R\to 1^{-}}\left|\frac{1-r_{+}^{2}(R)}{(1-R^{2})^{2}}\right|\] and we conclude that \(\overline{\tau}_{+}\to+\infty\) as \(R\to 1\). On the other hand, using again (3.7) and since \(r_{-}(R)\to 0\) as \(R\to 1\) it is easy to conclude that \[1 =\lim_{R\to 1^{-}}\frac{\text{arctanh}(r_{-}(R))-\frac{1}{r_{-}(R) }}{\text{arctanh}(R)+\frac{R}{1-R^{2}}}=\lim_{R\to 1^{-}}-\frac{\frac{1}{r_{-}(R)}}{ \text{arctanh}(R)+\frac{R}{1-R^{2}}}\] \[=\lim_{R\to 1^{-}}-\frac{1-R^{2}}{r_{-}(R)\left((1-R^{2})\text{ arctanh}(R)+R\right)}=\lim_{R\to 1^{-}}-\frac{1-R^{2}}{r_{-}(R)},\] and by the last equality we conclude that \(\overline{\tau}_{-}(R)\to 1\) as \(R\to 1\). This finishes the proof. Looking at the previous lemma, one can figure out how to choose a model solution associated to an arbitrary solution to the eigenvalue problem \((\Omega,\xi)\): fixed \(\mathcal{U}\subset\Omega\setminus\text{Max}(\xi)\), if we have that \(\overline{\tau}(\mathcal{U})>1\) then there exists a unique model solution \((\Omega_{R},\xi_{R})\) such that \(\overline{\tau}(\mathcal{U})=\overline{\tau}(\mathcal{V})\) for one of the components \(\mathcal{V}\) of \(\Omega_{R}\setminus\text{Max}(\xi_{R})\). This a priori is not a good correspondence, as one could have a solution to the eigenvalue problem with NWSS less or equal to one in a connected component of \(\Omega\setminus\text{Max}(\xi)\). The key fact is that then one can prove that the solution must be \((\Omega_{1},\xi_{1})\), so we can rule out this case. To prove that we will need two technical results, which will be proven using \(P\)-functions. ### Associated \(P\)-function We will find a \(P\)-function associated with (3.2), that is, a subharmonic function associated with a solution to (3.2). The next result does not need to impose boundary conditions. **Proposition 3.1**.: _Let \(\Omega\subset\mathbb{S}^{2}\) be a smooth domain and \(\xi:\Omega\to\mathbb{R}\) be a smooth solution to \(\Delta\xi+2\xi=0\). Then, the \(P\)-function \(\mathcal{P}:=|\nabla\xi|^{2}+\xi^{2}\) satisfies_ \[\Delta\mathcal{P}\geq 0\text{ in }\Omega. \tag{3.10}\] Proof.: By the Bochner-Weitzenbock formula, a solution \(\xi\) to (3.2) satisfies \[\frac{1}{2}\Delta\left|\nabla\xi\right|^{2}=\left|\nabla^{2}\xi\right|^{2}- \left|\nabla\xi\right|^{2},\] and a standard computation shows \[\frac{1}{2}\Delta\xi^{2}=\xi\Delta\xi+\left|\nabla\xi\right|^{2}=-\frac{( \Delta\xi)^{2}}{2}+\left|\nabla\xi\right|^{2}.\] Thus, summing up the above two equations, and the Geometric-Arithmetic Inequality, we obtain \[\frac{1}{2}\Delta\mathcal{P}=\left|\nabla^{2}\xi\right|^{2}-\frac{(\Delta\xi) ^{2}}{2}\geq 0,\] that is, (3.10) holds. As a consequence of the above \(P\)-function, we can obtain a characterization of disk-type solutions to (3.2) in terms of the normalized wall shear stress (NWSS) of the boundary components. 
Specifically **Theorem 3.1**.: _Let \(\Omega\subset\mathbb{S}^{2}\) be a smooth domain and \(\xi:\Omega\to\mathbb{R}\) be a non-vanishing smooth solution to \(\Delta\xi+2\xi=0\) in \(\Omega\) and \(\xi=0\) along \(\partial\Omega\). Assume that_ \[\bar{\tau}(\Gamma)\leq 1\text{ for all }\Gamma\in\pi_{0}(\partial\Omega), \tag{3.11}\] _then \(\Omega\) is a geodesic disk and \(\xi\) is rotationally symmetric._ Proof.: Observe that (3.11) and \(\xi=0\) along \(\partial\Omega\) imply that \[\max_{\partial\Omega}\mathcal{P}\leq\xi_{\text{max}}^{2}.\] Let \(x\in\Omega\) be a point where \(\xi^{2}\) attains its maximum, hence \(\xi(x)\nabla\xi(x)=0\) and \(\xi(x)\neq 0\), thus \(|\nabla\xi|(x)=0\) and \(\mathcal{P}(x)=\xi_{\text{max}}^{2}\). Therefore, the strong maximum principle implies that \(\mathcal{P}=\xi_{\text{max}}^{2}\) on \(\overline{\Omega}\) and \(\nabla^{2}\xi=-\xi g_{\mathbb{S}^{2}}\); now it follows that \(\Omega\) is a geodesic ball and \(\xi\) is rotationally symmetric using the same argument as in the proof of Lemma 2.4 of [8]. Now we will introduce an energy function along level sets of solutions to (3.2) and we will show an important property of such energy function when \(\overline{\tau}(\mathcal{U})\leq 1\). Given \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\), we define the energy function \(E:[0,\xi_{\text{max}})\to\mathbb{R}\) given by \[E(t)=\frac{1}{\xi_{\text{max}}^{2}-t^{2}}\int_{\text{cl}(\mathcal{U})\cap\{ \xi=t\}}|\nabla\xi|. \tag{3.12}\] Note that \(E\) is continuous in regular values of \([0,\xi_{\text{max}})\) since the Lebesgue integral is absolutely continuous. **Lemma 3.3**.: _Let \((\Omega,\xi)\) be a solution to (3.2). Suppose that there exists \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\) such that \(\overline{\tau}(\mathcal{U})\leq 1\). Then \(E\) is non-increasing._ Proof.: Given \(\epsilon>0\), consider the inner region \[\mathcal{U}_{\epsilon}:=\{p\in\mathcal{U}:\ \xi(p)<\xi_{\text{max}}-\epsilon\} \subset\mathcal{U}. \tag{3.13}\] Since the critical values of \(\xi\) in \(\mathcal{U}\) are isolated (cf. [54]), then we can choose \(\epsilon\) small enough so that \(\xi_{\text{max}}-\epsilon\) is a regular value, and hence, the set \(\mathcal{U}\cap\{\xi=\xi_{\text{max}}-\epsilon\}\) is the disjoint union of a finite number of analytic curves by the Lojasiewicz structure theorem. Using (3.10) and the maximum principle, we obtain that \[\max_{\mathcal{U}_{\epsilon}}\mathcal{P}\leq\max_{\partial\mathcal{U}_{ \epsilon}}\mathcal{P}.\] On the one hand, \(|\nabla\xi|\to 0\) as \(\epsilon\to 0\) yields \[\lim_{\epsilon\to 0^{+}}\max_{\{\xi=\xi_{\text{max}}-\epsilon\}}\mathcal{P}=\xi_ {\text{max}}^{2}.\] On the other hand, from Remark 3.1 and \(\overline{\tau}(\mathcal{U})\leq 1\), we get \(|\nabla\xi|^{2}+\xi^{2}=|\nabla\xi|^{2}\leq\xi_{\text{max}}^{2}\) along \(\partial\Omega\cap\text{cl}(\mathcal{U})\). 
Hence, we conclude that \[|\nabla\xi|^{2}+\xi^{2}=\mathcal{P}\leq\xi_{\text{max}}^{2}\quad\text{in}\quad\mathcal{U}.\] Then, using \(\Delta\xi+2\xi=0\) in \(\Omega\), we obtain \[\operatorname{div}\left(\frac{\nabla\xi}{\xi_{\text{max}}^{2}-\xi^{2}}\right)=\frac{2\xi}{(\xi_{\text{max}}^{2}-\xi^{2})^{2}}\left(\mathcal{P}-\xi_{\text{max}}^{2}\right),\] and, since \(\xi\geq 0\) in \(\Omega\) and \(\mathcal{P}\leq\xi_{\text{max}}^{2}\) in \(\mathcal{U}\), we get \[\operatorname{div}\left(\frac{\nabla\xi}{\xi_{\text{max}}^{2}-\xi^{2}}\right)\leq 0\quad\text{in}\quad\mathcal{U}.\] Now, we choose regular values \(0\leq t_{1}<t_{2}<\xi_{\text{max}}\) and we integrate the above inequality over the finite perimeter set \(\{t_{1}\leq\xi\leq t_{2}\}\). Then, applying the Divergence Theorem: \[0\geq \int_{\{t_{1}\leq\xi\leq t_{2}\}}\operatorname{div}\left(\frac{\nabla\xi}{\xi_{\text{max}}^{2}-\xi^{2}}\right)\] \[=\int_{\{\xi=t_{1}\}}\left\langle\frac{\nabla\xi}{\xi_{\text{max}}^{2}-\xi^{2}},\nu\right\rangle+\int_{\{\xi=t_{2}\}}\left\langle\frac{\nabla\xi}{\xi_{\text{max}}^{2}-\xi^{2}},\nu\right\rangle\] \[=-\frac{1}{\xi_{\text{max}}^{2}-t_{1}^{2}}\int_{\{\xi=t_{1}\}}|\nabla\xi|+\frac{1}{\xi_{\text{max}}^{2}-t_{2}^{2}}\int_{\{\xi=t_{2}\}}|\nabla\xi|\] \[=-E(t_{1})+E(t_{2}),\] where we have used that \(\nu=-\nabla\xi/|\nabla\xi|\) along \(\{\xi=t_{1}\}\) and \(\nu=\nabla\xi/|\nabla\xi|\) along \(\{\xi=t_{2}\}\) are the outer unit normals, respectively. Therefore, \[E(t_{1})\geq E(t_{2})\text{ for }0\leq t_{1}<t_{2}<\xi_{\text{max}}\text{ regular values}.\] Since \(E\) is continuous and the set of critical values is finite, \(E\) is a non-increasing function. ### Expected critical height As said above, the NWSS on the model solutions will allow us to define an expected value for any solution to (3.2). Specifically: **Definition 3.5**.: _Let \((\Omega,\xi)\) be a solution to (3.2) and \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\). Set \(\tau_{0}\) as in (3.6). If \(\Omega\) is not a topological disk, then we define the expected critical height of \(\mathcal{U}\) as follows:_ * _if_ \(\overline{\tau}(\mathcal{U})<\tau_{0}\)_, we set_ \[\bar{R}(\mathcal{U})=\overline{\tau}_{-}^{-1}\left(\overline{\tau}(\mathcal{U})\right),\] (3.14) * _if_ \(\overline{\tau}(\mathcal{U})\geq\tau_{0}\)_, we set_ \[\bar{R}(\mathcal{U})=\overline{\tau}_{+}^{-1}\left(\overline{\tau}(\mathcal{U})\right).\] (3.15) _If \(\Omega\) is a topological disk, then we set \(\bar{R}(\mathcal{U})=1\)._ Observe that in (3.14) we are assuming implicitly that, if \(\Omega\) is not a topological disk, then \(\overline{\tau}(\mathcal{U})>1\) for each connected component \(\mathcal{U}\) of \(\Omega\setminus\text{Max}(\xi)\). In order to prove that Definition 3.5 is consistent we first show: **Theorem 3.2**.: _Let \((\Omega,\xi)\) be a solution to (3.2) and \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\). If \(\overline{\tau}(\mathcal{U})\leq 1\), then \((\Omega,\xi)\equiv(\Omega_{1},\xi_{1})\)._ Proof.: By Lemma 3.1, \(\text{Max}(\xi)=\gamma^{0}\cup\gamma^{1}\), where \(\gamma^{1}\) is the disjoint union of a finite number of analytic curves and \(\gamma^{0}\) is a finite set of isolated points. We first show the result if \(\gamma^{1}=\emptyset\). In such a case, \(\Omega\setminus\text{Max}(\xi)\) would be connected and Theorem 3.1 implies that \((\Omega,\xi)\) is a rotationally symmetric solution on a disk, so it must be \((\Omega_{1},\xi_{1})\) after a rescaling. 
Hence, we can assume that \(\gamma^{1}\) is non-empty and meets \(\text{cl}(\mathcal{U})\). Since \(\gamma^{1}\cap\text{cl}(\mathcal{U})\) is a set of analytic closed curves, by the Lojasiewicz inequality (see [7, Theorem 2.1]), there exists a neighborhood \(V\) of \(\gamma^{1}\cap\text{cl}(\mathcal{U})\), \(V\cap\gamma^{0}=\emptyset\), and two constants \(c>0\) and \(1/2\leq\theta<1\) such that \[\xi_{\text{max}}-\xi<1\text{ and }\left|\nabla\xi\right|(x)\geq c(\xi_{\text{max}}-\xi(x))^{\theta},\quad\forall x\in V.\] Then, since \(\{\xi=t\}\cap\text{cl}(\mathcal{U})\subset V\) as \(t\to\xi_{\text{max}}\), using the energy function (3.12) we get the following chain of inequalities \[\frac{1}{\xi_{\text{max}}^{2}}\int_{\text{cl}(\mathcal{U})\cap\{\xi=0\}}\left|\nabla\xi\right|=E(0)\geq^{\text{Lemma 3.3}}E(t) =\frac{1}{\xi_{\text{max}}^{2}-t^{2}}\int_{\text{cl}(\mathcal{U})\cap\{\xi=t\}}\left|\nabla\xi\right|\] \[=\frac{1}{(\xi_{\text{max}}-t)(\xi_{\text{max}}+t)}\int_{\text{cl}(\mathcal{U})\cap\{\xi=t\}}\left|\nabla\xi\right|\] \[\geq\frac{c}{(\xi_{\text{max}}-t)^{1-\theta}(\xi_{\text{max}}+t)}\left|\{\xi=t\}\cap\text{cl}(\mathcal{U})\right|.\] We can assume that all \(t\)'s in the previous inequalities are regular values of \(\xi\), as we know by [54, Theorem 1] that if \(Z\) is the set of critical points of \(\xi\) then \(\xi(K\cap Z)\) is finite for any compact set \(K\subset\Omega\). Then \(\left|\{\xi=t\}\cap\text{cl}(\mathcal{U})\right|>0\). Letting \(t\to\xi_{\text{max}}\) we obtain that \(E(0)=+\infty\), a contradiction. Finally, we are ready to prove that Definition 3.5 is well-posed: **Theorem 3.3**.: _Let \((\Omega,\xi)\) be a solution to (3.2) and \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\). Then, the expected critical height of \(\mathcal{U}\), \(\bar{R}(\mathcal{U})\), is well defined and takes values in \([0,1]\). Moreover, \(\bar{R}(\mathcal{U})=1\) if, and only if, \((\Omega,\xi)\equiv(\Omega_{1},\xi_{1})\)._ Proof.: First, from Remark 3.1, any \(\mathcal{U}\in\pi_{0}(\Omega\setminus\text{Max}(\xi))\) must satisfy \(\partial\Omega\cap\overline{\mathcal{U}}\neq\emptyset\) and, from (3.3), \(\overline{\tau}(\mathcal{U})\geq 0\). Observe that, by definition, \(\bar{R}(\mathcal{U})=1\) if, and only if, \(\overline{\tau}(\mathcal{U})\in[0,1]\). Then, Theorem 3.2 implies that \((\Omega,\xi)=(\Omega_{1},\xi_{1})\) up to a rotation and a dilation. Therefore, there exists a bijective correspondence between \(\bar{R}(\mathcal{U})\in[0,1)\) and \(\overline{\tau}(\mathcal{U})\in(1,+\infty)\) by Lemma 3.2, which finishes the proof. ## 4 Comparison geometry Now, using the expected critical height we can associate to each general solution \((\Omega,\xi)\) one of the model solutions defined in Section 2. **Definition 4.1**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\in\pi_{0}\left(\Omega\setminus\text{Max}(\xi)\right)\) and set \(\bar{R}:=R(\mathcal{U})\) the expected critical height of \(\mathcal{U}\). 
Then we say that \((\Omega_{\bar{R}},\xi_{\bar{R}})\) is the associated model solution to \(\xi\) inside \(\mathcal{U}\)._ **Remark 4.1**.: _From now on, since \(\bar{R}\) is already fixed, we omit the parameter \(\bar{R}\) and simply write \((\bar{\Omega},\bar{\xi}):=(\Omega_{\bar{R}},\xi_{\bar{R}})\); we also denote the endpoints of the interval of definition and the scale parameter of \(\bar{\xi}\) by \(\bar{r}_{\pm}:=r_{\pm}(\bar{R})\) and \(\bar{\alpha}:=\alpha(\bar{R})\), respectively._ In this section, we will construct a function to compare the level sets of \(\xi\) with those of our model solution. To construct the comparison function, we need our general function \(\xi\) to be normalized. **Definition 4.2**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\in\pi_{0}\left(\Omega\setminus\text{Max}(\xi)\right)\) and set \(\bar{R}:=R(\mathcal{U})\) the expected critical height of \(\mathcal{U}\). Then we will say that the solution \(\xi\) is normalized if \(\xi_{\text{max}}=(\xi_{\bar{R}})_{\text{max}}\)._ Let us suppose that \(\xi\) is normalized and consider the function \(F:[0,\xi_{\text{max}}]\times[\bar{r}_{-},\bar{r}_{+}]\to\mathbb{R}\) given by \[F(\xi,r)=\xi-\bar{\alpha}(1-r\operatorname{arctanh}(r)+\omega r),\] where \(\omega\) is given by (2.2) and \(\bar{\alpha}\) is given in Proposition 2.1. Hence \[\bar{\alpha}^{-1}\frac{\partial F}{\partial r}(r)=\operatorname{arctanh}(r)+\frac{r}{1-r^{2}}-\omega,\] and therefore \(\frac{\partial F}{\partial r}=0\) if, and only if, \(r=\bar{R}\). As a consequence of the Implicit Function Theorem, there exist two smooth functions \[\chi_{-}:[0,\xi_{\text{max}}]\to[\bar{r}_{-},\bar{R}]\quad\text{and}\quad\chi_{+}:[0,\xi_{\text{max}}]\to[\bar{R},\bar{r}_{+}]\] such that \[F(\xi,\chi_{\pm}(\xi))=0\quad\text{for all}\quad\xi\in[0,\xi_{\text{max}}].\] Now we can define a _pseudo-radial function_ following [1, Definition 3]. **Definition 4.3**.: _Let \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) be a connected component and set \(\tau_{0}\) as in (3.6). Then:_ * _If_ \(\overline{\tau}(\mathcal{U})\geq\tau_{0}\)_, we define the pseudo-radial function_ \(\Psi_{+}\) _associated with the region_ \(\mathcal{U}\) _as_ \[\begin{split}\Psi_{+}:&\mathcal{U}\to[\bar{R},\bar{r}_{+}]\\ p&\mapsto\Psi_{+}(p):=\chi_{+}\left(\xi(p)\right).\end{split}\] (4.1) * _If_ \(\overline{\tau}(\mathcal{U})<\tau_{0}\)_, we define the pseudo-radial function_ \(\Psi_{-}\) _associated with the region_ \(\mathcal{U}\) _as_ \[\begin{split}\Psi_{-}:&\mathcal{U}\to[\bar{r}_{-},\bar{R}]\\ p&\mapsto\Psi_{-}(p):=\chi_{-}\left(\xi(p)\right).\end{split}\] (4.2) **Remark 4.2**.: _Note that if \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) is such that \(\overline{\tau}(\mathcal{U})=\tau_{0}\) then we can define the pseudo-radial function associated to \(\mathcal{U}\) as \(\Psi_{+}:\mathcal{U}\to[0,\bar{r}]\) or as \(\Psi_{-}:\mathcal{U}\to[-\bar{r},0]\). We can use both definitions because the model solution \((\Omega_{0},\xi_{0})\) is symmetric with respect to the equator \(\{r=0\}\) in cylindrical coordinates in the sphere, and both components of \(\Omega_{0}\setminus\operatorname{Max}(\xi_{0})=\Omega_{0}\setminus\{r=0\}\) have the same NWSS equal to \(\tau_{0}\). To fix one, we choose to define the pseudo-radial function as (4.1) when \(\overline{\tau}(\mathcal{U})=\tau_{0}\)._ **Remark 4.3**.: _In order to perform some formal computations using the pseudo-radial function, it is convenient to fix some conventions about the notation. 
First, we will denote \(\Psi_{\pm}=\Psi\) and \(\chi_{\pm}=\chi\) and we will do the computations considering both possibilities at the same time. On the other hand, note that \(\chi\) is a function of \(\xi\) as a real parameter, while \(\Psi\) is a function defined on a domain in the sphere. Also, we denote the derivatives with respect to \(\xi\) with a dot. For example, \(\dot{\chi}\) will denote the derivative of \(\chi\) with respect to the parameter \(\xi\)._ A straightforward computation gives \[0=\frac{dF}{d\xi}(\xi,\chi_{\pm}(\xi))=\frac{\partial F}{\partial\xi}(\xi, \chi_{\pm}(\xi))+\frac{\partial F}{\partial r}(\xi,\chi_{\pm}(\xi))\dot{\chi} _{\pm}(\xi)\] and \[0=\frac{d^{2}F}{d\xi^{2}}(\xi,\chi_{\pm}(\xi))=\frac{\partial^{2}F}{\partial r ^{2}}(\xi,\chi_{\pm}(\xi))\dot{\chi}_{\pm}(\xi)^{2}+\frac{\partial F}{ \partial r}(\xi,\chi_{\pm}(\xi))\ddot{\chi}_{\pm}(\xi)\] where we have used that \(\frac{\partial F}{\partial\xi}(\xi,\chi_{\pm}(\xi))=1\) and \(\frac{\partial^{2}F}{\partial\xi^{2}}(\xi,\chi_{\pm}(\xi))=0=\frac{\partial^{ 2}F}{\partial r\partial\xi}(\xi,\chi_{\pm}(\xi))\). We get \[\dot{\chi}_{\pm}(\xi)=-\frac{\partial F}{\partial r}(\xi,\chi_{\pm}(\xi))^{-1 }=\mp\sqrt{1-\chi_{\pm}(\xi)^{2}}\left|\nabla\bar{\xi}\right|(\chi_{\pm}(\xi ))^{-1} \tag{4.3}\] and \[\ddot{\chi}_{\pm}(\xi)=-\frac{\partial^{2}F}{\partial r^{2}}(\xi,\chi_{\pm}( \xi))\frac{\partial F}{\partial r}(\xi,\chi_{\pm}(\xi))^{-3}=\mp\frac{2\bar{ \alpha}\left|\nabla\bar{\xi}\right|(\chi_{\pm}(\xi))^{-3}}{\sqrt{1-\chi_{\pm}( \xi)^{2}}}. \tag{4.4}\] We can also compute \[\nabla\Psi=\dot{\chi}\nabla\xi\,\,\,\text{and}\,\,\,\nabla^{2}\Psi=\dot{\chi }\nabla^{2}\xi+\ddot{\chi}d\xi\otimes d\xi.\] ### Gradient estimates First, we will use the pseudo-radial function to compare the norm of the gradient of the solution \(\xi\) along level sets with that of the model solution \(\bar{\xi}\). Suppose that the function \(\xi\) is normalized and consider the comparison function \[W_{\bar{R}}=\left|\nabla\bar{\xi}\right|^{2}\circ\Psi=\frac{1-\Psi^{2}}{\Psi^{2 }}\left(\xi-\frac{\bar{\alpha}}{1-\Psi^{2}}\right)^{2}, \tag{4.5}\] and set \[W=\left|\nabla\xi\right|^{2}\;\text{in}\;\mathcal{U}. \tag{4.6}\] Our aim is to prove the following result: **Theorem 4.1**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\in\pi_{0}\left(\Omega\setminus\text{Max}(\xi)\right)\) and set \(\bar{R}=R(\mathcal{U})\in[0,1)\), the expected critical height of \(\mathcal{U}\). Let \((\bar{\Omega},\bar{\xi})\) be its associated model solution and assume that \(\xi\) is normalized. Then, it holds_ \[W(p)\leq W_{\bar{R}}(p)\;\text{for all}\;p\in\mathcal{U},\] _moreover, if the equality holds at one single point of \(\mathcal{U}\), then \((\Omega,\xi)\equiv(\bar{\Omega},\bar{\xi})\)._ To prove the previous theorem we will need to know how the previous functions behaves near the top level set \(\text{Max}(\xi)\). The following lemma will be useful later (cf. [1]). **Lemma 4.1**.: _Let \((\Omega,\xi)\), \((\bar{\Omega},\bar{\xi})\) and \(\mathcal{U}\) be as in Theorem 4.1 and consider \(W_{\bar{R}}\) defined in (4.5). Then, if \(p\in\text{Max}(\xi)\), it holds:_ \[\lim_{x\in\mathcal{U},x\to p}\frac{W_{\bar{R}}}{\xi_{\text{max}}-\xi}=4\xi_{ \text{max}}.\] Proof.: The proof of this lemma is a simple computation using the Taylor expansions of \(W_{\bar{R}}\) and \(\xi_{\text{max}}-\xi\) as functions of \(\Psi\). 
Observe that taking \(z=\Psi-\bar{R}\) then it is clear that \[\begin{split}\xi_{\text{max}}-\xi=\frac{\bar{\alpha}}{1-\bar{R}^ {2}}-\xi&=\frac{\bar{\alpha}}{(1-\bar{R}^{2})^{2}}(\Psi-\bar{R}) ^{2}+O((\Psi-\bar{R})^{3})\\ &=\frac{\bar{\alpha}}{(1-\bar{R}^{2})^{2}}z^{2}+O(z^{3}).\end{split} \tag{4.7}\] Note that we have done the Taylor expansion of \(\xi\circ\Psi\) at \(\Psi=\bar{R}\), although \(\Psi(p)\) is not defined in \(\text{Max}(\xi)\). However, it is clear that we can extend \(\Psi\) to \(\text{Max}(\xi)\) by \(\Psi(p)=\bar{R}\) for all \(p\in\text{Max}(\xi)\), so the previous expansion makes sense. On the other hand, observe that \[W_{\bar{R}}(\Psi)=\frac{1-\Psi^{2}}{\Psi^{2}}\left(\xi-\frac{\bar{\alpha}}{1- \Psi^{2}}\right)^{2}=(1-\Psi^{2})\left(\frac{\partial\xi}{\partial\Psi}\right) ^{2}.\] Hence \[\frac{\partial W_{\bar{R}}}{\partial\Psi}(\bar{R})=0\text{ and }\frac{\partial^{2}W_{\bar{R}}}{ \partial\Psi^{2}}(\bar{R})=2(1-\bar{R}^{2})\left(\frac{\partial^{2}\xi}{ \partial\Psi^{2}}(\bar{R})\right)^{2}=\frac{8\bar{\alpha}^{2}}{(1-\bar{R}^{2} )^{3}},\] so the Taylor expansion of \(W_{\bar{R}}\) up to second order is \[W_{\bar{R}}=\frac{4\bar{\alpha}^{2}}{(1-\bar{R}^{2})^{3}}z^{2}+O(z^{3}). \tag{4.8}\] Then it is clear that \[\lim_{x\in\mathcal{U},x\to p}\frac{W_{\bar{R}}}{\xi_{\text{max}}-\xi}=\lim_{z \to 0}\frac{\frac{4\bar{\alpha}^{2}}{(1-\bar{R}^{2})^{3}}z^{2}+O(z^{3})}{ \frac{\bar{\alpha}}{(1-\bar{R}^{2})^{2}}z^{2}+O(z^{3})}=\frac{4\bar{\alpha}}{ 1-\bar{R}^{2}}\] and the result is proved. Next, we shall establish some differential inequalities. Firstly, we want to bound the norm of the hessian of \(\xi\) in terms of the functions defined in (4.5) and (4.6). From (2.6), we obtain the following expression: \[0 \leq \left|\nabla^{2}\xi+\frac{2\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{ 2})\xi-\bar{\alpha}\right)^{2}}d\xi\otimes d\xi+\left(\xi-\frac{\Psi^{2}\bar{ \alpha}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left|\nabla\xi\right|^ {2}\right)g_{\mathbb{S}^{2}}\right|^{2}\] \[= \left|\nabla^{2}\xi\right|^{2}+\frac{4\bar{\alpha}^{2}\Psi^{4}}{ \left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{4}}\left|d\xi\otimes d\xi\right|^ {2}+\left(\xi-\frac{\Psi^{2}\bar{\alpha}}{\left((1-\Psi^{2})\xi-\bar{\alpha} \right)^{2}}\left|\nabla\xi\right|^{2}\right)\left|g_{\mathbb{S}^{2}}\right|^ {2}\] \[+\frac{4\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha} \right)^{2}}\left(\xi-\frac{\Psi^{2}\bar{\alpha}}{\left((1-\Psi^{2})\xi-\bar{ \alpha}\right)^{2}}\left|\nabla\xi\right|^{2}\right)\left\langle d\xi\otimes d \xi,g_{\mathbb{S}^{2}}\right\rangle\] \[+\frac{4\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha} \right)^{2}}\left\langle\nabla^{2}\xi,d\xi\otimes d\xi\right\rangle+2\left( \xi-\frac{\Psi^{2}\bar{\alpha}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2} }\left|\nabla\xi\right|^{2}\right)\left\langle\nabla^{2}\xi,g_{\mathbb{S}^{2} }\right\rangle.\] Using that \[\left|d\xi\otimes d\xi\right|^{2}=\left|\nabla\xi\right|^{4},\ \left\langle d\xi\otimes d\xi,g_{\mathbb{S}^{2}}\right\rangle=\left|\nabla\xi \right|^{2}\text{ and }\left\langle\nabla^{2}\xi,g_{\mathbb{S}^{2}}\right\rangle=\Delta\xi,\] we get \[0 \leq \left|\nabla^{2}\xi\right|^{2}+\frac{4\bar{\alpha}\Psi^{2}}{\left( (1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left\langle\nabla^{2}\xi,d\xi\otimes d \xi\right\rangle-2\xi^{2}+\frac{2\bar{\alpha}^{2}\Psi^{4}}{\left((1-\Psi^{2}) \xi-\bar{\alpha}\right)^{4}}\left|\nabla\xi\right|^{4}\] \[+\frac{4\xi\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{ 
\alpha}\right)^{2}}\left|\nabla\xi\right|^{2},\] and then we reach the inequality \[\begin{split}\left|\nabla^{2}\xi\right|^{2}\geq&- \frac{4\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left< \nabla^{2}\xi,d\xi\otimes d\xi\right>+2\xi^{2}\\ &-\frac{2\bar{\alpha}^{2}\Psi^{4}}{\left((1-\Psi^{2})\xi-\bar{ \alpha}\right)^{4}}\left|\nabla\xi\right|^{4}-\frac{4\xi\bar{\alpha}\Psi^{2}}{ \left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left|\nabla\xi\right|^{2}.\end{split} \tag{4.9}\] To compute the scalar product \(\left<\nabla^{2}\xi,d\xi\otimes d\xi\right>\) we use the following lemma which is a standard computation in geodesic coordinates. **Lemma 4.2**.: _Let \((M,g)\) be a \(n\)-dimensional Riemannian manifold and \(f\in\mathcal{C}^{\infty}(M)\). Then:_ \[\left<\nabla\left|\nabla f\right|^{2},\nabla f\right>=2\nabla^{2}f(\nabla f, \nabla f)=2\left<\nabla^{2}f,df\otimes df\right>. \tag{4.10}\] Note that \[\nabla W_{\bar{R}}=\frac{\partial}{\partial\Psi}\left(\frac{1-\Psi^{2}}{\Psi^ {2}}\left(\xi-\frac{\bar{\alpha}}{1-\Psi^{2}}\right)^{2}\right)\dot{\chi}(\xi )\nabla\xi=-2\left(\xi+\frac{\bar{\alpha}}{1-\Psi^{2}}\right)\nabla\xi,\] and then using (4.10) we get \[\left<\nabla^{2}\xi,d\xi\otimes d\xi\right>=\frac{1}{2}\left<\nabla(W-W_{\bar {R}}),\nabla\xi\right>-\left(\xi+\frac{\bar{\alpha}}{1-\Psi^{2}}\right)\left| \nabla\xi\right|^{2}.\] We use the previous equality and (4.9) to get \[\begin{split}\left|\nabla^{2}\xi\right|^{2}\geq&- \frac{2\bar{\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}} \left<\nabla(W-W_{\bar{R}}),\nabla\xi\right>+2\xi^{2}\\ &-\frac{2\bar{\alpha}^{2}\Psi^{4}}{\left((1-\Psi^{2})\xi-\bar{ \alpha}\right)^{4}}\left|\nabla\xi\right|^{4}+\frac{4\bar{\alpha}^{2}\Psi^{2} }{\left(1-\Psi^{2}\right)\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left| \nabla\xi\right|^{2}.\end{split} \tag{4.11}\] Now, we want to apply the maximum principle to the function \(W-W_{\bar{R}}\), so we want this function to satisfy an elliptic inequality. With this in mind, we compute the laplacian of this expression. Using the Bochner's formula, \[\Delta(W-W_{\bar{R}})=2\left|\nabla^{2}\xi\right|^{2}-4\left(\xi+\frac{\bar{ \alpha}}{1-\Psi^{2}}\right)+\frac{4\Psi^{2}\bar{\alpha}}{\left(1-\Psi^{2} \right)\left((1-\Psi^{2})\xi-\bar{\alpha}\right)}W.\] From (4.11), we get \[\begin{split}\Delta(W-W_{\bar{R}})\geq&-\frac{4\bar {\alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left<\nabla (W-W_{\bar{R}}),\nabla\xi\right>+\frac{8\bar{\alpha}^{2}\Psi^{2}}{\left(1- \Psi^{2}\right)\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}W\\ &-\frac{4\bar{\alpha}^{2}\Psi^{4}}{\left((1-\Psi^{2})\xi-\bar{ \alpha}\right)^{4}}W^{2}-\frac{4\xi\bar{\alpha}}{1-\Psi^{2}}+\frac{4\Psi^{2} \bar{\alpha}}{\left(1-\Psi^{2}\right)\left((1-\Psi^{2})\xi-\bar{\alpha} \right)}W.\end{split}\] Then, using that \[W_{\bar{R}}=\frac{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}{\Psi^{2}(1-\Psi ^{2})}\] and, after some simplifications, we get \[\begin{split}\Delta(W-W_{\bar{R}})\geq&-\frac{4\bar{ \alpha}\Psi^{2}}{\left((1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\left\langle \nabla(W-W_{\bar{R}}),\nabla\xi\right\rangle\\ &+\frac{4\Psi^{2}\bar{\alpha}}{\left((1-\Psi^{2})\xi-\bar{\alpha }\right)^{2}}\left(\xi-\frac{\Psi^{2}\bar{\alpha}}{\left((1-\Psi^{2})\xi-\bar {\alpha}\right)^{2}}\left|\nabla\xi\right|^{2}\right)(W-W_{\bar{R}}).\end{split} \tag{4.12}\] Hence, we have arrived at a differential inequality that is not elliptic at the quantity \(W-W_{\bar{R}}\). 
To obtain an elliptic relation, we define the function \(F_{\beta}=\beta\cdot(W-W_{\bar{R}})\), where \(\beta=\beta(\Psi)>0\) is a function that we will determine later. First, we observe that \[\Delta F_{\beta}=\beta\cdot\Delta(W-W_{\bar{R}})+2\left\langle\nabla\beta, \nabla(W-W_{\bar{R}})\right\rangle+(W-W_{\bar{R}})\Delta\beta. \tag{4.13}\] We compute the second and the third terms of the sum. Using that \(\nabla\beta=\beta^{\prime}(\Psi)\dot{\chi}(\xi)\nabla\xi\) (we use \(\beta^{\prime}\) to denote the derivative with respect to \(\Psi\)), we have that \[2\left\langle\nabla\beta,\nabla(W-W_{\bar{R}})\right\rangle=2\dot{\chi}\frac{ \beta^{\prime}}{\beta}\left\langle\beta\nabla(W-W_{\bar{R}}),\nabla\xi\right\rangle.\] It is clear that \[\left\langle\nabla F_{\beta},\nabla\xi\right\rangle=\dot{\chi}\beta^{\prime}W (W-W_{\bar{R}})+\left\langle\beta\nabla(W-W_{\bar{R}}),\nabla\xi\right\rangle,\] so the second term of the sum is \[2\left\langle\nabla\beta,\nabla(W-W_{\bar{R}})\right\rangle=2\dot{\chi}\frac{ \beta^{\prime}}{\beta}\left\langle\nabla F_{\beta},\nabla\xi\right\rangle-2 \dot{\chi}^{2}\left(\frac{\beta^{\prime}}{\beta}\right)^{2}WF_{\beta}.\] For the third term of the sum, using the chain rule we get \[\Delta\beta=-2\xi\dot{\chi}\beta^{\prime}+\left(\ddot{\chi}\beta^{\prime}+ \dot{\chi}^{2}\beta^{\prime\prime}\right)W,\] so the third term of (4.13) is \[(W-W_{\bar{R}})\Delta\beta=-2\xi\dot{\chi}\frac{\beta^{\prime}}{\beta}F_{ \beta}+\left(\ddot{\chi}\frac{\beta^{\prime}}{\beta}+\dot{\chi}^{2}\frac{ \beta^{\prime\prime}}{\beta}\right)WF_{\beta}.\] Finally observe that from (4.3) and (4.4) we obtain, using that \(\Psi(p)=\chi(\xi(p))\), that \[\dot{\chi}=\frac{\Psi(1-\Psi^{2})}{(1-\Psi^{2})\xi-\bar{\alpha}}\quad\text{and }\quad\ddot{\chi}=\frac{2\bar{\alpha}\Psi^{3}(1-\Psi^{2})}{\left((1-\Psi^{2}) \xi-\bar{\alpha}\right)^{3}},\] so combining (4.12) and (4.13), after some simplifications, it holds \[\begin{split}\Delta F_{\beta}\geq&\frac{2\Psi(1-\Psi^{ 2})}{(1-\Psi^{2})\xi-\bar{\alpha}}\left(\frac{\beta^{\prime}}{\beta}-\frac{2 \bar{\alpha}\Psi}{(1-\Psi^{2})\left((1-\Psi^{2})\xi-\bar{\alpha}\right)}\right) \langle\nabla F_{\beta},\nabla\xi\rangle\\ &-\frac{2\xi\Psi(1-\Psi^{2})}{(1-\Psi^{2})\xi-\bar{\alpha}} \left(\frac{\beta^{\prime}}{\beta}-\frac{2\Psi\bar{\alpha}}{(1-\Psi^{2})\left( (1-\Psi^{2})\xi-\bar{\alpha}\right)}\right)F_{\beta}\\ &+\frac{\Psi^{2}(1-\Psi^{2})^{2}\left|\nabla\xi\right|^{2}}{\left( (1-\Psi^{2})\xi-\bar{\alpha}\right)^{2}}\Bigg{(}\left(\frac{\beta^{\prime}}{ \beta}\right)^{\prime}-\left(\frac{\beta^{\prime}}{\beta}\right)^{2}+\frac{6 \Psi\bar{\alpha}}{(1-\Psi^{2})\left((1-\Psi^{2})\xi-\bar{\alpha}\right)}\frac {\beta^{\prime}}{\beta}\\ &-\frac{4\Psi^{2}\bar{\alpha}^{2}}{(1-\Psi^{2})^{2}\left((1- \Psi^{2})\xi-\bar{\alpha}\right)^{2}}\Bigg{)}F_{\beta}.\end{split} \tag{4.14}\] Now we are ready to prove Theorem 4.1. Proof of Theorem 4.1:.: We give the proof in two parts: first, we will prove that \(F_{\beta}\) satisfies an elliptic inequality, hence \(W\leq W_{\bar{R}}\) in \(\mathcal{U}\) and \(W\equiv W_{\bar{R}}\) in \(\mathcal{U}\) if \(W(p)=W_{\bar{R}}(p)\) at one single point \(p\in\mathcal{U}\) by the maximum principle. Second, we will show the rigidity statement in the latter case. 
Define \(F_{\beta}=\beta\cdot(W-W_{\bar{R}})\), where \[\beta(\Psi):=\frac{\sqrt{1-\Psi^{2}}}{\sqrt{W_{\bar{R}}}}>0\] is a solution to the differential equation \[\frac{\beta^{\prime}}{\beta}-\frac{2\Psi\bar{\alpha}}{\left(1-\Psi^{2}\right) \left((1-\Psi^{2})\xi-\bar{\alpha}\right)}=0.\] Take \(\epsilon>0\) small enough and define \(\mathcal{U}_{\epsilon}\) as in (3.13), using (4.14) we get that \(F_{\beta}\) satisfies the elliptic inequality \[\Delta F_{\beta}-\frac{8\bar{\alpha}\Psi^{4}(1-\Psi^{2})\xi}{\left((1-\Psi^{2 })\xi-\bar{\alpha}\right)^{4}}F_{\beta}\geq 0\text{ in }\mathcal{U}_{\epsilon}.\] Hence, using Lemma 4.1 and the Reverse Lojasiewicz Inequality [7, Theorem 2.2], it follows that \[\lim_{x\in\mathcal{U},x\rightarrow\text{Max}(\xi)}\frac{W}{\sqrt{W_{\bar{R}} }}=\lim_{x\in\mathcal{U},x\rightarrow\text{Max}(\xi)}\frac{\left|\nabla\xi \right|^{2}}{2\sqrt{\xi_{\text{max}}}\sqrt{\xi_{\text{max}}-\xi}}=0,\] so it is clear that \[\lim_{x\in\mathcal{U},x\rightarrow\text{Max}(\xi)}F_{\beta}(p)=\lim_{x\in \mathcal{U},x\rightarrow\text{Max}(\xi)}\sqrt{1-\Psi^{2}}\frac{W}{\sqrt{W_{ \bar{R}}}}-\sqrt{1-\Psi^{2}}\sqrt{W_{\bar{R}}}=0.\] Therefore, applying the maximum principle to \(F_{\beta}\) in \(\mathcal{U}_{\epsilon}\) and letting \(\epsilon\) tend to zero (such that \(\xi_{\text{max}}-\epsilon\) is always a regular value) we conclude the first part of the proof. We continue with the second part of the proof. We proceed as in [5, Theorem 4.2]. If we assume that \(W(p)=W_{\bar{R}}(\Psi(\xi(p)))\) in \(\mathcal{U}\), then it follows that \(W=\left|\nabla\xi\right|^{2}\) is a positive function in \(\mathcal{U}\) that only depends on \(\xi\) and hence it is constant along level sets. Therefore, there are no critical points of \(\xi\) in \(\mathcal{U}\), so we can parametrize \(\mathcal{U}\) by level sets with coordinates \((\xi,\theta)\). In these coordinates, the metric on the sphere has the form \[g_{\mathbb{S}^{2}}=\frac{1}{W}d\xi^{2}+G(\xi,\theta)d\theta^{2},\] where \(G\) is a positive function. It is easy to compute the hessian of \(\xi\) on level sets in these coordinates: \[\nabla^{2}_{ij}\xi=\partial^{2}_{ij}\xi-\Gamma^{k}_{ij}\partial_{k}\xi=- \Gamma^{\xi}_{ij},\] and \[-\Gamma^{\xi}_{\xi\xi}=\frac{\dot{W}}{2W},\,-\Gamma^{\xi}_{\xi\theta}=- \Gamma^{\xi}_{\theta\xi}=0,\,\text{and}\,-\Gamma^{\xi}_{\theta\theta}=-\frac{ 1}{2}W\frac{\partial G}{\partial\xi},\] where \(\dot{W}\) denote the derivative of \(W\) with respect to \(\xi\). Then we can compute the geodesic curvature of the level sets with respect to the normal vector given by \(\nabla\xi/\left|\nabla\xi\right|\) easily. Using the formula given in [21], we have \[\kappa(\xi,\theta)=\frac{\nabla^{2}\xi(\nabla\xi,\nabla\xi)-\left|\nabla\xi \right|^{2}\Delta\xi}{\left|\nabla\xi\right|^{3}}=-\frac{\dot{W}}{2W^{5/2}}+ \frac{2\xi}{W^{1/2}}. \tag{4.15}\] Thus, the geodesic curvature of the level sets only depends on \(\xi\), so they are curves of constant geodesic curvature. As the level set curves do not cross, we have that they are intersections of parallel planes with \(\mathbb{S}^{2}\). Then, it follows that \(\Gamma=\partial\Omega\cap\overline{\mathcal{U}}\) is a circle on the sphere and \(\xi=\bar{\xi}\) in a neighborhood of \(\Gamma\) in \(\mathcal{U}\), so we conclude that \(\xi=\bar{\xi}\) in \(\Omega\) since they are analytic functions. This concludes the proof of the theorem. 
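As a consistency check, and to see that the estimate of Theorem 4.1 is sharp, one may verify directly from the definitions (4.5) and (4.6) that equality \(W=W_{\bar{R}}\) holds identically when \((\Omega,\xi)\) is itself the model solution \((\bar{\Omega},\bar{\xi})\): in that case the pseudo-radial function \(\Psi\) coincides with the height coordinate \(r\) (since \(F(\bar{\xi}(r),r)=0\) by the very definition of \(F\)), and the explicit formula for \(\nabla\bar{\xi}\) recalled in the next subsection gives
\[
W=\left|\nabla\bar{\xi}\right|^{2}=\frac{1-r^{2}}{r^{2}}\left(\bar{\xi}-\frac{\bar{\alpha}}{1-r^{2}}\right)^{2}=\frac{1-\Psi^{2}}{\Psi^{2}}\left(\bar{\xi}-\frac{\bar{\alpha}}{1-\Psi^{2}}\right)^{2}=W_{\bar{R}}.
\]
This is consistent with the rigidity statement of Theorem 4.1: equality at a single point already forces \((\Omega,\xi)\equiv(\bar{\Omega},\bar{\xi})\).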
**Remark 4.4**.: _Note that we are using the sign convention for the curvature such that the geodesic curvature of \(\partial\mathbb{D}(\mathbf{n},s)\), where \(\mathbb{D}(\mathbf{n},s)\) is the geodesic disk in \(\mathbb{S}^{2}\) centered at \(\mathbf{n}\) of radius \(s<\pi/2\), with respect to the inner normal is positive. It is important to remark on this because in [1, 7] the authors used the opposite sign convention._ ### Curvature estimates Here, we are going to obtain curvature estimates of the level sets of our solution \(\xi\), taking advantage of the gradient estimates proved in Theorem 4.1. We recall that our model solution satisfies: \[\nabla\bar{\xi}=\frac{\sqrt{1-r^{2}}}{r}\left(\bar{\xi}-\frac{\bar{\alpha}}{1-r^{2}}\right)n\] and \[\nabla^{2}\bar{\xi}=\begin{pmatrix}-\bar{\xi}-\frac{\bar{\alpha}}{1-r^{2}}&0\\ 0&-\bar{\xi}+\frac{\bar{\alpha}}{1-r^{2}}\end{pmatrix},\] so using (4.15), we have that \[\kappa(r)=\begin{cases}-\frac{r}{\sqrt{1-r^{2}}},&\text{if }r\geq 0,\\ \frac{r}{\sqrt{1-r^{2}}},&\text{if }r<0,\end{cases}\] where the geodesic curvature is computed with respect to the inner orientation to \(\mathcal{U}\), i.e., using the conormal vector given by \(\nu=\nabla\xi/\left|\nabla\xi\right|\). In the next results, we estimate the geodesic curvature of the zero and top level sets of \(\xi\). Figure 4: Radial graph of the support function of the critical catenoid, that is, the graph \(\Sigma_{\xi_{0}}=\left\{(1+\xi_{0}(r))(\sqrt{1-r^{2}}\cos\theta,\sqrt{1-r^{2}}\sin\theta,r):\ r\in[-\bar{r},\bar{r}],\theta\in[0,2\pi)\right\}\), where \(\xi_{0}\) is the model solution defined in Section 2. The zero level sets appear in blue, the top level set in red, and the level set of height \(r=0.5\) in purple. The curvature of the level sets which are not in the top stratum is computed with respect to the conormal vector \(\nu=\nabla\xi/\left|\nabla\xi\right|\), which points towards the curve \(\gamma\) at each point of the domain. **Proposition 4.1**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) a connected component and \(\bar{R}=R(\mathcal{U})\) the expected critical height of the region. Let \((\bar{\Omega},\bar{\xi})\) be its associated model solution in \(\mathcal{U}\) and \(p\in\partial\Omega\) such that_ \[\left|\nabla\xi\right|^{2}(p)=\max_{\partial\Omega\cap\operatorname{cl}(\mathcal{U})}\left|\nabla\xi\right|^{2}. \tag{4.16}\] _Then, if \(\kappa(p)\) denotes the curvature of \(\partial\Omega\) at \(p\) with respect to the inner orientation to \(\mathcal{U}\), it holds_ \[\kappa(p)\leq-\frac{\bar{r}_{+}}{\sqrt{1-\bar{r}_{+}^{2}}}\quad\text{if }\overline{\tau}(\mathcal{U})\geq\tau_{0}\quad\text{and}\quad\kappa(p)\leq\frac{\bar{r}_{-}}{\sqrt{1-\bar{r}_{-}^{2}}}\quad\text{if }\overline{\tau}(\mathcal{U})<\tau_{0},\] _where \(\bar{r}_{+}>0\) and \(\bar{r}_{-}<0\) are defined in_ Remark 4.1. Proof.: Since the geodesic curvature of the level sets of \(\xi\) does not change after a change of scale, we will assume that \(\xi\) is normalized in the sense of Definition 4.2. Consider the curve \[\sigma(s)=\text{exp}_{p}\left(\frac{\nabla\xi}{\left|\nabla\xi\right|}(p)s\right),\quad s\in\mathbb{R},\] where \(p\in\text{cl}(\mathcal{U})\cap\partial\Omega\) is the point given by (4.16) and the exponential map is that of \(\mathbb{S}^{2}\). 
We compute the Taylor expansion of \(W\) along this curve: \[W(\sigma(s))=W(p)+\left\langle\nabla W,\nabla\xi/\left|\nabla\xi\right|\right\rangle(p)s+O(s^{2}),\quad s\in(-\epsilon,\epsilon),\] for some \(\epsilon>0\) small enough. Now, since \(\xi=0\), and hence \(\Delta\xi=-2\xi=0\), along \(\partial\Omega\), it is easy to check that \[\kappa(p)=\frac{\nabla^{2}\xi(\nabla\xi,\nabla\xi)-\left|\nabla\xi\right|^{2}\Delta\xi}{\left|\nabla\xi\right|^{3}}=\frac{\left\langle\nabla\left|\nabla\xi\right|^{2},\nabla\xi\right\rangle}{2\left|\nabla\xi\right|^{3}},\] so we have that \[W(\sigma(s))=W(p)+2\kappa(p)W(p)s+O(s^{2}),\quad s\in(-\epsilon,\epsilon).\] Next, we compute the Taylor expansion of \(W_{\bar{R}}\) along the same curve: \[W_{\bar{R}}(\sigma(s)) = W_{\bar{R}}(p)+\left\langle\nabla W_{\bar{R}}(p),\nabla\xi/\left|\nabla\xi\right|(p)\right\rangle s+O(s^{2})\] \[= W_{\bar{R}}(p)+\left(\pm\frac{2\Psi}{\sqrt{1-\Psi^{2}}}\sqrt{W_{\bar{R}}}-\frac{4\bar{\alpha}}{1-\Psi^{2}}\right)\sqrt{W}(p)s+O(s^{2}),\] where \(s\in[0,\delta)\) for some \(0<\delta\leq\epsilon\). Since \(W_{\bar{R}}(p)=W(p)\) at \(p\in\partial\Omega\) by (4.16) (this is clear using the definition of \(\overline{\tau}(\mathcal{U})\)), Theorem 4.1 and the expansions of \(W\) and \(W_{\bar{R}}\) near \(p\) imply \[\kappa\leq\pm\frac{\Psi}{\sqrt{1-\Psi^{2}}}\frac{\sqrt{W_{\bar{R}}}}{\sqrt{W}}-\frac{2\bar{\alpha}}{(1-\Psi^{2})\sqrt{W}}\text{ at }p,\] where the sign is positive if \(\overline{\tau}(\mathcal{U})\geq\tau_{0}\) and negative if \(\overline{\tau}(\mathcal{U})<\tau_{0}\). Finally, since \[\sqrt{W}(p)=\sqrt{W_{\bar{R}}}(p)=\pm\frac{\bar{\alpha}}{\bar{r}_{\pm}\sqrt{1-\bar{r}_{\pm}^{2}}}\text{ and }\Psi(p)=\bar{r}_{\pm},\] we conclude the following estimate for the curvature of the zero level set of \(\mathcal{U}\): \[\kappa(p)\leq\begin{cases}-\frac{\bar{r}_{+}}{\sqrt{1-\bar{r}_{+}^{2}}},&\text{if }\overline{\tau}(\mathcal{U})\geq\tau_{0}\\ \frac{\bar{r}_{-}}{\sqrt{1-\bar{r}_{-}^{2}}},&\text{if }\overline{\tau}(\mathcal{U})<\tau_{0}.\end{cases}\] **Proposition 4.2**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\subset\Omega\setminus\text{Max}(\xi)\) a connected component and \(\bar{R}=R(\mathcal{U})\) the expected critical height of the region. Let \((\bar{\Omega},\bar{\xi})\) be its associated model solution in \(\mathcal{U}\). Let \(\gamma\subset\text{cl}(\mathcal{U})\cap\text{Max}(\xi)\) be an analytic curve and \(p\in\gamma\). Then if \(\kappa(p)\) denotes the curvature of \(\gamma\) at \(p\) with respect to the inner orientation to \(\mathcal{U}\), it holds_ \[\kappa(p)\leq\frac{\bar{R}}{\sqrt{1-\bar{R}^{2}}}\quad\text{if }\overline{\tau}(\mathcal{U})\geq\tau_{0}\quad\text{and}\quad\kappa(p)\leq-\frac{\bar{R}}{\sqrt{1-\bar{R}^{2}}}\quad\text{if }\overline{\tau}(\mathcal{U})<\tau_{0}.\] Proof.: As in the proof of Proposition 4.1, we assume again that the solution \(\xi\) is normalized. To estimate the geodesic curvature of \(\gamma\) we use [7, Theorem 3.1]. If we denote \[\rho(x)=\text{dist}(x,\gamma),\quad x\in\mathcal{U},\] then \[\xi=\xi_{\text{max}}-\frac{\varphi(\xi_{\text{max}})}{2}\rho^{2}-\frac{\varphi(\xi_{\text{max}})}{6}\kappa\,\rho^{3}+O(\rho^{4}),\] where \(\varphi\) is such that \(\Delta\xi=-\varphi(\xi)\). Replacing the explicit terms in the previous expansion, we obtain: \[\xi=\frac{\bar{\alpha}}{1-\bar{R}^{2}}-\frac{\bar{\alpha}}{1-\bar{R}^{2}}\rho^{2}-\frac{\bar{\alpha}}{3(1-\bar{R}^{2})}\kappa\,\rho^{3}+O(\rho^{4})\] in a certain neighborhood of \(\gamma\) inside \(\mathcal{U}\). 
Computing the gradient of the previous expression, we get \[\nabla\xi=-\frac{\bar{\alpha}}{1-\bar{R}^{2}}\rho\left(2+\kappa\,\rho\right) \partial_{\rho}+O(\rho^{3}),\] and then we conclude that \[W=\frac{4\bar{\alpha}^{2}\rho^{2}}{(1-\bar{R}^{2})^{2}}\left(1+\kappa\,\rho \right)+O(\rho^{4}). \tag{4.17}\] We want to obtain an expansion of \(W_{\bar{R}}\) as the previous one. To achieve this, we use the Taylor expansions obtained in the proof of Lemma 4.1. Set \(z=\Psi-\bar{R}\). Observe that from (4.7) it is easy to check that \[\frac{\xi_{\text{max}}-\xi}{z^{2}}=\frac{\xi_{\text{max}}-\xi}{(\Psi-\bar{R}) ^{2}}\to\text{constant, as }\Psi\to\bar{R},\] so we conclude that \[z=\pm\frac{1-\bar{R}^{2}}{\sqrt{\bar{\alpha}}}\sqrt{\xi_{\text{max}}-\xi}+O\left( \xi_{\text{max}}-\xi\right)^{3/2}, \tag{4.18}\] where the sign of the first term is positive if \(\overline{\tau}(\mathcal{U})\geq\tau_{0}\) and negative otherwise. We will need the third order term in the expansion (4.8), so we compute the third derivative of \(W_{\bar{R}}\). A straightforward computation shows \[\frac{\partial^{3}W_{\bar{R}}}{\partial\Psi^{3}}(\bar{R})=-12\bar{R}\frac{4 \bar{\alpha}^{2}}{(1-\bar{R}^{2})^{4}}+6(1-\bar{R}^{2})\left(\frac{2\bar{ \alpha}}{(1-\bar{R}^{2})^{2}}\frac{8\bar{\alpha}\bar{R}}{(1-\bar{R}^{2})^{3}} \right)=\frac{48\bar{R}\bar{\alpha}^{2}}{(1-\bar{R}^{2})^{4}},\] so the expansion of \(W_{\bar{R}}(\Psi)\) at \(\Psi=\bar{R}\) up to third order is given by \[W_{\bar{R}}=\frac{4\bar{\alpha}^{2}}{(1-\bar{R}^{2})^{3}}z^{2}+\frac{8\bar{R} \bar{\alpha}^{2}}{(1-\bar{R}^{2})^{4}}z^{3}+O(z^{4}).\] Substituting (4.18) in the previous expansion we get \[W_{\bar{R}}=\frac{4\bar{\alpha}}{1-\bar{R}^{2}}(\xi_{\text{max}}-\xi)\pm\frac {8\bar{R}\bar{\alpha}^{1/2}}{1-\bar{R}^{2}}(\xi_{\text{max}}-\xi)^{3/2}+O(\xi_ {\text{max}}-\xi)^{2},\] using again [7, Theorem 3.1] and simplifying we conclude that \[W_{\bar{R}}=\frac{4\bar{\alpha}^{2}}{(1-\bar{R}^{2})^{2}}\rho^{2}\left(1+ \left(\frac{1}{3}\kappa\pm\frac{2}{3}\frac{\bar{R}}{\sqrt{1-R^{2}}}\right) \rho\right)+O(\rho^{7/2}). \tag{4.19}\] Finally, we proceed as at the end of the proof of Proposition 4.1. By Theorem 4.1 we have that \(W\leq W_{\bar{R}}\) in \(\mathcal{U}\), comparing the expansions near \(\gamma\), (4.17) and (4.19), we have that \[\kappa(p)\leq\frac{\kappa(p)}{3}\pm\frac{2}{3}\frac{\bar{R}}{\sqrt{1-R^{2}}},\] that is, \[\kappa(p)\leq\begin{cases}\frac{\bar{R}}{\sqrt{1-\bar{R}^{2}}},&\text{if } \overline{\tau}(\mathcal{U})\geq\tau_{0}\\ -\frac{\bar{R}}{\sqrt{1-\bar{R}^{2}}},&\text{if }\overline{\tau}(\mathcal{U})< \tau_{0}.\end{cases}\] At this point, we can announce the following result: **Theorem 4.2**.: _Let \((\Omega,\xi)\) be a solution to (3.2). 
Assume that there exists an analytic closed curve \(\gamma\subset\text{\rm Max}(\xi)\), let \(\Omega_{1}^{\gamma},\Omega_{2}^{\gamma}\subset\Omega\) be the partition with respect to \(\gamma\) and set \(R_{1}=R(\Omega_{1}^{\gamma})\) and \(R_{2}=R(\Omega_{2}^{\gamma})\) the expected critical heights associated with these regions._ * _If_ \(\overline{\tau}(\Omega_{1}^{\gamma})\leq\tau_{0}\)_, then_ \(\overline{\tau}(\Omega_{2}^{\gamma})\geq\tau_{0}\) _and for each_ \(p\in\gamma\) _it holds_ \[\frac{R_{1}}{\sqrt{1-R_{1}^{2}}}\leq\kappa(p)\leq\frac{R_{2}}{\sqrt{1-R_{2}^{2}}},\] (4.20) _where_ \(\kappa(p)\) _is the geodesic curvature of_ \(\gamma\) _at_ \(p\) _with respect to the normal vector pointing to the region_ \(\Omega_{2}^{\gamma}\)_._ * _If_ \(\overline{\tau}(\Omega_{1}^{\gamma})=\overline{\tau}(\Omega_{2}^{\gamma})=\tau_{0}\)_, then_ \(\kappa=0\) _along_ \(\gamma\)_._ _In particular, \(R_{2}\geq R_{1}\) and equality holds if, and only if, \((\Omega,\xi)\equiv(\bar{\Omega},\bar{\xi})\), where \((\bar{\Omega},\bar{\xi})\) is the associated model solution given in Definition 4.1._ Proof.: First, since we are assuming that \(\overline{\tau}(\Omega_{1}^{\gamma})\leq\tau_{0}\), by Proposition 4.2 we conclude that \[0\leq\frac{R_{1}}{\sqrt{1-R_{1}^{2}}}\leq\kappa(p)\quad\forall p\in\gamma,\] so, in particular, \(\kappa\geq 0\) with respect to the normal orientation pointing to \(\Omega_{2}^{\gamma}\). Then \(\overline{\tau}(\Omega_{2}^{\gamma})\geq\tau_{0}\), because otherwise, using Proposition 4.2 with this region, we would conclude that \(\kappa<0\) along \(\gamma\), a contradiction. Applying Proposition 4.2 now to \(\Omega_{2}^{\gamma}\), we also get the upper bound \(\kappa(p)\leq R_{2}/\sqrt{1-R_{2}^{2}}\) with respect to the same orientation, and thus we obtain the chain of inequalities (4.20); in particular, \(R_{2}\geq R_{1}\). Assume now that \(R_{1}=R_{2}=\bar{R}\). Then, by (4.20), \(\gamma\) must have constant geodesic curvature, so it must be a parallel at height \(\bar{R}\). Let us suppose that \(\xi\) is normalized. Then, we get that \((\Omega,\xi)\) is a model solution as follows: take a tubular neighborhood \(\mathcal{N}\) of \(\gamma\) of radius \(\epsilon<\min\,(-\bar{r}_{-},\bar{r}_{+})\), then \(\xi\) is a solution to \[\left\{\begin{aligned} \Delta\xi+2\xi=0&\quad\text{in}&\quad\mathcal{N},\\ \xi=\xi_{\text{max}}&\quad\text{along}&\quad\gamma,\\ \frac{\partial\xi}{\partial\nu}=0&\quad\text{along}&\quad\gamma.\end{aligned}\right.\] Since \(\gamma\) is non-characteristic, it follows from the Cauchy-Kovalevskaya theorem (cf. [30]) that \(\xi\) is the unique solution to the previous problem. But the associated model solution \((\bar{\Omega},\bar{\xi})\) is also a solution to the previous problem, so we conclude that \(\xi\equiv\bar{\xi}\) in \(\mathcal{N}\) (up to a rotation if necessary). It is clear then that it must be \((\Omega,\xi)\equiv(\bar{\Omega},\bar{\xi})\) by analyticity. Finally, consider the case \(\overline{\tau}(\Omega_{1}^{\gamma})=\overline{\tau}(\Omega_{2}^{\gamma})=\tau_{0}\). It follows from Proposition 4.2 that \(\kappa\leq 0\) with respect to the inner and to the outer normal to \(\Omega_{2}^{\gamma}\) simultaneously, so it must be \(\kappa=0\) along \(\gamma\). Then we get that it must be \((\Omega,\xi)\equiv(\Omega_{0},\xi_{0})\) using the same argument as in the previous case. ### Length estimates We show here a relation between the length of the zero and top level sets: **Proposition 4.3**.: _Let \((\Omega,\xi)\) be a solution to (3.2), \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) a connected component and \(\bar{R}=R(\mathcal{U})\) the expected critical height of the region. 
Suppose that \(\operatorname{cl}(\mathcal{U})\cap\operatorname{Max}(\xi)=\gamma^{\mathcal{U}}\) and \(\operatorname{cl}(\mathcal{U})\cap\partial\Omega=\Gamma^{\mathcal{U}}\) are sets of analytic closed curves. Then_ \[\frac{\left|\gamma^{\mathcal{U}}\right|}{\sqrt{1-\bar{R}^{2}}}\leq\frac{\left| \Gamma^{\mathcal{U}}\right|}{\sqrt{1-r_{+}^{2}}}\quad\text{if }\overline{\tau}(\mathcal{U}) \geq\tau_{0}\quad\text{and}\quad\frac{\left|\gamma^{\mathcal{U}}\right|}{\sqrt {1-\bar{R}^{2}}}\leq\frac{\left|\Gamma^{\mathcal{U}}\right|}{\sqrt{1-r_{-}^{2} }}\quad\text{if }\overline{\tau}(\mathcal{U})<\tau_{0}, \tag{4.21}\] _where \(\left|\gamma^{\mathcal{U}}\right|\) and \(\left|\Gamma^{\mathcal{U}}\right|\) denotes the sum of the lengths of each set of curves._ Proof.: We follow the proof of [1, Proposition 5.5]. Assume that \(\xi\) is normalized in the sense of Definition 4.2. Given \(\epsilon>0\), set \(\mathcal{U}_{\epsilon}\) as in (3.13) (as always, we suppose that \(\xi_{\text{max}}-\epsilon\) is regular). Then, by the divergence theorem, it holds: \[\int_{\mathcal{U}_{\epsilon}}\frac{-2\xi}{(1-\Psi^{2})\xi-\bar{ \alpha}}\,dA =\int_{\mathcal{U}_{\epsilon}}\frac{\Delta\xi}{(1-\Psi^{2})\xi- \bar{\alpha}}\,dA\] \[=\int_{\mathcal{U}_{\epsilon}}\left\langle\frac{\nabla\xi}{(1- \Psi^{2})\xi-\bar{\alpha}},\nabla\xi\right\rangle\,dA+\int_{\partial\mathcal{U} _{\epsilon}}\frac{\left\langle\nabla\xi,\nu\right\rangle}{(1-\Psi^{2})\xi- \bar{\alpha}}\,ds,\] where \(\Psi\) is the pseudo-radial function given in Definition 4.3, \(\bar{\alpha}=\alpha(\bar{R})\) as in Proposition 2.1 and \(\nu\) is the inner unit normal to \(\mathcal{U}_{\epsilon}\). Thus, on the one hand \[\nabla\frac{\Psi}{(1-\Psi^{2})\xi-\bar{\alpha}}=\frac{2\Psi^{3}(1-\Psi^{2}) \xi}{((1-\Psi^{2})\xi-\bar{\alpha})^{3}}\nabla\xi\] and on the other hand \[\frac{((1-\Psi^{2})\xi-\bar{\alpha})^{3}}{2\Psi^{3}(1-\Psi^{2})\xi}\cdot\frac {2\Psi\xi}{(1-\Psi^{2})\xi-\bar{\alpha}}=\frac{\left((1-\Psi^{2})\xi-\bar{ \alpha}\right)^{2}}{\Psi^{2}(1-\Psi^{2})}=W_{\bar{R}},\] so we obtain the identity \[\int_{\mathcal{U}_{\epsilon}}\frac{2\Psi^{3}(1-\Psi^{2})\xi}{((1-\Psi^{2}) \xi-\bar{\alpha})^{3}}(W-W_{\bar{R}})\,dA=-\int_{\Gamma_{\mathcal{U}}}\frac{ \left|\nabla\xi\right|\Psi}{(1-\Psi^{2})\xi-\bar{\alpha}}\,ds+\int_{\gamma_{ \epsilon}^{\mathcal{U}}}\frac{\left|\nabla\xi\right|\Psi}{(1-\Psi^{2})\xi- \bar{\alpha}}\,ds, \tag{4.22}\] where \(\gamma_{\epsilon}^{\mathcal{U}}=\{p\in\mathcal{U}:\;\xi(p)+\epsilon=\xi_{ \text{max}}\}\). It is clear that for \(\epsilon>0\) small enough, \(\gamma_{\epsilon}^{\mathcal{U}}\) is a set of analytic curves, and \(\gamma_{\epsilon}^{\mathcal{U}}\to\gamma^{\mathcal{U}}\) when \(\epsilon\to 0\). Next, we analyze both sides of the previous identity. We begin with the right-hand side. 
Since \(\Psi=r_{\pm}\) and \(\max_{\Gamma^{\mathcal{U}}}\left|\nabla\xi\right|=\pm\bar{\alpha}/\left(r_{\pm}\sqrt{1-r_{\pm}^{2}}\right)\), it is clear that \[\int_{\Gamma^{\mathcal{U}}}\frac{\left|\nabla\xi\right|\Psi}{(1-\Psi^{2})\xi-\bar{\alpha}}\,ds=\mp\frac{1}{\sqrt{1-r_{\pm}^{2}}}\int_{\Gamma^{\mathcal{U}}}\frac{\left|\nabla\xi\right|}{\max_{\Gamma^{\mathcal{U}}}\left|\nabla\xi\right|}\begin{cases}\geq-\frac{\left|\Gamma^{\mathcal{U}}\right|}{\sqrt{1-r_{+}^{2}}}&\text{if}\quad\overline{\tau}(\mathcal{U})\geq\tau_{0},\\ \leq\frac{\left|\Gamma^{\mathcal{U}}\right|}{\sqrt{1-r_{-}^{2}}}&\text{if}\quad\overline{\tau}(\mathcal{U})<\tau_{0}.\end{cases} \tag{4.23}\] Also, note that by the Taylor expansions given in (4.17) and (4.19) we have \[\lim_{p\to\gamma^{\mathcal{U}}}\frac{W}{W_{\bar{R}}}=1.\] Since \[\frac{\left|\nabla\xi\right|\Psi}{(1-\Psi^{2})\xi-\bar{\alpha}}=\mp\frac{1}{\sqrt{1-\Psi^{2}}}\sqrt{\frac{W}{W_{\bar{R}}}},\] (as always, the upper sign corresponds to the case \(\overline{\tau}(\mathcal{U})\geq\tau_{0}\) and the lower sign otherwise) we conclude that \[\lim_{\epsilon\to 0^{+}}\int_{\gamma_{\epsilon}^{\mathcal{U}}}\frac{\left|\nabla\xi\right|\Psi}{(1-\Psi^{2})\xi-\bar{\alpha}}\,ds=\mp\frac{\left|\gamma^{\mathcal{U}}\right|}{\sqrt{1-\bar{R}^{2}}}. \tag{4.24}\] Now, we look at the left-hand side of (4.22). Note that \[\frac{\Psi^{3}(1-\Psi^{2})\xi}{((1-\Psi^{2})\xi-\bar{\alpha})^{3}}=\mp\frac{1}{W_{\bar{R}}^{3/2}\sqrt{1-\Psi^{2}}},\] therefore, using Theorem 4.1, we conclude that \[\int_{\mathcal{U}_{\epsilon}}\frac{2\Psi^{3}(1-\Psi^{2})\xi}{((1-\Psi^{2})\xi-\bar{\alpha})^{3}}(W-W_{\bar{R}})\,dA\begin{cases}\geq 0&\text{if}\quad\overline{\tau}(\mathcal{U})\geq\tau_{0},\\ \leq 0&\text{if}\quad\overline{\tau}(\mathcal{U})<\tau_{0}.\end{cases} \tag{4.25}\] Finally, combining (4.23), (4.24) and (4.25) we obtain the inequalities (4.21). ## 5 Overdetermined solutions Let \(\Omega\subset\mathbb{S}^{2}\) be a domain of finite type with \(k\geq 2\) boundary components and consider the overdetermined elliptic problem \[\left\{\begin{aligned} \Delta\xi+2\xi=0&\text{in}&\Omega,\\ \xi>0&\text{in}&\Omega,\\ \xi=0&\text{along}&\partial\Omega,\\ |\nabla\xi|=b_{i}>0&\text{along}&\Gamma_{i},\quad\text{for}\,i\in\{1,\dots,k\},\end{aligned}\right. \tag{5.1}\] where we have that \(\partial\Omega=\bigcup_{i=1}^{k}\Gamma_{i}\) using the notation introduced in Definition 3.1. **Remark 5.1**.: _When \(\Omega\) is a domain of finite type with \(k=1\), i.e., a topological disk, a solution \(\xi\) to (5.1) must be rotationally symmetric and \(\Omega\) a geodesic disk by [16, 53], that is, \((\xi,\Omega)\equiv(\xi_{1},\Omega_{1})\)._ In this section, we will classify the solutions \((\Omega,\xi)\) to (5.1) which have infinitely many maximum points. To do so, we will use the estimates computed in Section 4 and a length estimate that uses the overdetermined condition and Proposition 4.1. **Proposition 5.1**.: _Let \((\Omega,\xi)\) be a solution to (5.1) and \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) an annular connected component, i.e., homeomorphic to an annulus, such that \(\partial\mathcal{U}=\Gamma\cup\gamma\), \(\Gamma\subset\partial\Omega\) and \(\gamma\subset\operatorname{Max}(\xi)\). 
Then, it holds_ \[|\Gamma|\leq 2\pi\sqrt{1-r_{+}^{2}}\quad\text{if }\overline{\tau}(\mathcal{U})\geq\tau_{0}\quad\text{and}\quad|\Gamma|\leq 2\pi\sqrt{1-r_{-}^{2}}\quad\text{if }\overline{\tau}(\mathcal{U})<\tau_{0}.\] Proof.: First, observe that Proposition 4.1 and the overdetermined condition \(\xi=0\) and \(|\nabla\xi|=b_{i}>0\) along each connected component \(\Gamma_{i}\subset\partial\Omega\) imply that \[|\kappa(p)|\geq\left|\frac{\bar{r}_{\pm}}{\sqrt{1-\bar{r}_{\pm}^{2}}}\right|\text{ for all }p\in\Gamma,\] where \(\bar{r}_{+}>0\) and \(\bar{r}_{-}<0\) are defined in Remark 4.1 and \(\kappa\) is the geodesic curvature of \(\Gamma\). Then, by the Blaschke turning theorem (generalized to space forms in [27]) we conclude that there exists \(p\in\mathbb{S}^{2}\) such that \(\Gamma\) is contained in the geodesic disk \(D:=\mathbb{D}(p,|r_{\pm}|)\). Now, given a closed convex curve \(\sigma:I\to\mathbb{S}^{2}\), Crofton's formula (cf. [37]) tells us that \[\operatorname{Length}(\sigma)=\frac{1}{4}\int_{\mathbb{S}^{2}}n_{\sigma}(a)\,da,\] where \(n_{\sigma}(a)\) counts the number of points at which the plane \(P_{a}:=\langle\{a\}\rangle^{\perp}\), \(a\in\mathbb{S}^{2}\), intersects the curve \(\sigma\). Since any \(P_{a}\) intersects an embedded strictly convex closed curve in the sphere at most at two points and \(\Gamma\subset D\), the result follows. Recall that \(\Gamma\) is strictly convex with the correct orientation. Now, we have the ingredients to prove our main classification result: **Theorem 5.1**.: _Let \((\Omega,\xi)\) be a solution to the overdetermined problem (5.1), where \(\Omega\) is a \(\mathcal{C}^{2}\)-domain of finite type with \(k\geq 2\) boundary components. Suppose that \(\operatorname{Max}(\xi)\) contains a closed curve \(\gamma\) such that the partition with respect to \(\gamma\), \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\), contains a component, say \(\Omega_{1}^{\gamma}\), that is a topological annulus. Then, \(k=2\) and \(\Omega\) is rotationally symmetric. In particular, there exists an \(R\in[0,1)\) such that \((\Omega,\xi)=(\Omega_{R},\xi_{R})\) up to a rotation and a dilation._ Proof.: By hypothesis, \(\Omega_{1}^{\gamma}\) is a topological annulus and, up to rearranging the indexes if necessary, \(\partial\Omega_{1}^{\gamma}=\Gamma_{1}\cup\gamma\), where \(\Gamma_{1}=\partial\Omega\cap\partial\Omega_{1}^{\gamma}\). It follows from (4.21) and Proposition 5.1 that \[|\gamma|\leq\sqrt{1-R_{1}^{2}}\,\frac{|\Gamma_{1}|}{\sqrt{1-r_{\pm}^{2}}}\leq 2\pi\sqrt{1-R_{1}^{2}},\] where \(R_{1}=R(\Omega_{1}^{\gamma})\in[0,1)\) is the expected critical height of the region \(\Omega_{1}^{\gamma}\). Then, [37, Proposition 9.54] implies that \(\gamma\) is either an equator or it is contained in an open hemisphere, denoted by \(\mathbb{S}_{+}^{2}\). In the first case, we conclude that \((\Omega,\xi)=(\Omega_{0},\xi_{0})\) up to a rotation and a dilation applying the Cauchy-Kovalevskaya theorem. In the second case, since \(\gamma\) is not contractible in \(\Omega\) by Lemma 3.1, either \(\Omega_{1}^{\gamma}\subset\mathbb{S}_{+}^{2}\) or \(\Omega_{2}^{\gamma}\subset\mathbb{S}_{+}^{2}\). In any case, we can apply the moving plane method in \(\mathbb{S}_{+}^{2}\) (a slight modification of the arguments in [32, 43]) to the component \(\Omega_{i}^{\gamma}\subset\mathbb{S}_{+}^{2}\) to conclude that \(\Omega\) and \(\xi\) are rotationally symmetric. 
Since the model solutions are the only solutions to (3.2) with this property, this proves the result. **Remark 5.2**.: _Let \(\Omega\) be a domain of finite type with \(k\geq 2\) and \(\xi\) a solution to (5.1). A simple topological argument and Lemma 3.1 imply that, if the top level set of \(\xi\) contains \(k-1\) curves \(\gamma_{i}\), \(i=1,\ldots,k-1\), then \(\Omega\setminus\bigcup_{i=1}^{k-1}\gamma_{i}\) contains an annular component satisfying the conditions of Theorem 5.1. Hence, \((\Omega,\xi)=(\Omega_{R},\xi_{R})\) up to a rotation and a dilation._ In the case that \(\Omega\) is a topological annulus, Theorem 5.1 and Remark 5.2 imply: **Theorem A:**_Let \((\Omega,\xi)\) be a solution to (5.1), \(\Omega\) an annular domain (i.e., \(k=2\)). Suppose that \(\xi\) has infinitely many maximum points inside \(\Omega\). Then, there exists \(R\in[0,1)\) such that \((\Omega,\xi)\equiv(\Omega_{R},\xi_{R})\)._ Figure 5: A domain of finite type in the unit sphere. The domain is \(\Omega=\mathbb{S}^{2}\setminus\bigcup_{i=1}^{3}D_{i}\), where \(D_{1},D_{2}\) and \(D_{3}\) are the blue regions. We have that \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\) is a partition of \(\Omega\), and the region \(\Omega_{1}^{\gamma}\) has the topology of an annulus. If \(\xi\) solves (5.1) and \(\gamma\subset\text{Max}(\xi)\) then \((\Omega,\xi)\) satisfies the hypotheses of Theorem 5.1. We conclude this section by noting that we can replace the topological condition on \(\Omega\setminus\gamma\) in Theorem 5.1 by a restriction on the NWSS of the solution. In fact, using Theorem 4.2 we can obtain the following result: **Theorem 5.2**.: _Let \((\Omega,\xi)\) be a solution to the overdetermined problem (5.1), where \(\Omega\) is of finite type with \(k\geq 2\). Suppose that \(\xi\) has infinitely many maximum points and that at least one of the connected components of \(\Omega\setminus\operatorname{Max}(\xi)\) has NWSS less than or equal to \(\tau_{0}\). Then, \(\Omega\) is rotationally symmetric. In particular, there exists \(R\in[0,1)\) such that \((\Omega,\xi)\equiv(\Omega_{R},\xi_{R})\)._ Proof.: Let \(\mathcal{U}\subset\Omega\setminus\operatorname{Max}(\xi)\) be a connected component such that \(\overline{\tau}(\mathcal{U})\leq\tau_{0}\), which exists by hypothesis, and set \(\partial\mathcal{U}\cap\operatorname{Max}(\xi)=\bigcup_{j=1}^{m}\gamma_{j}\), the boundary components of \(\mathcal{U}\) in the top level set. Theorem 4.2 implies that the geodesic curvature of \(\gamma_{j}\), with respect to the inner orientation to \(\mathcal{U}\), is non-positive. Therefore, each \(\gamma_{j}\) is contained in a closed hemisphere and hence, there exists, at least, a connected component \(\gamma\subset\partial\mathcal{U}\cap\operatorname{Max}(\xi)\) such that one of the components, \(\Omega\setminus\gamma=\Omega_{1}^{\gamma}\cup\Omega_{2}^{\gamma}\), of the partition with respect to \(\gamma\), say \(\Omega_{1}^{\gamma}\), is contained in a closed hemisphere, denoted by \(\operatorname{cl}(\mathbb{S}_{+}^{2})\). We distinguish two cases. First, assume that the geodesic curvature of \(\gamma\) vanishes identically. In this case, the Cauchy-Kovalevskaya theorem implies that \((\xi,\Omega)=(\xi_{0},\Omega_{0})\), up to a rotation and scaling. Second, if the geodesic curvature does not vanish identically, we can move \(\operatorname{cl}(\mathbb{S}_{+}^{2})\) slightly to obtain an open hemisphere, still denoted by \(\mathbb{S}_{+}^{2}\), such that \(\operatorname{cl}(\Omega_{1}^{\gamma})\subset\mathbb{S}_{+}^{2}\). 
Hence, again by the moving plane method using the overdetermined condition, we infer that \((\xi,\Omega)\equiv(\xi_{R},\Omega_{R})\) for some \(R\in(0,1)\). ## 6 Geometric application In this section, we will exploit the correspondence of solutions to (5.1) and free boundary minimal surfaces explained in the introduction. Let \(\mathbb{B}^{3}(R)\) denote the open Euclidean ball of radius \(R>0\) centered at the origin, \(\partial\mathbb{B}^{3}(R)=\mathbb{S}^{2}(R)\). For simplicity, we also set \(\mathbb{B}^{3}=\mathbb{B}^{3}(1)\) and \(\mathbb{S}^{2}\equiv\mathbb{S}^{2}(1)\). **Definition 6.1**.: _Let \(\Sigma\subset\mathbb{R}^{3}\) be an open immersed minimal surface with boundary. We will say that \(\Sigma\) has free boundaries if each boundary component of \(\Sigma\) meets orthogonally a sphere centered at the origin, possibly of different radii. If \(\Sigma\subset\mathbb{B}^{3}\) and \(\partial\Sigma\subset\mathbb{S}^{2}\) then \(\Sigma\) is said to be a free boundary minimal surface in the unit ball._ We recall the correspondence between minimal surfaces with free boundaries and the solutions to (5.1) (cf. [53, Proposition 2.1]). **Proposition 6.1**.: _Let \(\Omega\subset\mathbb{S}^{2}\) be a \(\mathcal{C}^{2}\)-domain in the sphere. Suppose that \(\Omega\) is not a topological disk and there exists a nonzero function \(\xi\in\mathcal{C}^{2}(\Omega)\cap\mathcal{C}^{1}(\operatorname{cl}(\Omega))\) solution to the problem_ \[\left\{\begin{aligned} \Delta\xi+2\xi=0&\text{ in }&\Omega,\\ \xi=0&\text{ along }&\partial\Omega,\\ \left|\nabla\xi\right|^{2}=b_{i}^{2}&\text{ along }&\Gamma_{i}\in\pi_{0}(\partial\Omega),\,i\in\{1,\dots,k\}\,,\end{aligned}\right. \tag{6.1}\] _where \(\nu\) is the exterior unit normal to \(\Omega\) and \(b_{i}\) is a constant for each \(i\in\{1,\ldots,k\}\). Then, the map:_ \[X_{\xi}: \Omega \longrightarrow\mathbb{R}^{3}\] \[z \mapsto X_{\xi}(z):=\nabla^{\mathbb{S}^{2}}\xi(z)+\xi(z)\cdot z.\] _defines a branched minimal surface in \(\mathbb{R}^{3}\) with free boundaries, where each boundary component \(X_{\xi}(\Gamma_{i})\) lies in the sphere \(\mathbb{S}^{2}\left(|b_{i}|\right)\)._ _Conversely, let \(\Sigma\) be a minimal surface with free boundaries, \(\partial\Sigma=\bigcup_{i=1}^{k}\zeta_{i}\), and injective Gauss map \(N:\Sigma\to\mathbb{S}^{2}\). Set \(u(p):=\left\langle p,N(p)\right\rangle\) its support function. Then_ \[\xi(z):=(u\circ N^{-1})(z)=\left\langle N^{-1}(z),z\right\rangle,\quad\forall z \in N(\Sigma)=\Omega \tag{6.2}\] _satisfies (6.1), where \(\partial\Omega=\bigcup_{i=1}^{k}\Gamma_{i}\), \(N(\zeta_{i})=\Gamma_{i}\), and \(|b_{i}|\) is the radius of the sphere in which \(\zeta_{i}\in\pi_{0}(\partial\Sigma)\) is contained._ From now on, up to a dilation in \(\mathbb{R}^{3}\), \(\Sigma\subset\mathbb{B}^{3}\) will denote an embedded minimal annulus with free boundaries, \(\partial\Sigma=\zeta_{1}\cup\zeta_{2}\), such that \(\zeta_{1}\subset\mathbb{S}^{2}\) and \(\zeta_{2}\subset\mathbb{S}^{2}(\tilde{r})\) for some \(0<\tilde{r}\leq 1\). We will always assume that the boundaries of \(\Sigma\) intersect the spheres _from the inside_, i.e., let \(\nu:\partial\Sigma\to\mathbb{S}^{2}\) the exterior conormal to \(\Sigma\) then, \(\left\langle p,\nu(p)\right\rangle>0\) for all \(p\in\partial\Sigma\). Now, we are ready to prove Theorem B. We rewrite it here using the notation introduced so far. 
**Theorem B:**_Let \(\Sigma\subset\mathbb{B}^{3}\) be an embedded minimal annulus with free boundaries, \(\partial\Sigma=\zeta_{1}\cup\zeta_{2}\), such that \(\zeta_{1}\subset\mathbb{S}^{2}\) and \(\zeta_{2}\subset\mathbb{S}^{2}(\tilde{r})\) for some \(0<\tilde{r}\leq 1\). Suppose that its support function has infinitely many critical points. Then, there exists \(R\in[0,1)\) such that \(\Sigma=C_{R}\), up to a rotation around the origin, where \(C_{R}\) is one of the model catenoids defined in (2.9)._ Proof of Theorem B.: First, since \(\Sigma\) is minimal, the Hopf differential is holomorphic and so the interior zeros are isolated and of negative index. Moreover, since \(\text{cl}(\Sigma)\) intersects some sphere at a constant angle, Joachimstahl's theorem [10, p. 152] implies that each boundary component is a line of curvature, and so the zeros at the boundary are also isolated and of negative index. Since the zeros of the Hopf differential correspond to umbilic points we have that the index at any umbilical point is negative. Using that \(\Sigma\) is a topological annulus, the Poincare-Hopf index theorem asserts that \(\Sigma\) is either totally umbilical, which is not possible, or there are no umbilic points (cf. [24, 38] for details). Next, since there are no umbilic points, the Gaussian curvature \(K\) is strictly negative in \(\text{cl}(\Sigma)\) and the boundary curves \(\zeta_{1}\subset\mathbb{S}^{2}\) and \(\zeta_{2}\subset\mathbb{S}^{2}(\tilde{r})\) are strictly convex spherical curves. This also implies that the Gauss map \(N:\text{cl}(\Sigma)\to\mathbb{S}^{2}\) is a local diffeomorphism. Moreover, \(N:\zeta_{i}\to N(\zeta_{i})=\Gamma_{i}\), \(i=1,2\), is one to one. Consider the projection map \(\varrho:\mathbb{S}^{2}(\tilde{r})\to\mathbb{S}^{2}\) given by \(\varrho(p)=p/|p|\). We next show: **Claim A:**_Let \(\Sigma\) be an embedded minimal annulus with free boundaries. Then \(\mathbb{S}^{2}\setminus(\zeta_{1}\cup\varrho(\zeta_{2}))=A\cup B_{1}\cup \varrho(B_{2})\), where \(A\) is a topological annulus and \(B_{1}\subset\mathbb{S}^{2}\) and \(\varrho(B_{2})\), \(B_{2}\subset\mathbb{S}^{2}(\tilde{r})\), are disjoint topological disks with boundaries \(\zeta_{1}\) and \(\varrho(\zeta_{2})\) respectively. In addition, \(B_{1}\) and \(\varrho(B_{2})\) are contained in two different closed hemispheres of \(\mathbb{S}^{2}\)._ Proof.: Let \(F_{i}=\int_{\zeta_{i}}\nu\) be the flux along \(\zeta_{i}\) (for \(i=1,2\)). Then, since \(\Sigma\) is a minimal annulus, we have \(F_{1}=-F_{2}\). But \(\Sigma\) has free boundaries, then it follows that \(\nu(p)=p/|p|\) for each \(p\in\zeta_{i}\), \(i=1,2\), so we obtain that \(\zeta_{1}\) and \(\varrho(\zeta_{2})\) are convex curves contained in \(\mathbb{S}^{2}\) with opposite center mass. On the other hand, since the convex hull of \(\zeta_{1}\) (resp. \(\varrho(\zeta_{2})\)), \(\operatorname{conv}(\zeta_{1})\) (resp. \(\operatorname{conv}(\varrho(\zeta_{2}))\)), is given by the intersection (see [51]) of all closed halfspaces containing \(\zeta_{1}\) (resp. \(\varrho(\zeta_{2})\)), if \(H\) is a closed halfspace of equation \(\langle q,v_{0}\rangle\leq r_{0}\), for \(v_{0}\in\mathbb{R}^{3}\setminus\{0\},r_{0}\in\mathbb{R}\), with \(\zeta_{1}\subset H\) (resp. \(\varrho(\zeta_{2})\subset H\)) then it is clear that \(F_{1}\in H\) (resp. \(F_{2}\in H\)). Moreover, \(F_{i}\in\partial H\) if and only if \(\zeta_{i}\subset\partial H\). Therefore, \(F_{1}\) (resp. 
\(F_{2}\)) is a non zero vector with \(F_{1}\in\operatorname{conv}(\zeta_{1})\setminus\zeta_{1}\) (resp. \(F_{2}\in\operatorname{conv}(\varrho(\zeta_{2}))\setminus\varrho(\zeta_{2})\)). This fact proves that both \(\zeta_{1}\) and \(\varrho(\zeta_{2})\) cannot be contained in a common _closed_ hemisphere of \(\mathbb{S}^{2}\), otherwise \(F_{1}+F_{2}\neq 0\). Thus the result is proven. Let \(B_{i}\) be the convex inner domain inside \(\mathbb{S}^{2}(r_{i})\), \(r_{1}=1\) and \(r_{2}=\tilde{r}\), bounded by the boundary component \(\zeta_{i}\), \(i\in\{1,2\}\). Consider the closed cones of \(\mathbb{R}^{3}\) given by \[\mathcal{C}_{i}=\{\lambda p:\;\lambda\geq 0,\;p\in\zeta_{i}\cup B_{i}\},\quad i \in\{1,2\}. \tag{6.3}\] From now on we fix the outward orientation \(N\), that is, \(N\) points towards \(\mathbb{S}^{2}(r_{i})\setminus\text{cl}(B_{i})\) along each boundary component \(\zeta_{i}\). For every support plane \(P\) to \(\mathcal{C}_{i}\) consider its exterior unit normal \(v_{P}\). The set \(\mathcal{N}_{i}\) of exterior unit normals \(v_{P}\) to \(\mathcal{C}_{i}\) is a convex set of the sphere (see [51]). Moreover, since \(\Sigma\) meets \(\mathbb{S}^{2}\) and \(\mathbb{S}^{2}(\tilde{r})\) at constant angles, the boundary of \(\mathcal{N}_{i}\) is determined by the curve \(N(\zeta_{i})\). **Claim B:**\(\mathcal{N}_{1}\cap\mathcal{N}_{2}=\emptyset\). Proof of Claim B.: First, by contradiction, assume that \(N(\zeta_{1})\cap N(\zeta_{2})\neq\emptyset\). Then, there would exist \(N_{0}\in N(\zeta_{1})\cap N(\zeta_{2})\), and so the plane \(P\) with equation \(\langle q,N_{0}\rangle=0\) satisfies that \(\zeta_{1}\cup\zeta_{2}\) is contained in a closed halfspace determined by \(P\). But, this contradicts that \(\zeta_{1}\) and \(\varrho(\zeta_{2})\) cannot be contained in a common _closed_ hemisphere of \(\mathbb{S}^{2}\) by Claim A. Second, by contradiction again, assume \(\mathcal{N}_{1}\subset\mathcal{N}_{2}\). An argument as above shows that \(\zeta_{1}\) and \(\varrho(\zeta_{2})\) are contained in the same hemisphere. Which is a contradiction. Therefore, by Claim B, \(\mathbb{S}^{2}\setminus(N(\zeta_{1})\cup N(\zeta_{2}))\) has three connected components: two open disks whose closure are \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) and an open annulus \(\Omega\) whose boundary curves are \(N(\zeta_{1})\cup N(\zeta_{2})\). Hence, as the degree of a differential map between compact manifolds is the same in all regular values (see [25, Chapter 5, Lemma 1.4]) an elementary topological argument shows that \(N\) is a global diffeomorphism from \(\operatorname{cl}(\Sigma)\) onto the closure of the annulus \(\Omega:=N(\Sigma)\subset\mathbb{S}^{2}\). Thus, we can pushforward the support function \(u(p):=\langle p,N(p)\rangle\), \(p\in\operatorname{cl}(\Sigma)\), to \(\operatorname{cl}(\Omega):=N(\operatorname{cl}(\Sigma))\), \(\xi\) defined in (6.2), and \(\xi\) satisfies (6.1) and has infinitely many critical points. As a consequence of the injectivity of the Gauss map, we will deduce that the support function of \(\Sigma\) must have a constant sign. **Claim C:**_Let \(\Sigma\) be an embedded minimal annulus with free boundaries. Then,_ \[\Sigma\cap\mathcal{C}_{i}=\emptyset,\quad i\in\{1,2\},\] _where \(\mathcal{C}_{i}\) are the closed convex cones defined by (6.3)._ **Remark 6.1**.: _This result can be easily thought of if we replace the cones \(\mathcal{C}_{i}\) with a convex body whose boundary is a smooth surface and agrees with the cone except in a small neighborhood of the vertex. 
And also replace support planes with tangent planes._ Proof of Claim C.: Let \(v_{i}\in B_{i}\subset\mathcal{C}_{i}\) be a fixed vector contained in the interior of \(\mathcal{C}_{i}\). We first observe that the cone \(\mu v_{i}+\mathcal{C}_{i}\), obtained by translating \(\mathcal{C}_{i}\) by the vector \(\mu v_{i}\), is included in the interior of \(\mathcal{C}_{i}\) for each \(\mu>0\). Then, by Claim A, \((\mu v_{i}+\mathcal{C}_{i})\cap\operatorname{cl}(\Sigma)=\emptyset\) for \(\mu\) large enough. Now, decrease \(\mu\) until \((\mu v_{i}+\mathcal{C}_{i})\cap\overline{\Sigma}\neq\emptyset\) for the first time, and denote this value of \(\mu\) by \(\mu_{0}\). If \(\mu_{0}>0\) then \(\mu_{0}v_{i}+\mathcal{C}_{i}\) intersects \(\operatorname{cl}(\Sigma)\) at an interior point \(p_{0}\in\Sigma\). So, the tangent plane of \(\Sigma\) at \(p_{0}\) is a support plane of \(\mu_{0}v_{i}+\mathcal{C}_{i}\) (or equivalently, a support plane of \(\mathcal{C}_{i}\)), with the same interior unit normal \(N(p_{0})\). This is a contradiction because the images of the interior unit normals of \(\Sigma\) and \(\mathcal{C}_{i}\) are two disjoint sets of the sphere by Claim B. If \(\mu_{0}=0\) then \(\zeta_{i}\subset\mathcal{C}_{i}\cap\operatorname{cl}(\Sigma)\). But the existence of another point \(p_{0}\in\Sigma\) in \(\mathcal{C}_{i}\cap\operatorname{cl}(\Sigma)\) is impossible by the same reasoning as in the case \(\mu_{0}>0\). It is important to recall that we are considering the outward orientation. Then, Claim C implies that the support function \(u(p)=\langle p,N(p)\rangle\) is positive at some point on \(\Sigma\). **Claim D:**_Let \(\Sigma\) be an embedded minimal annulus with free boundaries. Then, its support function \(u(p)\) is strictly positive in \(\Sigma\) and vanishes along \(\partial\Sigma\). Here, \(N\) is the outward orientation._ Proof of Claim D.: First, we will show that \(\Sigma\cap L(\mathbf{o},p)=\{p\}\) for all \(p\in\Sigma\), where \(L(\mathbf{o},p)\) is the half-line starting at the origin \(\mathbf{o}\) and passing through \(p\in\Sigma\). Assume this were not the case. Consider the dilation \(\Sigma_{\lambda}:=\lambda\Sigma\), \(\lambda\geq 1\). Observe that \(\partial\Sigma_{\lambda}\subset\mathcal{C}_{1}\cup\mathcal{C}_{2}\), where \(\mathcal{C}_{i}\), \(i=1,2\), are the closed cones defined in (6.3). Then, for \(\lambda>1\) big enough, \(\Sigma\cap\Sigma_{\lambda}=\emptyset\). If there were a point \(q\in\Sigma\) so that \(|\Sigma\cap L(\mathbf{o},q)|\geq 2\), then there would exist \(\bar{\lambda}>1\) so that \(\Sigma\cap\Sigma_{\bar{\lambda}}\neq\emptyset\) and \(\Sigma\cap\Sigma_{\lambda}=\emptyset\) for all \(\lambda>\bar{\lambda}\); observe that the intersection points must lie in the interior of the surfaces. This implies that the tangent planes at the intersection points \(\Sigma\cap\Sigma_{\bar{\lambda}}\) must be equal and, also, the normals must be equal. But this contradicts the fact that the Gauss map is a global diffeomorphism. Therefore, \(u(p)=\langle p,N(p)\rangle\geq 0\) (with the outward orientation) on \(\text{cl}(\Sigma)\). Since \(u=0\) along \(\partial\Sigma\) and \(u\) is a Jacobi function, it is standard that either \(u\equiv 0\), which is impossible, or \(u>0\) on the interior. This finishes the proof. **Remark 6.2**.: _When \(\Sigma\subset\mathbb{R}^{3}\) is an embedded free boundary minimal surface, Kusner and McGrath in [31, Corollary 4.2] proved that \(\Sigma\) is a radial graph.
However, their proof depends strongly on the two-piece property for embedded free boundary minimal surfaces in \(\mathbb{B}^{3}\) (see [34]), a property that is not known to hold in our case._ Observe that Claim D implies that \(\text{cl}(\Sigma)\) is a radial graph. Finally, we need to prove: **Claim E:**_If \(\#\mathrm{Crit}(u)=+\infty\), then \(\tilde{\gamma}=\mathrm{Max}(u)=\mathrm{Crit}(u)\) is a closed simple curve._ Proof of Claim E.: If \(\#\mathrm{Crit}(u)=+\infty\), by analyticity, we already know that there exists at least one simple curve \(\tilde{\gamma}\) contained in \(\mathrm{Crit}(u)\); we will see that this is the only one. Consider the radial projection \(\varrho:\text{cl}(\Sigma)\to\mathbb{S}^{2}\), \(\varrho(p)=p/|p|\), and the outward orientation \(N:\text{cl}(\Sigma)\to\mathbb{S}^{2}\). It is easy to observe that \(p\in\mathrm{Crit}(u)\) if, and only if, \(\varrho(p)=N(p)\) and hence \(\gamma=\varrho(\tilde{\gamma})=N(\tilde{\gamma})\). Without loss of generality, up to a rotation, we can assume that the flux along the boundary components of \(\Sigma\) is vertical. Thus, since \(\text{cl}(\Sigma)\) is a radial graph, \(\gamma\) divides \(\mathbb{S}^{2}\) into two components \(S^{+}\cup S^{-}=\mathbb{S}^{2}\setminus\gamma\), where \(S^{+}\) contains the north pole and \(S^{-}\) contains the south pole. Moreover, \(\gamma\subset\varrho(\text{cl}(\Sigma))\cap N(\text{cl}(\Sigma))\) divides both sets into two components, that is, \[\mathcal{F}^{\pm}=\varrho(\text{cl}(\Sigma))\cap S^{\pm}\quad\text{ and }\quad\Omega^{\pm}=N(\text{cl}(\Sigma))\cap S^{\pm}.\] Since both \(\varrho\) and \(N\) are one-to-one, they map a component of \(\text{cl}(\Sigma)\setminus\tilde{\gamma}\) into a component of their image. Let \(\Sigma^{+}\) be the connected component of \(\Sigma\setminus\tilde{\gamma}\) whose boundary component \(\zeta^{+}=\partial\Sigma^{+}\setminus\tilde{\gamma}\) has flux along the positive vertical axis. Thus, by the free boundary condition, \(N(\Sigma^{+})=\Omega^{-}\) and \(\varrho(\Sigma^{+})=\mathcal{F}^{+}\), which implies that \(N(\Sigma^{+})\cap\varrho(\Sigma^{+})=\emptyset\). The same happens on the other component \(\Sigma^{-}\) of \(\Sigma\setminus\tilde{\gamma}\), with boundary component \(\zeta^{-}=\partial\Sigma^{-}\setminus\tilde{\gamma}\). Therefore \[N(\Sigma\setminus\tilde{\gamma})\cap\varrho(\Sigma\setminus\tilde{\gamma})=\emptyset,\] which means that the only critical points of \(u\) are those of \(\tilde{\gamma}\). So far, we have proven that \(\tilde{\gamma}=\mathrm{Crit}(u)=\mathrm{Crit}(d)\), where \(d(p)=|p|^{2}\) is the squared distance function. Since \(\Sigma\) is minimal, the only possibility is that \(\tilde{\gamma}\) is a curve of absolute minima of \(d\), that is, there exists \(\tilde{R}\in(0,1)\) so that \(\tilde{\gamma}\subset\mathbb{S}^{2}(\tilde{R})\), \(\Sigma\) is tangential to \(\mathbb{S}^{2}(\tilde{R})\) along \(\tilde{\gamma}\), and \(\tilde{\gamma}\) is a line of curvature of \(\Sigma\) by Joachimstahl's theorem [10, p. 152]. This proves Claim E. Then, we conclude that \((\Omega:=N(\Sigma),\xi:=u\circ N^{-1})\) is a solution to (5.1) satisfying the conditions of Theorem A, since \(\gamma=N(\tilde{\gamma})=\mathrm{Max}(\xi)\) by Claim E. Hence \((\xi,\Omega)\equiv(\xi_{R},\Omega_{R})\) for some \(R\in[0,1)\), which finishes the proof of Theorem B. Finally, Corollary C follows as an immediate consequence of Theorem B, taking into account that \(C_{0}\) is the only free boundary catenoid of the family in \(\mathbb{B}^{3}\).
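As a plain numerical illustration of the ingredients used above (the flux balance of Claim A, the positivity of the support function of Claim D, and the waist-circle critical set of Claim E), the following sketch evaluates them on the standard catenoid \(X(t,\varphi)=(\cosh t\cos\varphi,\cosh t\sin\varphi,t)\) truncated at \(t=\pm t_{0}\) with \(t_{0}\tanh t_{0}=1\), for which the outward conormal is radial along the boundary, so the annulus has free boundaries in the generalized sense of Definition 6.1. Whether this truncated catenoid coincides, up to scaling, with one of the model catenoids \(C_{R}\) of (2.9) is not assumed here; the code is only a sanity check and is not part of the proof.

```python
# Numerical sanity check (illustrative only) of Claims A, D, E on a truncated catenoid.
import numpy as np
from scipy.optimize import brentq

# Truncation parameter: t0*tanh(t0) = 1 makes the outward conormal radial on the boundary,
# i.e. the boundary circles meet centered spheres orthogonally (generalized free boundary).
t0 = brentq(lambda t: t * np.tanh(t) - 1.0, 0.5, 2.0)

def X(t, p):      # embedding of the catenoid
    return np.array([np.cosh(t) * np.cos(p), np.cosh(t) * np.sin(p), t])

def X_t(t, p):    # derivative in t (conormal direction along circles t = const)
    return np.array([np.sinh(t) * np.cos(p), np.sinh(t) * np.sin(p), 1.0])

def X_phi(t, p):  # derivative in phi
    return np.array([-np.cosh(t) * np.sin(p), np.cosh(t) * np.cos(p), 0.0])

phi = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dphi = phi[1] - phi[0]

def boundary_flux(t, sign):
    """Flux int_zeta nu ds along the circle t = const, with outward unit conormal
    nu = sign * X_t/|X_t| and line element ds = |X_phi| dphi = cosh(t) dphi."""
    total = np.zeros(3)
    for p in phi:
        nu = sign * X_t(t, p) / np.cosh(t)
        total += nu * np.cosh(t) * dphi
    return total

F_top, F_bot = boundary_flux(t0, +1.0), boundary_flux(-t0, -1.0)
print("flux balance F1 + F2 =", np.round(F_top + F_bot, 10))        # ~ (0, 0, 0), cf. Claim A

def support(t, p):
    """Support function u = <X, N> with the orientation N = X_phi x X_t / |...|."""
    n = np.cross(X_phi(t, p), X_t(t, p))
    n = n / np.linalg.norm(n)
    return float(np.dot(X(t, p), n))

ts = np.linspace(-t0, t0, 801)
u = np.array([support(t, 0.3) for t in ts])    # u is independent of phi by symmetry
print("u > 0 in the interior:", bool(np.all(u[1:-1] > 0)))          # Claim D
print("u ~ 0 on the boundary:", abs(u[0]) < 1e-6 and abs(u[-1]) < 1e-6)
print("u is maximal at the waist t = 0:", abs(ts[np.argmax(u)]) < 1e-2)   # Claim E

# At the waist |X| is minimal (= 1) and N is radial, so Sigma is tangent to a centered sphere.
p0 = X(0.0, 0.3)
n0 = np.cross(X_phi(0.0, 0.3), X_t(0.0, 0.3))
n0 = n0 / np.linalg.norm(n0)
print("waist radius |X| =", np.linalg.norm(p0), " N radial at waist:", np.allclose(n0, p0 / np.linalg.norm(p0)))
```

Running it prints a vanishing total flux, a positive support function that vanishes only at \(t=\pm t_{0}\), and a critical set located at the waist \(t=0\), where the surface is tangent to the unit sphere, in agreement with the picture used in the proof of Theorem B.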
## Acknowledgments The first author is partially supported by Spanish MIC Grant PID2020-118137GB-I00 and MIC-NextGenerationEU Grant 30.RP.23.00.04 CONSOLIDACION2022. The second author is partially supported by the _Maria de Maeztu_ Excellence Unit IMAG, reference CEX2020-001105-M, funded by MCINN/AEI/10.13039/ 501100011033/CEX2020-001105-M.
2302.09627
Thickness dependence of superconductivity in FeSe films
Thin films of FeSe on substrates have attracted attention because of their unusually high-temperature (Tc) superconducting properties whose origins continue to be debated. To disentangle the competing effects of the substrate and interlayer and intralayer processes, we present here results of a density functional theory (DFT)-based analysis of the electronic structure of unsupported FeSe films consisting of 1 to 5 layers (1L-5L). Furthermore, by solving the Bardeen-Cooper-Schrieffer (BCS) equation with spin-wave exchange attraction derived from the Hubbard model, we find the superconducting critical temperature Tc for 1L-5L and bulk FeSe systems in reasonable agreement with experimental data. Our results point to the importance of correlation effects in the superconducting properties of single- and multi-layer FeSe films, independently of the role of the substrate.
Jia Shi, Duy Le, Volodymyr Turkowski, Naseem Ud Din, Tao Jiang, Qiang Gu, Talat S. Rahman
2023-02-19T17:09:19Z
http://arxiv.org/abs/2302.09627v1
# Thickness dependence of superconductivity in FeSe films ###### Abstract Thin films of FeSe on substrates have attracted attention because of their unusually high-temperature (T\({}_{\rm c}\)) superconducting properties whose origins continue to be debated. To disentangle the competing effects of the substrate and interlayer and intralayer processes, we present here results of a density functional theory (DFT)-based analysis of the electronic structure of unsupported FeSe films consisting of 1 to 5 layers (1L - 5L). Furthermore, by solving the Bardeen-Cooper-Schrieffer (BCS) equation with spin-wave exchange attraction derived from the Hubbard model, we find the superconducting critical temperature T\({}_{\rm c}\) for 1L-5L and bulk FeSe systems in reasonable agreement with experimental data. Our results point to the importance of correlation effects in the superconducting properties of single- and multi-layer FeSe films, independently of the role of the substrate. ## 1 Introduction The discovery of superconductivity in Fe\({}_{\rm x}\)Se\({}_{\rm 1-x}\) systems has triggered extensive research on these layered materials because of the high transition temperature and several unique properties that come from the presence of the nematic, magnetic and superconducting orders [1-3]. The systems are of interest also because of the insights they may provide regarding the role of strong electron correlations in the occurrence of superconductivity. It is important to mention that bulk ("infinite-layer") FeSe undergoes a transition to the superconducting regime at a notable critical temperature T\({}_{\rm c}\sim\) 8K (at ambient pressure) [4]. Besides a not dramatically different value of T\({}_{\rm c}\), this material shares a common crystal structure with other Fe-based planar compounds [5-8]. Yet, there is no clear understanding of the physics behind superconductivity even in bulk FeSe (i.e., a system with no substrate) because of the simultaneous presence of several phase transitions with similar energy order - nematic, magnetic and superconducting. For example, Wang et al. [9] conclude from analysis of their neutron scattering data that spin fluctuations generate both nematicity and superconductivity (this also agrees with theoretical results; see references in [9] and also a recent work [10]). On the other hand, high-temperature (110K) inelastic neutron-scattering measurements [11] demonstrate that there are two types of fluctuations in bulk FeSe - stripe and Neel spin fluctuations - that are seen over a wide energy range. Wang et al. [11] thus conclude that FeSe is "a novel S = 1 nematic quantum-disordered paramagnet interpolating between the Neel and stripe magnetic instabilities". To further clarify the role of nematicity in superconductivity in bulk FeSe, a comparative _ab initio_ study of the superconducting properties of the orthorhombic distorted and the higher-energy (not yet discovered experimentally) tetragonal FeSe was performed [12], only to conclude that nematicity is not the driving force for the superconductivity. Another combined experimental-theoretical study of FeSe [13] concluded that high-T\({}_{\rm c}\) superconductivity in FeSe based systems could not be explained on the basis of strain-driven mechanisms (see a review [14] in which the interplay of nematicity, magnetism and superconductivity in bulk FeSe is discussed). In addition to the complexities mentioned above, Sun et al. [15] (see also Refs.
[16,17]) showed that there are indications of a pseudogap in the phase diagram of bulk FeSe, i.e. the fluctuations of the phase of the order parameter, similar to that in cuprate superconductors, might be important in the system (they can be even more important in the 2D case). In short, superconductivity in bulk FeSe continues to vex researchers as it might be the result of several competing transitions. Recently, a large superconducting gap of about 20 meV and a T\({}_{\rm c}\) an order of magnitude higher than that in \(\alpha\)-FeSe phase [18-21] have been observed in single-layer FeSe grown on insulating and doped SrTiO\({}_{3}\) (STO) substrate through molecular beam epitaxy. Note that despite the simplicity of the Fermi surface of the 1L, the symmetry of the order parameter is still not known. Since scanning tunneling microscopy data for 1L FeSe on STO show scattering between and within the electron pockets, and magnetic impurities suppress superconductivity while non-magnetic ones do not [22], they suggests that the pairing might have s-symmetry. On the other hand, results of a Quantum Monte Carlo study of the 1L FeSe on STO [23] show that phonons play an important role in superconductivity, in agreement with ARPES data [24], and that the pairing can have either s- or d-symmetry depending on the type of spin fluctuations that "glue" the Cooper pairs. The authors also do not rule out importance of the nematic fluctuations in the heavy doped systems that can enhance both s- and d-wave pairings. In general, similar to the bulk case there are evidences of the important role of nematicity [25], fluctuations [26] and stripe order [27] in the superconducting phase in 1L FeSe. For more details, we suggest the following reviews: Ref. [28] for a discussion of possible phonon- and spin-fluctuation exchange mechanisms and Refs. [29,30] for potential topological phases in 1L FeSe/STO and their application in topological-quantum-computation platform. Relevant to this work, scanning tunneling and electron energy loss spectroscopy data [31] strongly supports predominant **electronic pairing mechanism**. By now, a significant amount of experimental data has been also accumulated for FeSe films on substrates, for which the presence of the substrate also complicates understanding of the mechanism responsible for the superconductivity in these systems. Rather unexpectedly, it was found [32] that contrary to the case of the monolayer, bilayer FeSe film on STO is an insulator, most probably as a result of the strongly-reduced doping efficiency in the bottom FeSe layer as compared to the 1L FeSe-STO interface (for a review of the superconducting properties of films, see Ref. [33]). Furthermore, it was found from transport measurements [34] that 3L and 5L FeSe films on STO show the same \(\rm T_{c}=40K\) as the 1L, while the normal resistivity decreases with increasing thickness. These results also suggest that the superconductivity emanates at the interface and possibly in the first FeSe layer. It has also been argued [34] that the charge transfer from the doped substrate is crucial for high \(\rm T_{c}\). To disentangle the effect of the substrate in influencing the superconductivity of films, it is important to analyze the superconducting properties of isolated (free) systems. There are also other reasons for such a study. For example, it was demonstrated in Ref. [35] that electrochemically etched ultrathin FeSe transistor on MgO demonstrates \(\rm T_{c}=40\) K, i.e. 
shows that external field generates high \(\rm T_{c}\) in a thin film, regardless of the substrate (with critical thickness of 10 layers for high-\(\rm T_{c}\) superconductivity), i.e. the superconductivity must be generated in the FeSe subsystem. Another example can be found in Ref. [36] in which a thin FeSe flake with \(\rm T_{c}\) less than 10K was doped by a gate voltage to display a high temperature superconductivity onset at 48 K. To better understand the properties of films, it is natural to establish at first the differences in the physical properties of the FeSe bulk and film systems. It is already known that, contrary to bulk FeSe, in 1L FeSe (system with the highest \(\rm T_{c}\) among the class of iron-based superconductors [37-39]) the Fermi surface has no hole pocket at the Brillouin zone center, and that the Fermi surface topology varies dramatically with increasing thickness of the films [24,40]. As for the origin of the superconductivity in FeSe films, including the pairing mechanism, it is a hotly debated topic [41-46]. It is generally accepted that phonon-mediated pairing can be excluded since electron-phonon coupling in these systems is too weak to overcome the Coulomb repulsion and obtain a high \(\rm T_{c}\)[47,48] (also, no isotope effect has been detected in these systems [49,50]). Thus, it has been proposed that strong electron correlations play an important role in these structures [51-53]. In fact, neutron inelastic-scattering measurements demonstrated that in FeSe magnetic scattering in momentum space is very close to the wave vector \((\pi,\frac{\pi}{2})\), which means that magnetic order in FeSe might differ from the collinear AFM. In the non-superconducting phase, FeSe does exhibit spin fluctuations at low temperatures [54]. Furthermore, high-pressure experiments confirmed the presence of spin density wave (SDW) in bulk FeSe and thin FeSe films, and it was established that magnetic fluctuations (SDWs) are pivotal for the stabilization of high-Tc superconductivity in the bulk and flakes/thin films of FeSe on SrTiO\({}_{3}\) under pressure [40, 55]. Thus, we included the spin degree in the calculations, with spin arrangements set as checkerboard AFM order obtained theoretically by Cao et al. [56] that allowed to explain insulator-superconductor transition in 1L FeSe. (Our finding indicates that the total energy of the AFM state is lower than the nonmagnetic (NM) state, and magnetic moments (Table I below) are consistent with the theoretical result for the magnetic moment 1.7 for both bulk and monolayer FeSe in the AFM checkerboard phase case [56]). For other theoretical studies of the spin-fluctuation mechanism of the superconductivity in FeSe, we refer the reader to works [57] (bulk)and [2, 58, 59] (films). Very relevant to the work here, an important study [60] of the possibility of spin-wave- (i.e., electron-correlation-) induced superconductivity in several FeSe systems - isolated monolayer, monolayer on STO and bulk - was performed by using a combined quasi-particle self-consistent GW (QSGW) and DMFT approach. It was found that in these systems with five Fe d-bands near the Fermi surface correlations are orbital-selective, i.e., "Hundness" (described by the exchange parameter J) might play an important role in the properties of the materials, including the value of T\({}_{\rm c}\). 
It was also found that d\({}_{\rm xy}\) is the orbital with the strongest correlations, in agreement with experimental data [61], and that the results strongly depend on the value of J. In bulk FeSe, DFT, QSGW and DMFT all predict that the d\({}_{\rm xy}\) orbital crosses the Fermi level (in disagreement with ARPES data which show that the d\({}_{\rm xy}\) band is 17meV below the Fermi level [60]). Since in the isolated 1L FeSe the d\({}_{\rm xy}\) band is even further from the Fermi level (below by 300meV, the energy of magnetic excitations) [60], spin fluctuations and hence superconductivity are suppressed. On the other hand, as the authors claim [60], the STO substrate changes the situation: the substrate leads to a larger number of carriers and moves the d\({}_{\rm xy}\) band closer to the Fermi energy (still separated by 50-100meV, in agreement with ARPES data). Thus, on the basis of the spin-wave exchange mechanism it was concluded [60] that the d\({}_{\rm xy}\) band is mostly responsible for the enhancement of the superconducting temperature from 9K in the bulk to \(\sim\)45K in the 1L FeSe/STO (this calculated T\({}_{\rm c}\) is lower than the experimental value 75 K). To summarize, the authors found that the main reasons for high T\({}_{\rm c}\) in 1L FeSe on the STO substrate are J (larger Hund's correlations correspond to weaker electron screening) and the (substrate-induced) proximity of the d\({}_{\rm xy}\) band to the Fermi energy. In other words, the combined effect of electron correlations and of the substrate is responsible for the high T\({}_{\rm c}\) in the system. In this work, to quantify the role of correlations in the films we perform first-principles calculations of the electronic structure of isolated 1L - 5L FeSe systems, the results of which are then used to analyze the superconducting properties of the systems, assuming an antiferromagnetic (AFM) spin wave-mediated pairing corresponding to the proposed spin fluctuations at the (\(\pi\), \(\pi\)) nesting vector [57, 62]. As we show, based on a rather simple model that _allowed us to perform a comparative analysis of the properties of the complex multilayer systems_, we are able to obtain superconducting critical temperatures that are not too different from what has been observed for the corresponding systems on the substrate, suggesting that spin-mediated pairing can be dominant in both 1L and multi-layer systems on the STO substrate. ## 2 Methodology Our first-principles calculations were carried out using the Vienna _ab initio_ simulation package (VASP) [63] within the framework of density functional theory+U (DFT+U). Apart from the general need to use DFT+U instead of DFT for systems in which electron correlations are important, plain DFT produces hole pockets in the Fermi surface that are not observed experimentally (see below). The projector augmented-wave (PAW) method was applied to describe the effect of the ions, and the exchange/correlation effects were included by using the generalized gradient approximation (GGA) in the form of the Perdew-Burke-Ernzerhof (PBE) functional [64, 65]. The one-electron wave functions were expanded in a plane-wave basis with a cutoff energy of 500 eV. The Brillouin zone was sampled with a (13\(\times\)13\(\times\)1) Monkhorst-Pack k-point mesh. The tolerance for self-consistent convergence was set at 1\(\times\)10\({}^{-6}\) eV/Å, and the maximal force on the atoms for structural relaxation was chosen to be 0.001 eV/Å.
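For orientation, the computational settings quoted above can be gathered into an input sketch. The snippet below is not the authors' actual input: the tag names are standard VASP INCAR tags, the values marked "stated" mirror the text (cutoff, convergence and force criteria, k-mesh, PBE, spin polarization, and a U value of 2.3 eV that is one of those scanned later), while the remaining entries (smearing, Hubbard-U scheme, initial magnetic moments) are assumptions added only for illustration.

```python
# Sketch of a VASP-style DFT+U setup mirroring the settings quoted in the text.
# Entries not stated in the text are placeholders and are marked "assumed".
incar_tags = {
    "ENCUT": 500,            # plane-wave cutoff in eV (stated)
    "EDIFF": 1e-6,           # electronic convergence criterion (stated)
    "EDIFFG": -0.001,        # force-based relaxation criterion of 0.001 eV/Angstrom (stated)
    "GGA": "PE",             # PBE exchange-correlation functional (stated)
    "ISPIN": 2,              # spin-polarized run for the checkerboard AFM order
    "MAGMOM": "2.0 -2.0 2*0.0",  # assumed initial moments: two antiparallel Fe plus two Se per layer
    "LDAU": True,            # DFT+U on the Fe d shell
    "LDAUTYPE": 2,           # assumed Dudarev scheme
    "LDAUL": "2 -1",         # apply U to Fe d states only
    "LDAUU": "2.3 0.0",      # one of the U values scanned in the paper (1.0, 2.3, 4.0 eV)
    "LDAUJ": "0.0 0.0",
    "ISMEAR": 0,             # assumed Gaussian smearing
    "SIGMA": 0.05,           # assumed smearing width in eV
}
kpoint_mesh = (13, 13, 1)    # Monkhorst-Pack sampling (stated)

# Write a minimal INCAR-like file from the dictionary (illustration only).
with open("INCAR_sketch", "w") as fh:
    for tag, value in incar_tags.items():
        if isinstance(value, bool):
            value = ".TRUE." if value else ".FALSE."
        fh.write(f"{tag} = {value}\n")
print("k-mesh:", kpoint_mesh)
```

An actual calculation would of course also require the structure (POSCAR), k-point (KPOINTS) and pseudopotential (POTCAR) files, none of which is reproduced here.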
\(\alpha\)-FeSe occurs naturally in a tetragonal phase and its symmetry is determined by the P4/nmm space group. Our model supercell consists of n=1-5 layers of FeSe, constructed by using the optimized lattice constant of 3.76 Å and a vacuum slab of 15 Å that separates periodical images along the direction normal to the FeSe film. The model supercells of the studied five FeSe films of different thickness are shown schematically in Fig. 1. Figure 1: The model supercells of the FeSe films with thickness ranging from 1L (left) to 5L (right). The 15 Å vacuum “sticks” along the C axis are also shown. The green circles refer to Se and brown - to Fe atoms. Our calculations show that the total energy of the checkerboard AFM state of the systems is lower than the nonmagnetic (NM) one. Results of our calculations for the magnitude of the magnetic moment for different systems at U=0 are summarized in Table 1 (and at finite U's - in Supplementary Information (SI), Section 6). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Systems & 1L (\(\mu_{B}\)) & 2L (\(\mu_{B}\)) & 3L (\(\mu_{B}\)) & 4L (\(\mu_{B}\)) & 5L (\(\mu_{B}\)) & Bulk (\(\mu_{B}\)) \\ \hline Magnetic moment & 1.792 & 1.787 & 1.786 & 1.762 & 1.780 & 1.709 \\ \hline \end{tabular} \end{table} Table 1: Modulus of the magnetic moment per layer in the AFM checkerboard phase in FeSe systems. The results are consistent with the theoretical value of the magnetic moment, 1.7, obtained by Cao et al. [56] for both bulk and monolayer FeSe in the same phase. In this work, we consider a spin fluctuation-mediated scenario of superconducting pairing generated by the appropriate part of the Hamiltonian that can be derived by using the Hubbard model (details of the formalism can be found in Ref. [66], and for similar approaches for FeSe and iron superconductors see a recent review [67]). In the case of the AFM spin wave-exchange interaction the total superconducting Hamiltonian has the following form (the free electron part plus the interaction part that also defines spin fluctuations): \[\mathrm{H}=\sum_{l,\vec{\mathrm{k}},s}\varepsilon^{l}\big{(}\vec{\mathrm{k}}\big{)}c^{+}_{l\vec{\mathrm{k}}s}c_{l\vec{\mathrm{k}}s}-\frac{1}{2\mathrm{N}^{2}}\sum_{\begin{subarray}{c}l,m,\vec{\mathrm{k}},\vec{\mathrm{k}}^{\prime},\vec{\mathrm{q}}\\ s_{1}-s_{4}\end{subarray}}\mathrm{v}(\vec{\mathrm{q}})\,\overline{\sigma}_{s_{1}s_{2}}\cdot\overline{\sigma}_{s_{3}s_{4}}\,c^{+}_{l,\vec{\mathrm{k}}+\vec{\mathrm{q}},s_{1}}c_{l,\vec{\mathrm{k}},s_{2}}c^{+}_{m,\vec{\mathrm{k}}^{\prime}-\vec{\mathrm{q}},s_{3}}c_{m,\vec{\mathrm{k}}^{\prime},s_{4}}, \tag{1}\] where \(l\), \(\vec{\mathrm{k}}\), and \(s\) are the orbital (we use five d-orbitals in our calculations), momentum and spin indices, \(\overline{\sigma}_{s_{1}s_{2}}\) are the Pauli matrices, \[\mathrm{v}(\overline{\mathrm{q}})=\mathrm{U}+\mathrm{U}^{2}\chi(\overline{\mathrm{q}}) \tag{2}\] is the electron-electron interaction, U is the local Coulomb repulsion, and N is the number of sites in the lattice. Since in DFT+U the correlation effects are already included at the quasi-particle level, in our analysis in Eq.
(2) we use the one-loop approximation for calculating the susceptibility: \[\chi(\vec{\mathrm{q}})=\chi_{0}(\vec{\mathrm{q}}), \tag{3}\] where \[\chi_{0}(\vec{\mathrm{q}})=-\mathrm{i}\sum_{l,s_{1}-s_{4}}\int\frac{\mathrm{d} \omega^{\prime}}{2\pi}\int\frac{\mathrm{d}^{2}q^{\prime}}{(2\pi)^{2}}\sigma_{s _{1}s_{2}}^{2}\mathrm{G}_{l,s_{2}s_{3}}(\omega^{\prime},\vec{\mathrm{q}}^{ \prime})\sigma_{s_{3}s_{4}}^{2}\mathrm{G}_{l,s_{4}s_{1}}(\omega+\omega^{\prime },\vec{\mathrm{q}}+\vec{\mathrm{q}}^{\prime}) \tag{4}\] is the "free-electron" susceptibility In Eq. (4), \(\mathrm{G}_{lS_{1}s_{2}}(\omega,\vec{\mathrm{q}})\) is the Fourier-transformed single-electron retarded Green's function \[\mathrm{G}_{lS_{1}s_{2}}(t,\vec{\mathrm{r}})=-\mathrm{i}\theta(t)\big{\langle} c_{lS_{1}}(t,\vec{\mathrm{r}})c_{lS_{2}}^{+}(0,0)\big{\rangle}{=}-\mathrm{i} \theta(t)\delta_{s_{1}s_{2}}\big{\langle}c_{lS_{1}}(t,\vec{\mathrm{r}})c_{lS_ {1}}^{+}(0,0)\big{\rangle} \tag{5}\] that has the following form in frequency-momentum representation: \[\mathrm{G}_{lS_{1}s_{2}}(\omega,\vec{\mathrm{q}})=\delta_{s_{1}s_{2}}\frac{1} {\omega-\varepsilon^{l}(\vec{\mathrm{q}})+\mathrm{i}\delta}, \tag{6}\] where \(\varepsilon^{l}(\vec{\mathrm{q}})\) is the DFT+U dispersion of the \(l\)th band. To take into account multi-orbital effects, we calculate susceptibility with contribution of all d-bands. After the electron susceptibility \(\chi(\vec{\mathrm{q}})\) was calculated, we approximated it by a function to ensure that it has maximum at \(\vec{\mathrm{q}}\equiv\vec{\mathrm{Q}}=\left(\frac{\pi}{\mathrm{a}},\frac{\pi }{\mathrm{a}}\right)\) corresponding to the AFM spin-wave exchange interaction: \[\chi(\vec{\mathrm{q}})\approx\bar{\chi}_{0}\big{[}1-\mathrm{b}\big{(}\cos q_{ x}+\cos q_{y}\big{)}\big{]}, \tag{7}\] where parameter \(\mathrm{b}>0\) for AFM spin arrangement (in the ferromagnetic case, \(\mathrm{b}<0\)) and \(\bar{\chi}_{0}\) is a momentum-independent parameter. Parameters \(b\) and \(\bar{\chi}_{0}\) were obtained by fitting the susceptibility (7) to the exact function (3). Next, we used the fact that the electron-electron attractive interaction can be factorized: \[V_{\vec{\mathrm{k}}\vec{\mathrm{k}}^{\prime}}=\sum_{\mathrm{n}}V_{\mathrm{n} }\gamma_{\mathrm{n}}\big{(}\vec{\mathrm{k}}\big{)}\gamma_{\mathrm{n}}\big{(} \vec{\mathrm{k}}^{\prime}\big{)}, \tag{8}\] where \(\gamma_{\mathrm{n}}\big{(}\vec{\mathrm{k}}\big{)}\) is the symmetry function in channel n. In particular, in the attractive singlet-pairing extended s-wave channel, one has \[\gamma_{s(e)}\big{(}\widehat{\mathrm{k}}\big{)}=\cos\mathrm{k_{x}}+\cos\mathrm{k_ {y}},\hskip 14.226378ptV_{s(e)}=-\frac{3}{4}\mathrm{U}^{2}\overline{\chi}_{0} \mathrm{b} \tag{9}\] and in the also attractive d-wave channel, considered in this paper, the corresponding functions are \[\gamma_{d}\big{(}\widehat{\mathrm{k}}\big{)}=\cos\mathrm{k_{x}}-\cos\mathrm{k_ {y}},\hskip 28.452756ptV_{d}=-\frac{3}{4}\mathrm{U}^{2}\overline{\chi}_{0} \mathrm{b} \tag{10}\] (the interaction strengths \(\mathrm{V_{s(e)}}\) are equal in both channels). To approximate exact susceptibility (see SI, Section 7) by Eq. (7), we put \(\mathrm{b}=0.5\) and \(\overline{\chi}_{0}=\big{(}\chi(\pi,\pi)+\chi(0,0)\big{)}/2\). 
This gives a susceptibility with a maximum at \(\overline{\mathrm{q}}=(\pi,\pi)\) and a minimum at \(\overline{\mathrm{q}}=(0,0)=(2\pi,2\pi)\), which is a reasonable approximation for the AFM spin-exchange scenario of superconductivity (indeed, as follows from SI, Section 7, the exact susceptibility has a sharp maximum at \(\overline{\mathrm{q}}=(\pi,\pi)\)). In this case, for the superconducting gap one has the following function: \[\Delta_{\mathrm{n}}\big{(}\widehat{\mathrm{k}}\big{)}=\Delta_{\mathrm{on}}\gamma_{\mathrm{n}}\big{(}\widehat{\mathrm{k}}\big{)}, \tag{11}\] where \(\Delta_{\mathrm{on}}\) is a channel-dependent parameter. The corresponding channel-dependent BCS equation for the critical temperature is \[1=\mathrm{V_{n}}\int\frac{d^{2}k}{(2\pi)^{2}}\gamma_{\mathrm{n}}^{2}\big{(}\widehat{\mathrm{k}}\big{)}\frac{\tanh\frac{\varepsilon\big{(}\widehat{\mathrm{k}}\big{)}}{2\mathrm{T_{c}}}}{\varepsilon\big{(}\widehat{\mathrm{k}}\big{)}}, \tag{12}\] where \(\varepsilon\big{(}\widehat{\mathrm{k}}\big{)}\) is the dispersion of the conduction band. Equation (12) is the equation we solved to find the superconducting critical temperature of the layered systems at different values of \(\mathrm{U}\). ## 3 Results and discussions ### Electronic structure The calculations of the electronic properties are performed to yield equilibrium (relaxed) atomic positions with the fixed structural volume of the AFM configuration. As one can see from Fig. 2 and SI Section 1 (and for bulk - SI, Section 3), at first sight the dependence of the density of states (DOS) of the films, with dominant Fe-3d contribution, on their thickness is not pronounced, though as we show below the difference in the DOS near the Fermi energy plays a crucial role. As it follows from Fig. 2(a) and Fig. S9(a) in SI, DFT results for the band structure of bulk FeSe are in agreement with those from calculations previously reported [62], suggesting the validity of our approach and results. Using the computational setting of the bulk system, we find that the band structure of the 5L system is quite similar to that of bulk FeSe. Also, for the 1L case, our results are in agreement with those obtained in prior DFT calculations [68] and experimental data (see Supplementary Information in Ref. [38]) which report signatures of flat bands. Moreover, the flat bands obtained in the DFT calculation [68] also survive in our DFT+U results (Fig. 2 and Figs. S2-S5). The corresponding Fermi surfaces in the 1L case are shown in Figure 3 (for other systems, see SI, Sections 2,4). Importantly, DFT+U calculations give only electron pockets around the M points and hole pockets are absent (contrary to the DFT case). Figure 2: The total and projected Fe (3d) and Se (2p) DOS for the 1L FeSe system at different values of U. According to BCS [69] and other theories, the DOS at the Fermi level N(0) is the key quantity that determines the possibility of having a superconducting condensate (the critical temperature and the gap in the BCS theory are proportional to exp (\(-1/2\pi\lambda\)N(0)), where \(\lambda\) is the coupling constant; see also Eq. (12), where one can express the momentum integration in terms of an energy integration with the corresponding DOS). Thus, whether there is a high DOS around the Fermi level is important for opening a superconducting gap. It is clear from Fig. 2 that N(0), with the main contribution coming from the Fe(3d) states, is rather high for all 1L - 5L systems.
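To make the procedure of Eqs. (3)-(12) concrete, the following self-contained sketch repeats its three steps for a toy one-band tight-binding dispersion on the square lattice rather than for the actual DFT+U bands: it evaluates the one-loop susceptibility at \(\mathrm{q}=(0,0)\) and \(\mathrm{q}=(\pi,\pi)\) to confirm the \((\pi,\pi)\) dominance assumed in Eq. (7), builds the d-wave coupling of Eq. (10) with \(\mathrm{b}=0.5\) and \(\bar{\chi}_{0}=(\chi(\pi,\pi)+\chi(0,0))/2\) as in the text, and solves the gap equation (12) for T\({}_{\rm c}\) by root finding. The hopping, U and the temperature at which the susceptibility is evaluated are illustrative values in units of the hopping, not the parameters used in the paper.

```python
# Toy illustration of Eqs. (3)-(12): chi0 peak at (pi,pi), d-wave coupling, Tc from Eq. (12).
import numpy as np
from scipy.optimize import brentq

t_hop, U, b, T_chi = 1.0, 1.5, 0.5, 0.05   # illustrative parameters, energies in units of the hopping
N = 256                                     # k-points per direction
k = -np.pi + 2.0 * np.pi * np.arange(N) / N
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t_hop * (np.cos(KX) + np.cos(KY))          # half-filled band, mu = 0

def fermi(e, T):
    return 0.5 * (1.0 - np.tanh(e / (2.0 * T)))

def chi0(shift, T):
    """Static one-loop susceptibility for q commensurate with the grid:
    chi0(q) = (1/N^2) sum_k [f(e_k) - f(e_{k+q})] / (e_{k+q} - e_k)."""
    eps_q = np.roll(np.roll(eps, shift[0], axis=0), shift[1], axis=1)
    de = eps_q - eps
    degen = np.abs(de) < 1e-10
    f, fq = fermi(eps, T), fermi(eps_q, T)
    lindhard = np.where(degen, f * (1.0 - f) / T, (f - fq) / np.where(degen, 1.0, de))
    return lindhard.mean()

chi_00 = chi0((0, 0), T_chi)
chi_pipi = chi0((N // 2, N // 2), T_chi)                 # q = (pi, pi)
print(f"chi0(0,0) = {chi_00:.3f},  chi0(pi,pi) = {chi_pipi:.3f}  (peak at (pi,pi))")

chi_bar = 0.5 * (chi_pipi + chi_00)                      # chi_bar_0 as defined in the text
V_d = 0.75 * U**2 * chi_bar * b                          # magnitude of the attractive V_d of Eq. (10)
gamma_d = np.cos(KX) - np.cos(KY)                        # d-wave symmetry function

def gap_equation(T):
    """Right-hand side of Eq. (12) minus 1; the BZ integral becomes a k-grid average."""
    mask = np.abs(eps) < 1e-12
    kern = np.where(mask, 1.0 / (2.0 * T), np.tanh(eps / (2.0 * T)) / np.where(mask, 1.0, eps))
    return V_d * np.mean(gamma_d**2 * kern) - 1.0

Tc = brentq(gap_equation, 1e-3, 5.0)
print(f"illustrative d-wave Tc ~ {Tc:.3f} (units of the hopping)")
```

The actual results of Fig. 4 are obtained with the DFT+U dispersions and the multi-orbital susceptibility described above; this toy version only illustrates how a \((\pi,\pi)\)-peaked susceptibility translates into a finite d-wave T\({}_{\rm c}\) through Eq. (12).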
Thus, the Fe 3d states that dominate the DOS, and the spin waves they "generate", may play the key role in forming the superconducting condensate in the FeSe films. ### Superconducting critical temperature Using equation (12), we calculated the transition temperature in 1L-5L FeSe films at different values of U. The results presented in Fig. 4 show that T\({}_{\rm c}\) grows with increasing U and decreasing number of layers in the film. The results for the critical temperature of 1L FeSe are in rather good agreement with the experimental T\({}_{\rm c}\)\(\sim\) 65K for 1L FeSe grown on the STO substrate [18] for all values of U. Also, the results for the bulk FeSe at different U's are not too different from the experimental critical temperature (\(\sim\) 8K), thus supporting the meaningfulness of the approximations that we have used in this work. One can understand the reason for the difference in critical temperature between systems with different numbers of layers through their normalized (per-layer) total DOS (shown in Fig. 5, and for bulk in Fig. 6, see also SI, Section 5). Namely, the difference can be attributed to "melting" of the peak at \(\sim\)0.07eV and to the shift of the "main" peak at \(\sim\)0.03eV as the number of layers increases (left Fig. 5). The above can semi-quantitatively explain the similarity of the results within two groups (1L-3L and 4L-5L) of the considered systems. It must be noted that we have obtained rather high critical temperatures, even though the d\({}_{\rm xy}\) orbital (claimed in several works mentioned above to be the dominant one) is not close to the Fermi level (similar DFT results for the spectrum were obtained, e.g., in Ref. [70]). The main reason for this is the collective effect of several spin-wave modes generated by different Fe d-orbitals and the much closer average position in energy of the d-bands to the Fermi energy (\(\sim\) 30meV), as compared to the QSGW results (\(\sim\)150meV) [80]. Another important result is the lack of strong sensitivity of T\({}_{\rm c}\) to the Coulomb repulsion parameter, as long as it lies in the range of "realistic" U's\(\sim\)1-4eV, which suggests the reliability of the results. We expect that the substrate might modify more significantly the average-per-layer electronic DOS of 1L and 2L systems and not the spin-wave contribution to T\({}_{\rm c}\), at least in the 3L-5L systems. On the other hand, as the results of our comparative analysis show, the spin-wave contribution is basically of the same order of magnitude for all systems, even if they are put on a substrate. Figure 3: The Fermi surface of the 1L FeSe system at different values of U. Figure 4: Dependence of the critical temperature on U for different FeSe systems. Figure 5: **Top:** normalized (per layer) total DOS in the case of different layered systems. **Bottom:** the same for different d-orbitals. Figure 6: **Top:** normalized total DOS in the case of bulk FeSe with different U values (U=1.0 eV, 2.3 eV, 4.0 eV). **Bottom:** the same for different d-orbitals. ## 4 Conclusions Through first-principles calculations of the electronic structure and many-body calculations of the transition temperature for the 1L-5L FeSe films, we find that the main contribution to the superconducting pairing comes from the Fe 3d orbital electrons. The difference in this part of the projected DOS for the different thicknesses of the FeSe film is mainly responsible for the difference in T\({}_{\rm c}\) between the studied systems. We find that the critical temperature grows with increasing value of the Coulomb repulsion parameter U.
In particular, we find that T\({}_{\rm c}\)\(\sim\)33 K - 53 K for the 1L system, which is in reasonable agreement with the experimental data for 1L FeSe on STO (65K). Similarly, for the same values of U the results for the bulk system (\(\sim\) 9.4 K - 21K) are also in reasonable agreement with experiments (\(\sim\)8K). These results suggest that the rather simple model used in this work may catch the main effects of electron correlation (which concerns mostly the FeSe subsystem, i.e. beyond the substrate). Moreover, we demonstrate in the above that antiferromagnetic fluctuations can produce rather high superconducting T\({}_{\rm c}\)'s in FeSe films, even without the inclusion of contributions (phonons, doping) from the substrate. In summary, in this work for the first time we explored the scenario of the spin-wave exchange mechanism of superconductivity for FeSe systems with different numbers of layers. As a test of the approach, we applied it to bulk FeSe and found a rather good agreement of the critical temperature with experimental data. Thus, we expect a similar agreement in the case of the 2L-5L systems, which are expected to be much less affected by the substrate (doping and phonons) than the 1L system. Regarding the 1L FeSe, we show that, independently of the substrate, spin-wave fluctuations can give a significant contribution to the superconducting critical temperature. Although the model used is rather simple, it helps obtain a meaningful **estimation of the difference of the role of correlations** in the superconductivity in FeSe, as a function of layer thickness - a task that has so far not been accomplished with the desired accuracy by more refined models. No doubt the accuracy of the obtained results has to be further tested by comparing them with future experimental data, for both isolated systems and systems on substrates. ## Acknowledgements The work of the UCF group was supported partially by DOE grant DE-FG02-07ER46354, while that of JS was supported by the China Scholarship Council (CSC) and of GQ by National Natural Science Foundation of China (NSFC) grant No. 11874083. ## Data availability statement All data that support the findings of this study are included within the article. Other relevant or raw data are available from the authors upon reasonable request.
2301.08069
Geometric tool kit for higher spin gravity (part I): An introduction to the geometry of differential operators
These notes provide an introduction to the algebra and geometry of differential operators and jet bundles. Their point of view is guided by the leitmotiv that higher-spin gravity theories call for higher-order generalisations of Lie derivatives and diffeomorphisms. Nevertheless, the material covered here may be of general interest to anyone working on topics where geometrical (coordinate-free, global, generic) and mathematically rigorous definitions of differential operators are required.
Xavier Bekaert
2023-01-19T13:29:34Z
http://arxiv.org/abs/2301.08069v2
# Geometric tool kit for higher spin gravity (part I): ###### Abstract These notes provide an introduction to the algebra and geometry of differential operators and jet bundles. Their point of view is guided by the leitmotiv that higher-spin gravity theories call for higher-order generalisations of Lie derivatives and diffeomorphisms. Nevertheless, the material covered here may be of general interest to anyone working on topics where geometrical (coordinate-free, global, generic) and mathematically rigorous definitions of differential operators are required. ###### Contents * 1 Introduction * 1.1 Plan * 1.2 Teaser * 1.3 Disclaimer * 2 Enveloping geometry: Differential operators as higher vector fields * 3 Differential structures of order zero: points and zeroth-order (co)jets * 3.1 Maximal ideal: What's the point? * 3.2 Zeroth-order jet bundle * 3.3 Multiplicative functionals: Again, what's the point? * 3.4 Zeroth-order cojet bundle * 3.5 Pullback of functions * 4 Differential structures of order one: tangent (co)vectors * 4.1 Tangent vector fields as derivations * 4.1.1 Derivations * 4.1.2 Vector fields on a manifold * 4.1.3 Flow of a vector field * 4.1.4 Lie derivative of vector fields via infinitesimal flow and via adjoint action * 4.2 Tangent vectors * 4.3 Cotangent vectors * 4.4 Tangent tensors * 4.4.1 Symmetric contravariant tensor fields as functions on the cotangent bundle * 4.4.2 Poisson algebra of symmetric contravariant tensor fields * 5 Differential structures of order one: first-order (co)jets * 5.1 Functions as differential operators of order zero * 5.2 First-order differential operators * 5.2.1 Vector fields as first-order differential operators * 5.2.2 Semidirect sum of Lie algebras * 5.2.3 Some general facts * 5.3 First-order jets * 5.4 First-order cojets * 6 Differential structures of higher order: (co)jets * 6.1 Higher-order differential operators * 6.1.1 Rings over an algebra * 6.1.2 Algebraic definition of differential operators * 6.1.3 Filtration * 6.1.4 Almost-commutative algebra of differential operators * 6.1.5 Differential operators as almost-linear operators * 6.1.6 Differential operators as infinitesimal automorphisms * 6.1.7 No-go theorem against (naive) higher-spin diffeomorphisms * 6.1.8 Yes-go theorem on (formal) higher-spin diffeomorphisms * 6.2 Higher-order jets * 6.2.1 Higher-order contact ideals * 6.2.2 Truncated polynomials and formal power series * 6.2.3 Higher-order jet bundles * 6.3 Higher-order cojets * 6.3.1 Higher-order cojets as generalised functions * 6.3.2 Higher-order cojet bundles * 6.3.3 Differential operators as cojet fields * 6.4 Pushforward of differential operators and cojets Introduction The recurrent theme behind the higher-spin extension of usual spacetime symmetries can be summarised as follows: _Replace everywhere possible "vector fields" by "differential operators"_. For instance, a by now standard recipe for constructing the higher-spin generalisation of a spacetime symmetry algebra (say of isometries or conformal transformations) is by allowing for higher-derivative differential operators as extra symmetry generators beyond vector fields. More precisely, higher-spin algebras of rigid symmetries are higher-order extensions of the isometry (or conformal) algebra obtained by taking its enveloping algebra (with respect to some irreducible representation). 
The gauging, a la Cartan, of such higher-spin algebras leads to higher-spin gravity theories.1 Furthermore, another standard recipe in higher-spin gravity is to pack the tower of metric-like fields of all ranks into a single generating function on the cotangent bundle, which can be interpreted as the symbol of a differential operator. Accordingly, higher-spin gravity calls for higher-order generalisations of Lie derivatives and diffeomorphisms. This issue motivates this introduction to the mathematics (algebra and geometry) of differential operators. Footnote 1: Many pedagogical reviews on higher-spin gravity are available by now: advanced ones [1] as well as introductory ones [2]. Two books of conference proceedings also offer a panorama of this research area [3]. Coordinate-free formulations of jet bundles2 and differential operators3 are reviewed here in a self-contained way. The leitmotiv behind these notes is to view differential operators (respectively, jet fields) as natural higher-derivative generalisations of vector fields (respectively, differential one-forms). Although this material is standard, the way some notions are presented and some side observations might be original, to the best of the author's knowledge. Footnote 2: A classical textbook on the subject is [4]. Other textbooks covering the geometry of jet bundles are [5, 6, 7] while shorter introductions are [8]. Footnote 3: An inspiring source on the algebro-geometric approach to smooth manifolds, differential operators and jet bundles is the classical textbook [9]. A concise pedagogical introduction is [10]. ### Plan The plan of this paper is as follows: Section 2 initiates these notes with a general outlook on differential operators as natural generalisations of vector fields. Pragmatic readers may jump directly to Section 3 where the technical part is actually starting. The section 3 aims to immediately introduce the reader to the modern algebraic approach to geometry where one shifts the focus from the manifold itself to the algebra of functions on it. In particular, the one-to-one correspondence between the points of a manifold and the maximal ideals (or, dually, the multiplicative functionals) of the corresponding algebra of functions is reviewed. This abstract detour proves to be necessary in order to address the geometry of differential operators later on. Similarly, the modern definition of vector fields on a manifold as derivations of the algebra of functions is adopted in Section 4. The one-to-one correspondence between diffeomorphisms and automorphisms of the algebra of functions is also discussed. The (co)tangent bundles are adressed along similar lines and the structure of symplectic (respectively, Schouten) algebra for the space of functions on the cotangent bundle (respectively, symbols of differential operators) is presented. First-order differential operators and the algebraic structure they span are introduced in Section 5. This section prepares the ground for the transition from first-order to higher-order differential structures in the next section. First-order (co)jets are also introduced. Section 6 is the main goal of these notes. There, jets (respectively, cojets) are introduced as higher-order generalisations of cotangent (respectively, tangent) vectors. Various standard definitions of differential operators are reviewed such as the elegant algebraic definition due to Grothendieck. 
In order gain some geometrical intuition and stress the analogy with vector fields, a definition of differential operators as sections of cojet bundles is presented. Finally, the almost-commutative structure of the algebra of differential operators is emphasised. ### Teaser Wishfully thinking, higher-spin gravity might provide a fresh viewpoint on some modern geometrical notions perhaps less familiar to theoretical physicists, as well as a new playground for applications of these conceptual tools. In order to proceed step by step, most of these notes will be devoted to an introduction for physicists to some modern (though, nowadays, textbook) viewpoints and developments in differential geometry, some of which could turn out to be useful in higher-spin gravity. More specifically, these lecture notes will be divided into four parts providing, respectively, introductions to the following subjects: * [Part I] the geometry of differential operators * [Part II] the universal enveloping algebras of Lie algebroids * [Part III] the general theory of connections on fibre bundles * [Part IV] higher-order frames and jet groups ### Disclaimer A first disclaimer is that these lecture notes are not intended for the pragmatists. They will be disappointed by the content since the material reviewed here does not necessarily provide tools for performing new calculations. In fact, a second disclaimer is that, despite its title, these lecture notes will not review nor address any practical problem in higher-spin gravity. The hope behind these lecture notes is rather that some better understanding of the geometry of higher-order differential operators and the corresponding potential generalisations of connections might contribute to foster some advances in higher-spin gravity. For instance, the subtleties encountered around the proper functional classes of infinitesimal vs finite automorphisms of the algebra of differential operators might have to do with difficulties faced by higher-spin interactions, such as their elusive (non-)locality properties. Also, the non-tensorial nature of cojets might be at the root of the problem of minimal coupling higher-spin gauge fields to gravitational backgrounds, in the metric-like formulation. Enveloping geometry: Differential operators as higher vector fields To start these notes, let us summarise its main emphasis and viewpoint in non-technical terms. As is well-known, one of the leitmotives of modern theoretical physics is that fundamental interactions are governed by symmetry principles. Accordingly, one will try to motivate via symmetries the relevance of some higher-order structures on a manifold, such as the (co)jets that generalise the (co)tangent vectors and probe the differential structure beyond first-order. More specifically, higher-order differential operators will be introduced as natural generalisations (or better, as refinements, in the sense that they probe further the smooth structure of the manifold) of vector fields seen as infinitesimal symmetries of a smooth manifold. Geometric symmetries, _i.e._ finite or infinitesimal symmetries of a smooth manifold such as diffeomorphisms or vector fields, have an algebraic counterpart as automorphisms or, respectively, derivations of the algebra of functions on the (possibly fibred) manifold, _i.e._ linear transformations which are compatible with the pointwise product structure of this algebra. 
Indeed, it is well known that vector fields can be thought of, _geometrically_, as infinitesimal diffeomorphisms (_i.e._ infinitesimal symmetries of the manifold) or, _algebraically_, as derivations (_i.e._ infinitesimal symmetries of the algebra of functions on the manifold). More generally, the notion of infinitesimal symmetries of a fibred manifold is nowadays captured at the algebraic level by Lie-Rinehart algebras (_e.g._ the algebra of first-order differential operators) and at the geometrical level by Lie algebroids, _i.e._ vector bundles endowed with an anchor and a Lie bracket on their space of sections (a paradigmatic example being the tangent bundle whose sections are precisely the vector fields). In field theory, a standard generalisation of geometric symmetries is given by projective representations (the transformation of a function comes not only from a transformation of the coordinates but also involves an extra multiplication by a factor), which correspond infinitesimally to first-order differential operators (sums of a vector field and a function). The latter can be seen as endomorphisms (_i.e._ linear transformations) compatible with the first-order structure of the space of functions. However, from this algebraic perspective there is no compelling reason to stop at first order if one considers smooth functions. In this sense, higher-order differential operators can be thought of as infinitesimal symmetries of the space of functions probing further the smooth structure of the manifold. There is actually a systematic procedure for performing this higher-order generalisation: passing to the universal enveloping algebra of the corresponding Lie-Rinehart algebra of infinitesimal symmetries. For instance, the associative algebra of differential operators (\(\Leftrightarrow\) infinitesimal higher-order symmetries) arises as the universal enveloping algebra of the Lie algebra of vector fields (\(\Leftrightarrow\) infinitesimal diffeomorphisms). In more physical terms, the algebra of differential operators can be seen as a non-commutative algebra of functions on the quantum phase space, whose classical limit is a commutative algebra of functions on phase space. In more mathematical terms, the associative algebra of differential operators is called "almost-commutative", in the sense that the product of two differential operators has order strictly higher than their commutator. In other words, differential operators commute modulo lower-order terms. The various steps of increasing abstraction that one performs when passing from the manifold itself to its almost-commutative algebra of differential operators can be summarised as follows: \[\begin{array}{c}\text{Manifold}\stackrel{{\text{duality}}}{{\Longleftrightarrow}}\text{ Commutative algebra of functions}\\ \stackrel{{\text{derivations}}}{{\Longleftrightarrow}}\text{ Lie-Rinehart algebra of 1st-order differential operators}\\ \stackrel{{\text{enveloping}}}{{\Longleftrightarrow}}\text{ Almost-commutative algebra of differential operators}\end{array}\] The arrow "duality" illustrates the modern algebraic view on geometry where a manifold is equivalently described in terms of the commutative algebra of functions on this manifold. The arrow "derivations" stands for the addition of vector fields to the previous algebra of functions, thereby leading to the vector space of 1st-order differential operators endowed with a structure of Lie algebra via the commutator. 
The arrow "enveloping" corresponds to the step where one removes the bound on the number of derivatives, thereby obtaining the associative algebra of differential operators. But one can also proceed backward and reconstruct step by step the manifold itself from its almost-commutative algebra of differential operators by considering subalgebras as follows: Almost-commutative algebra of differential operators \[\begin{array}{c}\text{1st-order}\stackrel{{\text{sub.}}}{{ \Longleftrightarrow}}\text{ Lie-Rinehart algebra of 1st-order differential operators}\\ \\ \text{0th-order}\stackrel{{\text{sub.}}}{{\Longleftrightarrow}} \text{ Commutative algebra of functions}\\ \\ \text{max. ideals}\stackrel{{\text{max. ideals}}}{{\Longleftrightarrow}} \text{ Manifold}\end{array}\] However, there is a price to pay to interpret differential operators (with derivatives higher than one) as infinitesimal symmetries of the algebra of functions: one should relax the compatibility with the pointwise product, in so far as higher-order differential operators are _not_ derivations of the commutative algebra of functions. Nevertheless, in a sense they preserve its module structure. Still, one may consider the commutative algebra of functions to be just the tip of the iceberg, a small subalgebra of the almost-commutative algebra of differential operators. While it is by now standard to admit that all the geometric information about a manifold is equivalently encoded into the algebraic properties of its commutative algebra of functions (on the manifold), one might go one step further and stress that it is also encoded into the almost-commutative algebra of differential operators on the manifold, _i.e._ to adopt the following point of view: \[\begin{array}{rcl}\mbox{Manifold}&\Longleftrightarrow&\mbox{Commutative algebra of functions}\\ &\Longleftrightarrow&\mbox{Lie-Rinehart algebra of vector fields}\\ &\Longleftrightarrow&\mbox{Almost-commutative algebra of differential operators}\end{array} \tag{1}\] For the sake of simplicity, rather than looking at finite automorphisms one may rather start by focusing on the infinitesimal automorphisms of the algebras, _i.e._ their derivations. On the one hand, all derivations of the commutative algebra of functions on a manifold are outer and provide an algebraic definition of vector fields on the manifold. On the other hand, all derivations of the Lie algebra of vector fields are inner. The Lie bracket of two vector fields identifies with their commutator and defines the Lie derivative of a vector field along another one. Analogously, all derivations of the almost-commutative algebra of differential operators are inner, at least locally. In this sense, the adjoint action of any differential operator on the associative algebra of differential operators provides a natural higher-order generalisation of the Lie derivative along a vector field. However, as can be expected from the equivalence between all the above notions (in a sense made mathematically precise via the generalised Milnor exercises reviewed in the following sections), the almost-commutative algebra of differential operators must have (essentially) the same collection of symmetries as the commutative algebra of functions. Locally, the automorphisms of the almost-commutative algebra of differential operators are indeed in one-to-one correspondence with the automorphisms of the commutative algebra of functions. 
Moreover, diffeomorphisms are (exactly) in one-to-one correspondence with automorphisms of the commutative algebra of functions. This means that the automorphisms of the almost-commutative algebra of differential operators remain too narrow to look for a higher-order generalisation of diffeomorphisms. Pursuing the above analogy and motivated by higher-spin gravity, one may consider somewhat larger classes of automorphisms that are called higher-spin diffeomorphisms. In more physical terms, higher-spin diffeomorphisms can be thought of as the quantum version of canonical transformations.4 Indeed, in a sense higher-spin diffeomorphisms generalise the usual diffeomorphisms of a manifold in much the same way that canonical transformations generalise the diffeomorphisms of the configuration space in classical mechanics. Turning back to differential operators, let us repeat that they can be seen as derivations of the almost-commutative algebra of functions on the quantum phase space. Since the derivations of commutative algebras of functions can be interpreted as vector fields on the corresponding manifold, one may in this sense assert that differential operators are nothing but some specific class of Hamiltonian vector fields on the quantum phase space. Moreover, differential operators are _infinitesimal_ higher-spin diffeomorphisms, so with this terminology in mind one might also view them as higher-spin vector fields. These side comments may be taken as possible justifications for the provocative subtitle of this section ("Differential operators as higher vector fields") since there are several ways in which differential operators can be seen as generalisations of vector fields. This whole philosophical discussion can be summarised in Table 1. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Geometry & Manifold & Zeroth-order & First-order & Higher-order & Infinite-order \\ & (smooth) & structure & structure & structure & structure \\ \hline Algebra & Maximal & Commutative & Lie-Rinehart & Almost-comm. & Associative \\ & spectrum & algebra & algebra & algebra & algebra \\ \hline Elements & Points & Functions & Vector fields & Differential & Beyond diff. \\ & & & & operators & operators \\ \hline Infinitesimal & Infinitesimal & Vector & 1st-order & Differential & Beyond diff. \\ automorphisms & diffeos & fields & diff. ops & operators & operators \\ \hline Automorphisms & Diffeos & Diffeos & Diffeos & Diffeos & Higher-spin \\ & & & & & diffeos \\ \hline \end{tabular} \end{table} Table 1: Algebraic/geometrical structures and their symmetries Differential structures of order zero: points and zeroth-order (co)jets To any manifold is associated the algebra of functions on it. But the converse is also true, in the sense that the manifold can sometimes be defined purely in terms of its algebra of functions, as the infinite family of maximal ideals of this commutative algebra. Indeed, a possible definition of a point is as a maximal ideal of the algebra of functions on the manifold. This definition is, unfortunately, very abstract. But it has the virtues of being at the same time purely algebraic and coordinate-free. This point of view proves to be fruitful to address the geometry of differential operators, which lack intuitive support (contrary to vector fields, which one can always think of as infinitesimal flows or as vectors tangent to a congruence of parametrised curves). ### Maximal ideal: What's the point? 
In order to get a flavor of how this abstract definition of a point is equivalent to more intuitive ones, consider a smooth manifold \(M\) of finite dimension \(n\). Associated with this manifold \(M\) comes the commutative algebra \(C^{\infty}(M)\) of smooth functions on \(M\) taking values in a field \(\mathbb{K}\) (in practice \(\mathbb{R}\) here, but one may also replace it by \(\mathbb{C}\)). The pointwise product will be denoted by \(\cdot\) and is defined by the equality \[(f\cdot g)|_{m}=f|_{m}\,g|_{m}\,,\qquad\forall m\in M\,, \tag{2}\] where \(f,g:M\to\mathbb{K}\) denote two functions on \(M\) and the symbol \(|_{m}\) stands for the evaluation at the point \(m\). When considered as a commutative algebra, the set \(C^{\infty}(M)\) of smooth functions will often be called the **structure algebra**, because it encodes algebraically the geometric structure of the manifold. Consider a given point \(m\in M\). The commutative subalgebra \[{\cal I}^{0}(m)\,:=\,\{\,f\in C^{\infty}(M)\,:\,f|_{m}=0\,\} \tag{3}\] of functions \(f\) vanishing at \(m\) is an ideal of the structure algebra \(C^{\infty}(M)\) that will be called the **contact ideal of order zero** at \(m\). Indeed, the pointwise product \(f\cdot g\) of a function \(f\in{\cal I}^{0}(m)\) vanishing at \(m\) with any function \(g\in C^{\infty}(M)\) is an element \(f\cdot g\in{\cal I}^{0}(m)\), since \((f\cdot g)|_{m}\,=\,f|_{m}\,g|_{m}\,=\,0\,\). The quotient \[J^{0}_{m}M\,:=\,C^{\infty}(M)\,/\,{\cal I}^{0}(m) \tag{4}\] is called the **zeroth-order jet space** at the point \(m\). By definition, it is the algebra of functions on \(M\) quotiented by the equivalence relation: \(f\sim g\Leftrightarrow f-g\in\mathcal{I}^{0}(m)\,\). In other words, two functions \(f\) and \(g\) are equivalent if and only if they take the same value at \(m\): \(f\sim g\Leftrightarrow f|_{m}=g|_{m}\,\). Such an equivalence class \(\llbracket f\rrbracket\in J^{0}_{m}M\), is called a **jet of order zero** or **zeroth-order jet** (or simply 0-jet) at \(m\). The equivalence class \(\llbracket f\rrbracket\) is completely determined by the value of \(f\) at \(m\), so it is simply determined by a number \(f|_{m}\in\mathbb{K}\). Therefore, the 0-jet space (4) at \(m\) is one-dimensional and isomorphic to \(\mathbb{K}\). The field \(\mathbb{K}\) is a vector space of dimension one over itself. Therefore the contact ideal \(\mathcal{I}^{0}(m)\) is of codimension one inside \(\mathcal{C}^{\infty}(M)\) since the quotient (4) is of dimension one. Moreover, a standard theorem of abstract algebra [11, Chap.1] states that a proper ideal of a commutative algebra is maximal iff the quotient of the commutative algebra by this ideal is isomorphic to a field. Therefore, the isomorphism \(J^{0}_{m}M\cong\mathbb{K}\), between the quotient of the commutative algebra \(C^{\infty}(M)\) by the contact ideal \(\mathcal{I}^{0}(m)\) and the field \(\mathbb{K}\), shows that the contact ideal \(\mathcal{I}^{0}(m)\) is of codimension one and maximal in \(C^{\infty}(M)\). Retrospectively, the **point**\(m\in M\) can be identified with the maximal (or, equivalentely, codimension-one) ideal \(\mathcal{I}^{0}(m)\) of the commutative algebra \(C^{\infty}(M)\) of functions on the manifold. The collection of all maximal ideals of a commutative algebra \(\mathcal{A}\) is called the **spectrum of maximal ideals** of this commutative algebra and will be denoted \(\mathfrak{m}(\mathcal{A})\). 
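As a simple one-dimensional illustration of these definitions, take \(M=\mathbb{R}\) with global coordinate \(x\) and the point \(m=a\in\mathbb{R}\). By Hadamard's lemma, any smooth function vanishing at \(a\) factorises as \[f(x)=(x-a)\,g(x)\,,\qquad g(x)=\int_{0}^{1}f'\big{(}\,a+t\,(x-a)\,\big{)}\,dt\,,\] with \(g\) smooth, so the contact ideal of order zero at \(a\) is the principal ideal \(\mathcal{I}^{0}(a)=(x-a)\cdot C^{\infty}(\mathbb{R})\) generated by the function \(x-a\). The induced isomorphism \(C^{\infty}(\mathbb{R})/\mathcal{I}^{0}(a)\stackrel{{\sim}}{{\to}}\mathbb{R}\) is simply the evaluation \(\llbracket f\rrbracket\mapsto f|_{a}\), in agreement with the fact that the zeroth-order jet space at a point is one-dimensional. 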
For a compact manifold, the spectrum \(\mathfrak{m}\big{(}\,C^{\infty}(M)\,\big{)}\cong M\) of maximal ideals of the commutative algebra of smooth functions on \(M\) provides an algebraic reconstruction of the manifold \(M\).5 Footnote 5: For non-compact manifolds, extra “points at infinity” actually arise as maximal ideals, _cf._ the “ghosts” discussed in [9, Chap.8]. **Example 1 (Polynomials):** Consider the vector space \(\mathbb{K}^{n}\) with Cartesian coordinates \((y^{1},\cdots,y^{n})\) and the commutative subalgebra \(\mathbb{K}[y^{1},\cdots,y^{n}]\subset C^{\infty}(\mathbb{K}^{n})\) of _polynomial_ functions on \(\mathbb{K}^{n}\) where \(\mathbb{K}\) is an algebraically closed field. By a theorem of Hilbert (often called the "weak form of Nullstellensatz", see _e.g._[11, Ex.17,Chap.5]), the maximal ideals \(\mathcal{I}^{0}(x^{1},\cdots,x^{n})\subset\mathbb{K}[y^{1},\cdots,y^{n}]\) are the zeroth-order contact ideals spanned by polynomials vanishing at \((y^{1},\cdots,y^{n})=(x^{1},\cdots,x^{n})\), _i.e._ \[\mathcal{I}^{0}(x^{1},\cdots,x^{n})\,:=\,\left\{\,P(y^{1},\cdots,y^{n})=\sum_{a=1}^{n}(y^{a}-x^{a})\,P_{a}(y^{1},\cdots,y^{n})\,\right\}, \tag{5}\] where \(P_{a}\) (\(a=1,\cdots,n\)) are polynomials. The spectrum \(\mathfrak{m}(\,\mathbb{K}[y^{1},\cdots,y^{n}]\,)\) of maximal ideals is thus isomorphic to \(\mathbb{K}^{n}\), which should be seen as an affine space (indeed the origin is not singled out any more in this construction). In a coordinate-free and basis-independent way, one would consider a vector space \(V\) of finite dimension \(n\in\mathbb{N}\) and the symmetric algebra \(\odot(V^{*})\) over its dual (_i.e._ the commutative algebra of totally symmetric multilinear forms on \(V\)). The spectrum of its maximal ideals can be taken as a (rather abstract) definition of the _affine_ space \(A\) modeled on the _vector_ space \(V\), as would be done in algebraic geometry. The symmetric algebra \(\odot(V^{*})\) is then interpreted as the commutative algebra of polynomial functions on the affine space \(A\). One recovers the previous construction by choosing a basis \(\{\mathtt{e}_{a}\}\) of \(V\) and denoting the components of vectors in this basis as the Cartesian coordinates \(y^{a}\) (_i.e._\(y=y^{a}\mathtt{e}_{a}\in V\)). Then \(V\) identifies with \(\mathbb{K}^{n}\) and the symmetric algebra \(\odot(V^{*})\) of symmetric multilinear forms \(\alpha\) over \(V^{*}\), \[\alpha(y)=\sum_{r\geqslant 0}T_{a_{1}\cdots a_{r}}\mathtt{e}^{*a_{1}}\odot\cdots\odot\mathtt{e}^{*a_{r}}\,, \tag{6}\] identifies with the commutative algebra \(\mathbb{K}[y^{1},\cdots,y^{n}]\) of polynomials \[P(y)=\sum_{r\geqslant 0}T_{a_{1}\cdots a_{r}}\,y^{a_{1}}\cdots y^{a_{r}}\,. \tag{7}\] From now on, one will treat explicitly the case \(\mathbb{K}=\mathbb{R}\) (although many statements hold for \(\mathbb{K}=\mathbb{C}\) as well). A commutative algebra over \(\mathbb{R}\) with a unique maximal ideal will be called a **local algebra**. According to the algebraic geometry viewpoint, it corresponds to an algebra of real functions on a space made of a single point. Although this case may look somewhat exotic, later on it will appear repeatedly as fibres in jet bundles. A paradigmatic example of local algebra is the commutative algebra of formal power series. **Example 2 (Formal power series):** Consider the commutative algebra \(\mathbb{R}[\![\varepsilon^{1},\cdots,\varepsilon^{n}]\!]\) of formal power series in \(n\) variables with real coefficients. 
In contrast with the algebra \(\mathbb{R}[\varepsilon^{1},\cdots,\varepsilon^{n}]\) of polynomials, the algebra \(\mathbb{R}[\![\varepsilon^{1},\cdots,\varepsilon^{n}]\!]\) of formal power series is a local algebra: its unique maximal ideal is the subalgebra \(\mathfrak{m}(\,\mathbb{R}[\![\varepsilon^{1},\cdots,\varepsilon^{n}]\!]\,)\) of formal power series with vanishing constant term (_i.e._ with terms which are at least linear). This fact may look surprising at first sight, but the analogues of the zeroth-order contact ideals \(\mathcal{I}^{0}(x^{1},\cdots,x^{n})\) with \((x^{1},\cdots,x^{n})\neq(0,\cdots,0)\) are _not6_ proper ideals of \(\mathbb{R}[\![\varepsilon^{1},\cdots,\varepsilon^{n}]\!]\). This is related to the fact that all formal power series with non-vanishing constant term are invertible. The algebra of formal power series can be defined in a coordinate-free way by considering a vector space \(V\) of finite dimension \(n\in\mathbb{N}\) and the commutative algebra of Taylor series at the origin of smooth functions on \(V\). This commutative algebra will be denoted either \(\overline{\odot}(V^{*})\) (to stress that it is a completion of the symmetric algebra \(\odot(V^{*})\,\)) or \(J_{0}^{\infty}V\) (for consistency with notations to be introduced later). Two remarks are in order. The linear dual of the vector space \(\odot(V)\) is isomorphic to the vector space mentioned above Footnote 6: For instance, consider the case \(n=1\) of formal power series in the single variable \(\varepsilon\). The space \(\mathcal{I}^{0}(1)\) of formal power series of the form \(f(\varepsilon)=(\varepsilon-1)g(\varepsilon)\) with \(g(\varepsilon)\in\mathbb{R}[\![\varepsilon]\!]\) (_i.e._\(g(\varepsilon)=\sum_{r=0}^{\infty}g_{r}\,\varepsilon^{r}\)) is not a proper ideal in \(\mathbb{R}[\![\varepsilon]\!]\). In fact, the formal power series \(g(\varepsilon)=(1-\varepsilon)^{-1}=\sum_{r=0}^{\infty}\varepsilon^{r}\in\mathbb{R}[\![\varepsilon]\!]\) is such that \((\varepsilon-1)g(\varepsilon)=-1\), hence \(\mathcal{I}^{0}(1)=\mathbb{R}[\![\varepsilon]\!]\). \[\overline{\odot}(V^{*})\,\cong\,\big{(}\,\odot\,(V)\,\big{)}^{*}\,. \tag{8}\] The commutative algebra \(\overline{\odot}(V^{*})=J_{0}^{\infty}V\) is local, hence it must be interpreted as an algebra of functions over a single point: the origin of \(V\). Again, one recovers the previous construction by choosing a basis \(\{\mathtt{e}_{a}\}\) of \(V\) and denoting the components of vectors in this basis as Cartesian coordinates \(y^{a}\). Then the commutative algebra \(\overline{\odot}(V^{*})\) is spanned by linear forms over \(\odot(V)\) taking the form (6) where infinite sums are allowed and identifies with the commutative algebra \(\mathbb{K}[\![y^{1},\cdots,y^{n}]\!]\) of formal power series taking the form (7) where infinite sums are allowed. ### Zeroth-order jet bundle The **zeroth-order jet bundle**\(J^{0}M=\bigcup_{m}J^{0}_{m}M\) is the union of all \(0\)-jet spaces. Let \(x^{\mu}\) be local coordinates on \(M\). Then local coordinates on \(J^{0}M\) are \((x^{\mu},\phi)\). Retrospectively, one may define the algebra of smooth functions on \(M\) as the space \(\Gamma(J^{0}M)\cong C^{\infty}(M)\) of global sections of the \(0\)-jet bundle \(J^{0}M\). Indeed, a function \(\phi\) can be identified with a global section of the bundle \(J^{0}M\) since such a section is precisely specified by the values \(\phi(x)\) at each point. The zeroth-order jet bundle \(J^{0}M\) defined above is actually a trivial line bundle over \(M\): \[J^{0}M\,\cong\,M\times\mathbb{R}\,. 
\tag{9}\] Geometrically, the image \(\sigma(M)\subset J^{0}M\) of a global section \(\sigma\in\Gamma(J^{0}M)\) of the zeroth-order jet bundle is nothing but the graph of the corresponding function \(\phi\in C^{\infty}(M)\). ### Multiplicative functionals: Again, what's the point? In some sense, the dual notion of a maximal ideal \(\mathcal{I}\in\mathfrak{m}(\mathcal{A})\) of a commutative algebra \(\mathcal{A}\) with unit7 over a field \(\mathbb{K}\) is a **multiplicative linear form**, _i.e._ a morphism \(\alpha:\mathcal{A}\to\mathbb{K}\) of commutative algebras, that is to say a form on \(\mathcal{A}\) which (i) is linear and (ii) preserves the product, Footnote 7: All associative algebras will be assumed to possess a unit element in this paper. \[\alpha(\lambda_{1}\phi_{1}+\lambda_{2}\phi_{2})=\lambda_{1}\alpha(\phi_{1})+\lambda_{2}\alpha(\phi_{2})\,,\qquad\alpha(\phi_{1}\cdot\phi_{2})=\alpha(\phi_{1})\alpha(\phi_{2}) \tag{10}\] for any two elements \(\phi_{1}\) and \(\phi_{2}\) of \(\mathcal{A}\) and any two scalars \(\lambda_{1}\) and \(\lambda_{2}\) of \(\mathbb{K}\). To be more precise, the kernel of such a multiplicative linear form is a maximal ideal of codimension one. Moreover, the kernel uniquely determines the multiplicative linear form. These facts can be seen as follows. **Proof:** Firstly, the kernel of an algebra morphism is an ideal. In particular, the kernel of a multiplicative linear form is indeed an ideal. Secondly, consider a nontrivial multiplicative linear form \(\alpha\in\operatorname{Hom}(\mathcal{A},\mathbb{K})\), in other words a non-zero element \(\alpha\in\mathcal{A}^{*}\) of the linear dual of the algebra \(\mathcal{A}\) which is also a morphism of commutative algebras, _i.e._ it satisfies (10). Since \(\alpha\) is nontrivial by assumption, it is clearly surjective, \(\operatorname{Im}\alpha=\mathbb{K}\). Therefore, its kernel \(\operatorname{Ker}\alpha\subset\mathcal{A}\) is an ideal of codimension one (and hence maximal) since \(\mathcal{A}/\operatorname{Ker}\alpha\cong\operatorname{Im}\alpha=\mathbb{K}\). Any codimension-one ideal is necessarily a maximal proper ideal. Thirdly, let \(1_{\mathcal{A}}\in\mathcal{A}\) denote the unit element. Since \(\mathbb{K}\) is a field, the algebra morphism property of \(\alpha\) implies that \(\alpha(1_{\mathcal{A}})=1\) (or \(\alpha=0\)). Any element \(\phi\in\mathcal{A}\) decomposes as a sum \(\phi=\lambda\,1_{\mathcal{A}}\,+\,v\) where \(\lambda\in\mathbb{K}\) and \(v\in\mathrm{Ker}\,\alpha\). Finally, one has \(\alpha(\phi)=\lambda\), so the multiplicative linear form \(\alpha\) is entirely determined by its kernel \(\mathrm{Ker}\,\alpha\). When \(\mathcal{A}\) can be interpreted "geometrically" as an algebra of \(\mathbb{K}\)-valued functions on a "manifold" whose "points" are the multiplicative linear forms on \(\mathcal{A}\), then one can interpret the latter as the functional "evaluation at the corresponding point".8 Footnote 8: See [9, Chap.3] for a criterion on commutative algebras over \(\mathbb{K}=\mathbb{R}\) to be geometric. ### Zeroth-order cojet bundle A linear form \(\alpha:C^{\infty}(M)\to\mathbb{K}\) on the structure algebra \(C^{\infty}(M)\) whose kernel is the contact ideal \(\mathcal{I}^{0}(m)\) at the point \(m\in M\) will be called a **zeroth-order cojet** (or \(0\)**-cojet** for short) at \(m\in M\). The one-dimensional space of zeroth-order cojets at \(m\) will be denoted \(D^{0}_{m}M\). The **zeroth-order cojet bundle**\(D^{0}M=\bigcup_{m}D^{0}_{m}M\) is the union of all \(0\)-cojet spaces. 
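To illustrate the distinction between generic linear forms and zeroth-order cojets, take \(M=\mathbb{R}\) and consider the three linear forms \[\alpha_{1}(f)=f|_{0}\,,\qquad\alpha_{2}(f)=(\partial_{x}f)|_{0}\,,\qquad\alpha_{3}(f)=f|_{0}+f|_{1}\,,\] on \(C^{\infty}(\mathbb{R})\). Only \(\alpha_{1}\), the evaluation at the origin, is multiplicative: for instance \(\alpha_{2}(x\cdot x)=0\neq 1=\alpha_{2}(x)\,\alpha_{2}(x)\) and \(\alpha_{3}(1)=2\neq 1\). Accordingly, only \(\alpha_{1}\) admits the contact ideal \(\mathcal{I}^{0}(0)\) as its kernel and thus qualifies as a zeroth-order cojet at the origin, while \(\alpha_{2}\) will reappear below as a tangent vector, _i.e._ as a first-order object. 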
By definition, the zeroth-order cojets at \(m\) are linear functionals (on the space of smooth functions) that vanish on the contact ideal \(\mathcal{I}^{0}(m)\), so one can see them as generalised functions (aka distributions) on \(M\) with support at \(m\). More explicitly, let \[\delta_{m}\,:\,C^{\infty}(M)\to\mathbb{R}\,:\,f\mapsto f|_{m} \tag{11}\] denote the functional "**evaluation at the point**" \(m\), _i.e._ the Dirac distribution at \(m\) defined by \(\langle\,\delta_{m}\,,\,f\,\rangle\,=\,f|_{m}\) for any \(f\in C^{\infty}(M)\). This functional is multiplicative, _i.e._ it is a morphism of commutative algebras. Indeed, the definition (2) of the pointwise product of two functions \(f\) and \(g\) is precisely such that the evaluation at each point \(m\) is an algebra morphism sending the product of functions to the product of real numbers. Moreover, the kernel of \(\delta_{m}\) is obviously the maximal ideal \(\mathcal{I}^{0}(m)\). The space \(D^{0}_{m}M\) of \(0\)-cojets at \(m\) is isomorphic to the dual of the space \[J^{0}_{m}M=C^{\infty}(M)/\mathcal{I}^{0}(m)\cong\mathbb{R} \tag{12}\] of \(0\)-jets at \(m\), that is to say \[D^{0}_{m}M\cong(J^{0}_{m}M)^{*}\cong\mathbb{R}\,. \tag{13}\] In fact, the space \(D^{0}_{m}M\) can be thought of as the one-dimensional space spanned by the evaluation functional \(\delta_{m}\). In this vector space, the Dirac distribution \(\delta_{m}\) is singled out as the only multiplicative linear form among them (because there is a unique linear form \(\alpha:J^{0}_{m}M\to\mathbb{R}\) such that \(\alpha(1)=1\)). The set \(\mathrm{Hom}\big{(}\,C^{\infty}(M)\,,\,\mathbb{R}\,\big{)}\) of multiplicative linear forms on the commutative algebra of smooth functions on \(M\) provides an algebraic reconstruction of the manifold \(M\). In fact, there is a bijection \[\delta_{\bullet}\,:\,M\stackrel{{\sim}}{{\to}}\mathrm{Hom}\big{(}\,C ^{\infty}(M)\,,\,\mathbb{R}\,\big{)}\,:\,m\mapsto\delta_{m} \tag{14}\] between the set of multiplicative linear forms on the structure algebra \(C^{\infty}(M)\) and the set of points of a smooth manifold \(M\). This fact is often called the "Milnor exercise" [12, Problem 1-C] (for a short proof, see _e.g._[13, Corollary 35.9]). **Algebraic vs geometric formulation of points** For compact manifolds, the following notions are equivalent: \(\bullet\) a point \(m\) of a smooth manifold \(M\), \(\bullet\) a multiplicative functional on the commutative algebra \(C^{\infty}(M)\) whose kernel is the contact ideal \(\mathcal{I}^{0}(m)\) of order zero at \(m\): the evaluation functional \(\delta_{m}\in D^{0}_{m}M\) \(\bullet\) a maximal ideal of the commutative algebra \(C^{\infty}(M)\): the contact ideal \(\mathcal{I}^{0}(m)\) of order zero at \(m\). For smooth non-compact manifolds, only the first two notions are equivalent. ### Pullback of functions Let \(F:M\to N\) be a map from the manifold \(M\) (source) to the manifold \(N\) (target). The **pullback of a function**\(g\in C^{\infty}(N)\) on the target by the smooth map \(F:M\to N\) is the function on the source defined as the precomposition of \(g\) by \(F\) \[F^{*}g:=g\circ F\in C^{\infty}(M)\,. \tag{15}\] The **pullback by**\(F\) is the following map between the corresponding structure algebras \[F^{*}\,:\,C^{\infty}(N)\to C^{\infty}(M)\,:\,g\mapsto g\circ F\,. 
\tag{16}\] It is the dual of the map \(F:M\to N\) in the sense that it is an algebra homomorphism between the structure algebras of the corresponding manifolds with the direction of the arrow in \(F^{*}\) reversed with respect to the arrow in \(F\): it is a map from the structure algebra \(C^{\infty}(N)\) of the target manifold to the structure algebra \(C^{\infty}(M)\) of the source manifold. The pullback operation itself, \[{}^{*}\,:\,\mathrm{Hom}_{\mathrm{Smooth\ Man}}(M,N)\to\mathrm{Hom}_{\mathrm{Comm.Alg}}\big{(}\,\mathcal{C}^{\infty}(N)\,,\,\mathcal{C}^{\infty}(M)\,\big{)}\,:\,F\mapsto F^{*}\,, \tag{17}\] is an antihomomorphism (_i.e._ it reverses the order of multiplication) from the associative algebra of maps between smooth manifolds to the associative algebra of morphisms between their structure algebras, \((F\circ G)^{*}=G^{*}\circ F^{*}\). If \(F\) is bijective (_e.g._ a diffeomorphism) then \(F^{*}\) also is. More precisely, \((F^{-1})^{*}=(F^{*})^{-1}\) as can be checked from the previous property. More generally, if \(F\) is surjective (_e.g._ a fibration) then \(F^{*}\) is injective. Along this line, an algebraic definition of a diffeomorphism of a smooth manifold \(M\) (or between two manifolds) is as an automorphism of its structure algebra \(C^{\infty}(M)\) (respectively, an isomorphism between their structure algebras). This algebraic definition is justified by the following theorem, sometimes called "Milnor exercise" with a slight abuse of terminology.9 Footnote 9: See _e.g._[14] for a version of this theorem with extremely mild assumptions. **Theorem (Grabowski) :** _A map \(\Phi:C^{\infty}(N)\stackrel{{\sim}}{{\to}}C^{\infty}(M)\) between two structure algebras is an isomorphism of commutative algebras iff it is the pullback of a diffeomorphism \(F:M\stackrel{{\sim}}{{\to}}N\) between these two manifolds, i.e. \(\Phi=F^{*}\)._ Therefore, the group of geometric diffeomorphisms of the manifold \(M\) and the group of automorphisms of its structure algebra \(C^{\infty}(M)\) are isomorphic, \(Diff(M)\cong Aut(\,C^{\infty}(M)\,)\). The above theorem admits a generalisation where the algebra isomorphism (respectively, diffeomorphism) is replaced with an algebra homomorphism (respectively, smooth map), _cf._[13, Corollary 35.10], for which the original Milnor exercise [12, Problem 1-C] corresponds to the case when \(M\) is a single point and \({\cal C}^{\infty}(M)=\mathbb{R}\). Differential structures of order one: tangent (co)vectors As we saw in the previous section, diffeomorphisms of a smooth manifold \(M\) can be seen algebraically as automorphisms of the structure algebra \(C^{\infty}(M)\). The infinitesimal automorphisms of a commutative algebra are its derivations. Vector fields are the infinitesimal generators of flows on a manifold, _i.e._ one-parameter groups of diffeomorphisms. Therefore, it is natural to consider the algebraic definition of vector fields on a manifold as derivations of the commutative algebra of functions. ### Tangent vector fields as derivations #### 4.1.1 Derivations Let \(V\) be a vector space. A linear map \(D:V\to V\) is sometimes called an **endomorphism** of the vector space \(V\). The space \(\mbox{End}(V)\) of endomorphisms of \(V\) is an associative algebra for the composition \(\circ\,\). 
Any associative algebra \({\cal A}\) with product \(\star\) can be endowed with a Lie algebra structure via the commutator \([\,\stackrel{{\star}}{{,}}\,]\) as Lie bracket, in which case it will be called the **commutator algebra** and denoted \({\mathfrak{A}}\) when it is necessary to distinguish it from its associative counterpart. In particular, the general linear algebra \({\mathfrak{gl}}(V):={\mathfrak{End}}(V)\) is the space \(\mbox{End}(V)\) of endomorphisms endowed with a structure of Lie algebra via the commutator \([\,\stackrel{{\circ}}{{,}}\,]\). A **derivation** of an algebra \({\cal A}\) is a linear map \(D:{\cal A}\to{\cal A}\) obeying the Leibniz rule: \[D(a\star b)\,=\,D(a)\star b\,+\,a\star D(b)\,,\qquad\forall a,b\in{\cal A}\,. \tag{18}\] The subspace \(\mathfrak{der}({\cal A})\subset{\mathfrak{gl}}({\cal A})\) of derivations is a Lie subalgebra of the general linear algebra of endomorphisms of the vector space \({\cal A}\). All **representations of a Lie algebra \({\mathfrak{g}}\) on an algebra \({\cal A}\)** will be assumed to be morphisms of Lie algebras from \({\mathfrak{g}}\) to \(\mathfrak{der}({\cal A})\). In particular, the **adjoint representation** of the commutator algebra of an associative algebra \({\cal A}\) is the morphism of Lie algebras \[ad\,:\,{\mathfrak{A}}\to\mathfrak{der}({\cal A})\,:\,a\mapsto ad_{a} \tag{19}\] where \[ad_{a}(b)\,:=\,[\,a\,\stackrel{{\star}}{{,}}\,b\,]=a\star b-b\star a\,, \tag{20}\] is a morphism of Lie algebras from the commutator algebra \({\mathfrak{A}}\) to the Lie algebra \(\mathfrak{der}({\cal A})\) of derivations, _i.e._ \[[\,ad_{a_{1}}\,\stackrel{{\circ}}{{,}}\,ad_{a_{2}}\,]=ad_{[\,a_{1}\,\stackrel{{\star}}{{,}}\,a_{2}\,]}\,. \tag{21}\] The image \[\mathsf{inn}(\mathcal{A}):=ad_{\mathfrak{A}}\subset\mathfrak{der}(\mathcal{A}) \tag{22}\] of the adjoint representation (19) is called the **Lie subalgebra of inner derivations**. One has the chain of inclusions: \[\mathsf{inn}(\mathcal{A})\subset\mathfrak{der}(\mathcal{A})\subset\mathfrak{gl}(\mathcal{A})\,. \tag{23}\] To be more explicit, an inner derivation is a derivation \(D\in\mathfrak{der}(\mathcal{A})\) of the form \(D=ad_{a}\) for some element \(a\in\mathcal{A}\). All other derivations, _i.e._\(D\in\mathfrak{der}(\mathcal{A})\) but \(D\notin ad_{\mathcal{A}}\), are called **outer derivations**. All nontrivial derivations of a commutative algebra \(\mathcal{A}\) are outer. They can in fact be interpreted as vector fields on the spectrum \(\mathfrak{m}(\mathcal{A})\) of maximal ideals. **Example (polynomial vs formal vector fields) :** Consider the affine (vs vector) space \(A\) (respectively, \(V\)) isomorphic to \(\mathbb{R}^{n}\) with the Cartesian coordinates \((y^{1},\cdots,y^{n})\). The Lie algebra of derivations of the commutative algebra \(\mathcal{A}=\mathbb{R}[y^{1},\cdots,y^{n}]\) (vs \(\mathcal{A}=\mathbb{R}[\![y^{1},\cdots,y^{n}]\!]\) ) of polynomial (vs formal) functions on this affine space (respectively, at the origin of this vector space) is called the **Lie algebra of polynomial (vs formal) vector fields**. 
Its elements take the form \(\hat{X}=X^{a}(y)\frac{\partial}{\partial y^{a}}\) with components \(X^{a}(y)\) which are polynomial (vs formal power series), _i.e._\(\mathfrak{der}(\mathcal{A})\,\cong\,\mathcal{A}\otimes V\) as vector spaces, where \(V\) is the vector space on which the affine space \(A\) is modeled. #### 4.1.2 Vector fields on a manifold In purely algebraic and coordinate-free terms, a (tangent) **vector field**\(\hat{X}\) on \(M\) can be defined as a derivation of the commutative algebra \(C^{\infty}(M)\) of functions on \(M\).10 In more concrete terms, a vector field is a linear map Footnote 10: Vector fields are hatted in order to emphasise their algebraic definition as differential operators and anticipate their higher-order generalisation. \[\hat{X}\,:\,C^{\infty}(M)\to C^{\infty}(M)\,:\,f\mapsto\hat{X}[f] \tag{24}\] where \(\hat{X}[f]\) denotes the action11 of the vector field \(\hat{X}\) on the function \(f\), which is such that Footnote 11: This action will sometimes also be denoted by \(\hat{X}f\) for short. \[\hat{X}[f\cdot g]=\hat{X}[f]\cdot g+f\cdot\hat{X}[g]\,,\quad\forall f,g\in C^{\infty}(M)\,. \tag{25}\] Alternatively, it will also be identified with the **Lie derivative of a function \(f\) along the vector field**\(\hat{X}\), in which case it is denoted \[\mathcal{L}_{\hat{X}}f\,:=\,\hat{X}[f]\,. \tag{26}\] In local coordinates, this reads: \[\big{(}\,\hat{X}[f]\,\big{)}(x)\,=\,X^{\mu}(x)\,\partial_{\mu}f(x)\,. \tag{27}\] Accordingly, the local expression of vector fields is \(\hat{X}\,=\,X^{\mu}(x)\,\partial_{\mu}\) where the collection of vector fields \(e_{\mu}:=\partial_{\mu}\) will be called the **vector fields of a coordinate basis**. The vector space spanned by vector fields is usually denoted \(\mathfrak{X}(M)\), but it will often be denoted in this text by \(\mathcal{T}(M)\) for consistency with other choices of notation. Nevertheless, when the space \(\mathcal{T}(M)\) of vector fields is endowed with a structure of Lie algebra via the commutator bracket, it will be denoted \(\mathfrak{X}(M)=\mathfrak{der}\big{(}\,C^{\infty}(M)\,\big{)}\) to emphasise its Lie algebra structure. In fact, when vector fields are seen as derivations, the **Lie bracket between two vector fields**\(\hat{X}\) and \(\hat{Y}\) is simply a fancy name for their commutator \([\hat{X}\,\stackrel{{\circ}}{{,}}\,\hat{Y}]\) with respect to the composition product \(\circ\). In this context, it will be denoted \([\hat{X}\,,\hat{Y}]\) for simplicity. #### 4.1.3 Flow of a vector field A vector field \(\hat{X}\) on the manifold \(M\) is said to be complete if it generates an action of the additive group \(\mathbb{R}\) on the structure algebra \(C^{\infty}(M)\), _i.e._ a group morphism \[\exp(\bullet\,\mathcal{L}_{\hat{X}})\,:\,\mathbb{R}\to Aut(\,C^{\infty}(M)\,)\,:\,t\mapsto\exp(\,t\,\mathcal{L}_{\hat{X}}\,)\,, \tag{28}\] where \(\exp(\,\mathbb{R}\,\mathcal{L}_{\hat{X}})\subset Aut(\,\mathcal{C}^{\infty}(M)\,)\) denotes a one-parameter group of automorphisms \(\exp(\,t\,\mathcal{L}_{\hat{X}})\in Aut(\,\mathcal{C}^{\infty}(M)\,)\) of the structure algebra \(\mathcal{C}^{\infty}(M)\), \[\exp(\,t\,\mathcal{L}_{\hat{X}}\,)\,:\,C^{\infty}(M)\stackrel{{\sim}}{{\to}}C^{\infty}(M)\,:\,f\mapsto f_{t}=\exp(\,t\,\mathcal{L}_{\hat{X}}\,)\,[f]\,. 
\tag{29}\] Geometrically, these automorphisms (29) of the structure algebra \(C^{\infty}(M)\) correspond to diffeomorphisms of the manifold \(M\), \[\exp(\,t\,\hat{X})\,:\,M\stackrel{{\sim}}{{\to}}M\,:\,x^{\mu} \mapsto x_{t}^{\prime\mu}(x)=\exp(\,t\,\mathcal{L}_{\hat{X}}\,)\,[x^{\mu}]\,\,, \tag{30}\] where we identified a point with its coordinates in a patch in order to relate explicitly the two viewpoints (29)-(30). Any complete vector field \(\hat{X}\) defines an action of the additive group \(\mathbb{R}\) on the manifold \(M\), _i.e._ a group morphism \[\exp(\bullet\hat{X})\,:\,\mathbb{R}\to Diff(M)\,:\,t\mapsto\exp(\,t\hat{X})\,, \tag{31}\] called the **(global) flow on the manifold \(M\) generated by the (complete) vector field \(\hat{X}\)**.12 Explicitly, the relation between these two equivalent views (algebraic vs geometric) of diffeomorphisms is the following relation between the change of functions versus the change of coordinates: \(f_{t}(x^{\mu})=f(x^{\mu}_{t})\). Footnote 12: For technical details and proofs related to geometric flows, see _e.g._[15]. **Remark 1:** Note that, the Lie derivative has been removed in the notation (30) with respect to (29), in order to distinguish between these two (equivalent) formulations: the (algebraic) formulation as automorphisms \(\exp(\,t\,{\cal L}_{\hat{X}})\in Aut(\,C^{\infty}(M)\,)\) of the structure algebra and the (geometric) formulation as diffeomorphisms \(\exp(\,t\,\hat{X})\in Diff(M)\) of the manifold. However, one will often be sloppy in the sequel and use the same notation for these equivalent notions. **Remark 2:** If the vector field \(\hat{X}\) on \(M\) is incomplete, then the one-parameter group of automorphisms that it generates is not defined for some values of the parameter \(t\in\mathbb{R}\). In practice, this means that the corresponding trajectories go to infinity for some finite value of the parameter. Nevertheless, any vector field \(\hat{X}\in\mathfrak{X}(M)\) is integrable to a local flow \(\exp(\bullet\hat{X})\,:\,I\to Diff(N)\) defined for an open subset \(I\subseteq\mathbb{R}\) (_e.g._ an open interval \(I=]a,b[\,)\) and a submanifold \(N\subseteq M\). In the sequel, the adjective "complete" will often be dropped since local considerations are implicitly understood. #### 4.1.4 Lie derivative of vector fields via infinitesimal flow and via adjoint action The algebraic point of view suggests a natural definition for the **Lie derivative of a vector field \(\hat{Y}\) along another vector field \(\hat{X}\)** (denoted by \({\cal L}_{\hat{X}}\hat{Y}\)) through the Leibniz rule: \({\cal L}_{\hat{X}}(\hat{Y}f)=({\cal L}_{\hat{X}}\hat{Y})f+\hat{Y}({\cal L}_{ \hat{X}}f)\). This leads to \[({\cal L}_{\hat{X}}\hat{Y})f={\cal L}_{\hat{X}}(\hat{Y}f)-\hat{Y}({\cal L}_{ \hat{X}}f)=\hat{X}(\hat{Y}f)-\hat{Y}(\hat{X}f)=(\hat{X}\circ\hat{Y}-\hat{Y} \circ\hat{X})f \tag{32}\] so that the Lie derivative of a vector field \(\hat{Y}\) along another vector field \(\hat{X}\) is indeed equal to the Lie bracket: \({\cal L}_{\hat{X}}\hat{Y}=[\hat{X},\hat{Y}]\). 
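As an elementary check of this formula, consider on \(M=\mathbb{R}\) the dilation vector field \(\hat{X}=x\,\partial_{x}\) and the translation vector field \(\hat{Y}=\partial_{x}\). Acting on a test function \(f\), one finds \[(\hat{X}\circ\hat{Y})[f]=x\,\partial_{x}^{2}f\,,\qquad(\hat{Y}\circ\hat{X})[f]=\partial_{x}(x\,\partial_{x}f)=\partial_{x}f+x\,\partial_{x}^{2}f\,,\] so that \(\mathcal{L}_{\hat{X}}\hat{Y}=[\hat{X},\hat{Y}]=-\,\partial_{x}=-\hat{Y}\), _i.e._ the translation generator has weight \(-1\) under dilations. Note that each composition such as \(\hat{X}\circ\hat{Y}\), taken separately, is a second-order differential operator: only the commutator is again a vector field, a simple manifestation of the almost-commutative structure advertised earlier. 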
Another perspective on the Lie derivative of a vector field \(\hat{Y}\) along a vector field \(\hat{X}\) is that the flow generated by the vector field \(\hat{X}\) induces a one-parameter group \(Ad_{\exp t\,\hat{X}}\in Aut(\,\mathfrak{X}(M)\,)\) of automorphisms of the Lie algebra \(\mathfrak{X}(M)\) of vector fields: \[Ad_{\exp t\,\hat{X}}\,:\,\mathfrak{X}(M)\stackrel{{\sim}}{{\to}}\mathfrak{X}(M)\,:\,\hat{Y}\mapsto\hat{Y}_{t}^{\hat{X}}=\exp(\,t\hat{X})\circ\hat{Y}\circ\exp(-t\hat{X})\,. \tag{33}\] In fact, the infinitesimal transformation \(\frac{d}{dt}(\hat{Y}_{t}^{\hat{X}})|_{t=0}\) of a vector field \(\hat{Y}\) induced by the flow generated by the vector field \(\hat{X}\) is the geometric definition of the Lie derivative of \(\hat{Y}\) along \(\hat{X}\). It leads to \({\cal L}_{\hat{X}}\hat{Y}=\frac{d}{dt}(\hat{Y}_{t}^{\hat{X}})|_{t=0}=[\hat{X},\hat{Y}]\) as it should. The expression for the components of the Lie derivative of a vector field \(\hat{Y}=Y^{\mu}\partial_{\mu}\) along a vector field \(\hat{X}=X^{\nu}\partial_{\nu}\) reads \[(\mathcal{L}_{\hat{X}}\hat{Y})^{\mu}=X^{\nu}\partial_{\nu}Y^{\mu}-Y^{\nu}\partial_{\nu}X^{\mu}\,. \tag{34}\] The Lie derivative itself \[\mathcal{L}\,:\,\mathfrak{X}(M)\stackrel{{\sim}}{{\to}}\mathfrak{inn}\Big{(}\,\mathfrak{X}(M)\,\Big{)}\,:\,\hat{X}\mapsto\mathcal{L}_{\hat{X}} \tag{35}\] is nothing but the adjoint representation of the Lie algebra \(\mathfrak{X}(M)\) of vector fields on itself. In particular, it is indeed a Lie algebra morphism in the sense that \[[\mathcal{L}_{\hat{X}}\,\stackrel{{\circ}}{{,}}\mathcal{L}_{\hat{Y}}]=\mathcal{L}_{[\hat{X},\hat{Y}]}\,. \tag{36}\] Although this algebraic definition of the Lie derivative \(\mathcal{L}=ad\) on \(\mathfrak{X}(M)\) may look somewhat tautological, let us stress that nevertheless the Lie derivatives \(\mathcal{L}_{\hat{X}}\in\mathfrak{inn}\Big{(}\,\mathfrak{X}(M)\,\Big{)}\) along vector fields \(\hat{X}\in\mathfrak{X}(M)\) are extremely important since they exhaust all infinitesimal symmetries of the Lie algebra of vector fields. More precisely, a classical theorem of Takens [16] asserts that all derivations of the Lie algebra of vector fields are inner derivations: \[\mathfrak{inn}\Big{(}\,\mathfrak{X}(M)\,\Big{)}=\mathfrak{der}\Big{(}\,\mathfrak{X}(M)\,\Big{)}\,. \tag{37}\] As one can see, when vector fields are seen as differential operators, the geometric notions of Lie bracket and Lie derivatives of vector fields are redundant with the commutator bracket and the adjoint representation. However, one knows that Lie derivatives also have a geometrical interpretation as infinitesimal diffeomorphisms pulled back to the point of origin. The conceptual distinction between the Lie bracket and the commutator bracket may be pertinent in ordinary gravity theories such as general relativity due to its clear geometric roots. However, the identification between the Lie bracket and the commutator bracket might suggest a generalisation of the infinitesimal symmetries of ordinary gravity to higher-spin gravity, where one had better leave behind some geometrical prejudices (since it is not obvious what the analogue of the role played by diffeomorphisms in the general covariance principle should be in higher-spin gravity). ### Tangent vectors Abstractly, the **tangent space**\(T_{m}M\) at a point \(m\) of a manifold \(M\) can be defined as the space of equivalence classes of vector fields, where two vector fields \(\hat{X}\) and \(\hat{Y}\) are equivalent if they produce the same result, at the point \(m\), on any given function. 
An element of the tangent space \(T_{m}M\) is called a tangent vector at \(m\). The equivalence class will be denoted \(\hat{X}|_{m}\) and called the value of the vector field at the point \(m\). One has \[\hat{X}|_{m}=\hat{Y}|_{m}\qquad\Longleftrightarrow\qquad\hat{X}[f]|_{m}=\hat{Y}[f]|_{m}\,,\quad\forall f\in C^{\infty}(M)\,. \tag{38}\] Equivalently, a tangent vector \(v\) at \(m\) can be defined as a linear form \(v:C^{\infty}(M)\to\mathbb{R}\) on the structure algebra \(C^{\infty}(M)\) that obeys the Leibniz rule \[v[f\cdot g]\,=\,v[f]\,g|_{m}\,+\,f|_{m}\,v[g]\,,\quad\forall f,g\in C^{\infty}(M)\,. \tag{39}\] The relation between the latter and the former definitions is that such a linear map takes the form \(v=\hat{X}|_{m}=\delta_{m}\circ\hat{X}\) for some vector field \(\hat{X}\in\mathfrak{der}\big{(}\,C^{\infty}(M)\,\big{)}=\mathfrak{X}(M)\), where \(\delta_{m}:C^{\infty}(M)\to\mathbb{R}\) is the evaluation functional at the point \(m\). The equivalence relation in the first definition arises from the fact that the kernel of \(\delta_{m}\) is the zeroth-order contact ideal \(\mathcal{I}^{0}(m)\). Let \(e_{\mu}:=\partial_{\mu}\) be the coordinate basis vector fields in some local coordinates \(x^{\mu}\). Consider the \(n\) linear forms \(e_{\mu}|_{m}\) on \(C^{\infty}(M)\) acting on any function \(f\) as \(e_{\mu}|_{m}f:=(\partial_{\mu}f)|_{m}\). They can be interpreted as the **coordinate basis of the tangent space \(T_{m}M\)** at \(m\). The point \(m\) will sometimes be replaced with its coordinates \(x\). As usual, one may thus define the **tangent bundle**\(TM=\bigcup_{m}T_{m}M\) as the union of all tangent spaces. Local coordinates of this bundle are \((x^{\mu},X^{\nu})\) where \(X^{\nu}\) are the components of a tangent vector (in the coordinate basis located at the point of coordinates \(x^{\mu}\)). A change of coordinates \(x^{\mu}\mapsto x^{\prime\mu}\) on the base \(M\) induces the transformation law \(X^{\nu}\mapsto X^{\prime\nu}=\frac{\partial x^{\prime\nu}}{\partial x^{\mu}}\,X^{\mu}\) in the fibre. One can reinterpret the vector space \(\mathfrak{X}(M)\) of vector fields as the vector space of global sections \(\Gamma(TM)\) of the tangent bundle. A classical theorem of Pursell and Shanks provides a Lie-algebraic analogue of the generalised Milnor exercise of Subsection 3.5, where the commutative algebra of functions is replaced with the Lie algebra of vector fields [17]. **Theorem (Pursell & Shanks) :** _A map \(\Phi:\mathfrak{X}(M)\stackrel{{\sim}}{{\to}}\mathfrak{X}(N)\) between the Lie algebras of vector fields on two manifolds is an isomorphism of Lie algebras iff it is the pushforward of a diffeomorphism \(F:M\stackrel{{\sim}}{{\to}}N\) between these two manifolds, i.e. \(\Phi=F_{*}\)._ This means that smooth manifolds somehow realise the perfect dream of group theorists: they provide objects characterised uniquely by their symmetries. Indeed, two smooth manifolds are isomorphic iff they have the same algebra of infinitesimal symmetries (_i.e._ vector fields). ### Cotangent vectors The dual space \(T_{m}^{*}M\) of linear forms \(\alpha\) on the tangent space \(T_{m}M\) is the **cotangent space** at \(m\). An element of the cotangent space \(T_{m}^{*}M\) is called a **cotangent vector**, or **tangent covector**, or a **one-form**, at the point \(m\). 
The **cotangent bundle** is the manifold \(T^{*}M=\bigcup_{m}T_{m}^{*}M\) and it has local coordinates \((x^{\mu},p_{\nu})\) where a change of coordinates \(x^{\mu}\mapsto x^{\prime\mu}\) on the base \(M\) induces the transformation law \(p_{\mu}\mapsto p_{\mu}^{\prime}=\frac{\partial x^{\nu}}{\partial x^{\prime\mu}}\,p_{\nu}\) in the fibre. The sections of the cotangent bundle are called **differential one-forms** \(\alpha\in\Omega^{1}(M):=\Gamma(T^{*}M)\). Their components take the form \(\alpha_{\mu}(x)\) in the **dual basis to the coordinate basis**, usually written \(dx^{\mu}\), which is a basis of the cotangent space \(T_{m}^{*}M\). This standard notation is motivated by the definition of the **differential of functions** as the linear map \[d\,:\,C^{\infty}(M)\to\Omega^{1}(M)\,:\,f\mapsto df \tag{40}\] where the **differential of the function**\(f\in C^{\infty}(M)\) is the differential one-form \(df\in\Omega^{1}(M)\) defined via its action on any vector field \(\hat{X}\in\mathfrak{X}(M)\), \[\langle\,df\,,\,\hat{X}\,\rangle:=\hat{X}[f]\,. \tag{41}\] In local coordinates, the right-hand-side of (41) is given by (27). Therefore, \(df=dx^{\mu}\partial_{\mu}f\) in the coordinate basis since the left-hand-side of (41) then reads \[\langle\,dx^{\mu}\,\partial_{\mu}f\,,\,X^{\nu}\,\partial_{\nu}\rangle\,=\,\partial_{\mu}f\,X^{\nu}\,\langle\,dx^{\mu}\,,\,\partial_{\nu}\rangle\,=\,X^{\mu}\,\partial_{\mu}f \tag{42}\] by definition of the dual basis, \(\langle\,dx^{\mu}\,,\,\partial_{\nu}\rangle=\delta^{\mu}_{\nu}\,\). ### Tangent tensors We will focus on totally symmetric tensors for later purposes. #### 4.4.1 Symmetric contravariant tensor fields as functions on the cotangent bundle Since the tangent space is a vector space, one may also define in a natural way the bundle of **symmetric contravariant tensors**\(\odot TM=\bigcup_{m}\odot T_{m}M\) together with the space \(\Gamma(\odot TM)\) of **symmetric contravariant tensor fields**. To indicate the rank, one adds an upper index, _e.g._ the bundle of contravariant tensors of rank one is the bundle \(\odot^{1}TM\cong TM\). Local coordinates of the bundle \(\odot^{r}TM\) are \((x^{\mu},T^{\mu_{1}\dots\mu_{r}})\) and the components of a symmetric tensor field of rank \(r\) are written as \(T^{\mu_{1}\dots\mu_{r}}(x)\) in the coordinate basis \(\partial_{\mu_{1}}\odot\dots\odot\partial_{\mu_{r}}\). Therefore, local coordinates of the infinite-dimensional bundle \(\odot TM\) are \[(x^{\mu},T,T^{\mu},\dots,T^{\mu_{1}\dots\mu_{r}},\dots)\,. \tag{43}\] An \(\mathbb{N}\)**-graded** vector space \(V\) is a collection \(\{\,V_{i}\,\}_{i\in\mathbb{N}}\) of vector spaces such that \(V_{i}\cap V_{j}=\{0\}\) for \(i\neq j\) and \(V=\oplus_{i\in\mathbb{N}}V_{i}\). An element of \(V_{i}\) is called a **homogeneous element of grading \(i\)**. An algebra \(\mathcal{A}\) with product \(\star\) is \(\mathbb{N}\)-graded if it is so as a vector space and if its product \(\star\) obeys \(\mathcal{A}_{i}\star\mathcal{A}_{j}\subseteq\mathcal{A}_{i+j}\). Moreover, if the algebra possesses a unit element \(1\in\mathcal{A}\), then one further requires that \(1\in\mathcal{A}_{0}\). The space \(\Gamma(\odot TM)\) of symmetric contravariant tensor fields endowed with the symmetric product (inherited from the fibres) is a commutative \(\mathbb{N}\)-graded algebra (with the rank of tensors as grading). 
A function on the cotangent bundle which is a homogeneous polynomial of degree \(r\) in the fibre will be called a **symbol of degree \(r\) on the manifold \(M\)**. The commutative algebra of symbols will be denoted by \(\mathcal{S}(M)\subset C^{\infty}(T^{*}M)\). It is \(\mathbb{N}\)-graded by the polynomial degree in the Cartesian coordinates on the cotangent spaces, which will be called the **polynomial degree in the momenta**. The symmetric contravariant tensor fields of rank \(r\) can be identified with symbols of degree \(r\). This is clear in coordinates via the identification of the coordinate basis \(\partial_{\mu}\) with the fibre coordinates \(p_{\mu}\) and of the symmetric product \(\odot\) with the obvious product of the variables \(p_{\mu}\), thus one has the isomorphism \(\Gamma(\odot TM)\cong\mathcal{S}(M)\) of \(\mathbb{N}\)-graded commutative algebras. Indeed, an element \(T=\sum\frac{1}{r!}\,T^{\mu_{1}\dots\mu_{r}}(x)\,\partial_{\mu_{1}}\odot\dots\odot\partial_{\mu_{r}}\) of \(\odot\mathcal{T}(M)\) can be mapped to a function \(T(x,p)=\sum\frac{1}{r!}\,T^{\mu_{1}\dots\mu_{r}}(x)\,p_{\mu_{1}}\cdots p_{\mu_{r}}\) on \(T^{*}M\) which is a smooth function in \(x\) and a polynomial in \(p\). The symmetric product \(\odot\) in \(\Gamma(\odot TM)\) given by \[(T_{1}\,\odot\,T_{2})^{\mu_{1}\dots\mu_{r_{1}+r_{2}}}(x)\,=\,\frac{(r_{1}+r_{2})!}{r_{1}!\,r_{2}!}\,T_{1}^{(\mu_{1}\dots\mu_{r_{1}}}(x)\,T_{2}^{\,\mu_{r_{1}+1}\dots\mu_{r_{1}+r_{2}})}(x) \tag{44}\] is mapped in \(\mathcal{S}(M)\) to the pointwise product of functions on the cotangent bundle. Throughout these notes, curved brackets over a set of indices denote complete symmetrisation over all these indices, with weight one, _i.e._\(T^{(\mu_{1}\dots\mu_{r})}=T^{\mu_{1}\dots\mu_{r}}\). Via the Leibniz rule, one can deduce the Lie derivative of the symmetric contravariant tensor field \(T=T^{\mu_{1}\dots\mu_{r}}\partial_{\mu_{1}}\odot\dots\odot\partial_{\mu_{r}}\) along the vector field \(\hat{X}=X^{\nu}\partial_{\nu}\) from the expression of the Lie derivative of vector fields \(\hat{Y}=Y^{\nu}\partial_{\nu}\). In components, it takes the form \[(\mathcal{L}_{\hat{X}}T)^{\mu_{1}\dots\mu_{r}} = X^{\nu}\partial_{\nu}T^{\mu_{1}\dots\mu_{r}}-\,\sum_{k=1}^{r}\partial_{\nu}X^{\mu_{k}}T^{\mu_{1}\dots\mu_{k-1}\nu\mu_{k+1}\dots\mu_{r}} \tag{45}\] \[= X^{\nu}\partial_{\nu}T^{\mu_{1}\dots\mu_{r}}-r\,\partial_{\nu}X^{(\mu_{1}}T^{\mu_{2}\dots\mu_{r})\nu}\,. \tag{46}\] The Lie derivative can be written in a suggestive way through the canonical Poisson bracket on the commutative algebra \(C^{\infty}(T^{*}M)\) of functions on the cotangent bundle (as detailed in the next subsection). #### 4.4.2 Poisson algebra of symmetric contravariant tensor fields A **Poisson bracket**\(\{\,,\,\}\) for a commutative algebra \({\cal A}\) with product \(\cdot\) is a Lie bracket which is also a (bi)derivation, _i.e._\(\{x,y\cdot z\}=y\cdot\{x,z\}+\{x,y\}\cdot z\) for any \(x,y,z\in{\cal A}\,\). A **Poisson algebra** is both a commutative algebra and a Lie algebra endowed respectively with an associative product and a Poisson bracket. An algebra \({\cal A}\) is said to be **central** if its center coincides with the field \({\mathbb{K}}\) of \({\cal A}\). A Poisson algebra is central as a Lie algebra iff its Poisson bracket is non-degenerate (up to scalar elements). A **Poisson manifold** is a manifold whose algebra of functions is a Poisson algebra. 
A **symplectic manifold** is a Poisson manifold whose algebra of functions is central as a Lie algebra or, equivalently, whose Poisson bracket is non-degenerate (up to constant functions). The cotangent bundle has a canonical structure of symplectic manifold, whose coordinate-free definition will be introduced later on. The **canonical Poisson bracket** is the usual Poisson bracket of Hamiltonian mechanics and reads in coordinates as \[\{\,,\,\}_{{}_{C}}\,:=\,\frac{\overleftarrow{\partial}}{\partial x^{\mu}}\,\frac{\overrightarrow{\partial}}{\partial p_{\mu}}\,-\,\frac{\overleftarrow{\partial}}{\partial p_{\mu}}\,\frac{\overrightarrow{\partial}}{\partial x^{\mu}}\,, \tag{47}\] where the arrows indicate on which factor they act. Locally, it acts as follows \[\{\,P(x,p)\,,Q(x,p)\,\}_{{}_{C}}=\frac{\partial P}{\partial x^{\mu}}\frac{\partial Q}{\partial p_{\mu}}-\frac{\partial P}{\partial p_{\mu}}\frac{\partial Q}{\partial x^{\mu}}\,. \tag{48}\] The **suspension**\(V[1]\) of an \({\mathbb{N}}\)-graded vector space \(V\) is the graded vector space whose grading is obtained by shifting down the grading of \(V\) by one: \((V[1])_{i}=V_{i+1}\). For instance, the suspension \({\cal A}[1]\) of an associative algebra is an \({\mathbb{N}}\)-graded algebra iff the product \(\star\) of the initial algebra \({\cal A}\) obeys \[{\cal A}_{m+1}\star{\cal A}_{n+1}\subseteq{\cal A}_{m+n+1}\quad\Leftrightarrow\quad{\cal A}_{i}\star{\cal A}_{j}\subseteq{\cal A}_{i+j-1}\,. \tag{49}\] A Poisson algebra \({\cal P}\) which is \({\mathbb{N}}\)-graded as a commutative algebra and such that its suspension \({\cal P}[1]\) is \({\mathbb{N}}\)-graded as a Lie algebra is called a **Schouten algebra**.13 Concretely, this condition means that the commutative product \(\cdot\) of \({\cal P}\) is such that \({\cal P}_{i}\cdot{\cal P}_{j}\subseteq{\cal P}_{i+j}\) while the Poisson bracket \(\{\,,\,\}\) is such that \(\{{\cal P}_{i},{\cal P}_{j}\}\subseteq{\cal P}_{i+j-1}\). Footnote 13: Note that a Gerstenhaber algebra is the supercommutative analogue of a Schouten algebra. **Example (Symbols) :** The canonical Poisson bracket endows the space \({\cal S}(M)\) of symbols with a structure of Schouten algebra (since the canonical Poisson bracket decreases the grading by one) which is furthermore central. Due to the isomorphism \({\cal S}(M)\cong\Gamma(\odot TM)\) between the vector space of symbols and the vector space of symmetric contravariant tensor fields, one may infer that the latter can be endowed with a structure of Schouten algebra. **Example (Symmetric contravariant tensor fields) :** It can be checked by direct computation that the Lie derivative (46) of the symmetric contravariant tensor fields encoded into the function \(T(x,p)=\sum\frac{1}{r!}\,T^{\mu_{1}\ldots\mu_{r}}(x)p_{\mu_{1}}\cdots p_{\mu_{r}}\) on the cotangent bundle \(T^{*}M\) merely corresponds to the Poisson bracket with the function \(X(x,p)=X^{\nu}(x)\,p_{\nu}\) corresponding to the vector field \(\hat{X}=X^{\mu}(x)\partial_{\mu}\): \[({\cal L}_{\hat{X}}T)(x,p)\,=\,\left\{T(x,p)\,,\,X(x,p)\right\}_{C}\,. 
\tag{50}\] More generally, the canonical Poisson bracket of functions on \(T^{*}M\) with polynomial dependence in the fibre coordinates induces the **Schouten bracket of symmetric contravariant tensor fields** (see _e.g._ [18] and refs therein) \[\{\;,\;\}_{S} : \Gamma(\odot^{r_{1}}TM)\otimes\,\Gamma(\odot^{r_{2}}TM)\to \Gamma(\odot^{r_{1}+r_{2}-1}TM) \tag{51}\] \[: T_{1}^{\,\nu_{1}\ldots\nu_{r_{1}}}(x)\,\otimes\,T_{2}^{\,\nu_{1}\ldots\nu_{r_{2}}}(x)\;\longmapsto\;\left\{\,T_{1}\,,T_{2}\,\right\}_{S}^{\,\nu_{1}\ldots\nu_{r_{1}+r_{2}-1}}(x)\,,\] where \[\{\,T_{1}\,,T_{2}\,\}_{S}^{\,\nu_{1}\ldots\nu_{r_{1}+r_{2}-1}}\; := r_{2} \partial_{\mu}T_{1}^{\,(\nu_{1}\ldots\nu_{r_{1}}}T_{2}^{\,\nu_{r_{1}+1}\ldots\nu_{r_{1}+r_{2}-1})\mu} \tag{52}\] \[-\;r_{1}\;T_{1}^{\,\mu(\nu_{1}\ldots\nu_{r_{1}-1}}\partial_{\mu}T_{2}^{\,\nu_{r_{1}}\ldots\nu_{r_{1}+r_{2}-1})}\,.\] As one can see, together with the symmetric product \(\odot\), the Schouten bracket \(\{\;,\;\}_{S}\) (which decreases the rank by one) endows the algebra \(\Gamma(\odot TM)\) of symmetric contravariant tensors with a structure of Schouten algebra. A diffeomorphism of a symplectic manifold preserving the Poisson bracket is called a **symplectomorphism**. Algebraically, a symplectomorphism can be defined as an automorphism of the Poisson algebra of functions on the symplectic manifold, _i.e._ it is an automorphism for both its commutative and its Lie algebra structures. A derivation of the Poisson algebra of functions on a symplectic manifold, with respect to _both_ the pointwise product and the Poisson bracket, is called a **symplectic vector field**. A symplectic vector field corresponds to an infinitesimal symplectomorphism since it is a vector field on a symplectic manifold that preserves the Poisson bracket. Let \(h\) be a function on a symplectic manifold. A **Hamiltonian vector field** for the function \(h\), then called its **Hamiltonian**, is a symplectic vector field \(\hat{X}_{h}\) whose corresponding Lie derivative is the inner derivation \[{\cal L}_{\hat{X}_{h}}\,=\,\{\,\cdot\,,h\}\,, \tag{53}\] _i.e._ \(\hat{X}_{h}[g]=\{g,h\}\) for all functions \(g\) on the symplectic manifold. A **symplectic/Hamiltonian flow** is a one-parameter group of symplectomorphisms generated by a symplectic/Hamiltonian vector field. For instance, a Hamiltonian flow on a symplectic manifold \({\cal M}\) is a one-parameter group of symplectomorphisms \(\exp(t\hat{X}_{h})\) of \({\cal M}\) generated by a Hamiltonian vector field \(\hat{X}_{h}\in{\cal T}({\cal M})\) defined by a Hamiltonian \(h\in C^{\infty}({\cal M})\). All symplectic flows are locally Hamiltonian because, locally, all symplectic vector fields (_i.e._ derivations of the Poisson algebra of functions on a symplectic manifold) are Hamiltonian vector fields (_i.e._ inner derivations).14 This remains true for the Schouten algebra of symbols on \(M\) [19]: locally, all derivations of the Schouten algebra \({\cal S}(M)\) are inner, _i.e._ Hamiltonian vector fields \(\hat{X}_{h}\) with \(h\in{\cal S}(M)\). Footnote 14: To be more precise, the number of linearly independent outer derivations of the Poisson algebra of functions on \({\cal M}\) is equal to the first Betti number of \({\cal M}\) (which vanishes for a topologically trivial manifold \(M\)). 
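As a concrete cross-check of formulas (46), (48) and (50), the statement can be verified symbolically. The following snippet is a minimal sympy sketch in two dimensions, with arbitrarily chosen illustrative components \(X^{\mu}\) and \(T^{\mu\nu}\) (none of these names appear in the text):

```python
# Hedged sympy sketch: the canonical Poisson bracket (48) of a degree-2 symbol with a
# degree-1 symbol reproduces the Lie derivative (46), as stated in eq. (50).
import sympy as sp

x = sp.symbols('x1:3')
p = sp.symbols('p1:3')
idx = range(2)

def pbracket(P, Q):
    """Canonical Poisson bracket {P, Q}_C of eq. (48)."""
    return sum(sp.diff(P, x[m])*sp.diff(Q, p[m]) - sp.diff(P, p[m])*sp.diff(Q, x[m]) for m in idx)

Xc = [x[1]**2, x[0]*x[1]]                                   # illustrative components X^mu
Tc = [[x[0], x[0]*x[1]], [x[0]*x[1], x[1]**3]]              # illustrative symmetric T^{mu nu}

Xs = sum(Xc[m]*p[m] for m in idx)                           # degree-1 symbol of the vector field
Ts = sp.Rational(1, 2)*sum(Tc[m][n]*p[m]*p[n] for m in idx for n in idx)   # degree-2 symbol of T

def lie_component(m, n):                                    # component formula (46) for r = 2
    return (sum(Xc[r]*sp.diff(Tc[m][n], x[r]) for r in idx)
            - sum(sp.diff(Xc[m], x[r])*Tc[r][n] for r in idx)
            - sum(sp.diff(Xc[n], x[r])*Tc[m][r] for r in idx))

lie_symbol = sp.Rational(1, 2)*sum(lie_component(m, n)*p[m]*p[n] for m in idx for n in idx)
print(sp.simplify(pbracket(Ts, Xs) - lie_symbol))           # -> 0
```

The same computation also illustrates the Schouten property: the bracket of symbols of degrees 2 and 1 is again a symbol, of degree 2.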
Differential structures of order one: first-order (co)jets In quantum mechanics (and, more generally, in classical or quantum field theory), infinitesimal space-time symmetries must sometimes be represented as differential operators of order one rather than vector fields (this happens, for instance, when one considers projective representations). Therefore, field theory is a physical motivation for considering first-order differential operators as a natural generalisation of vector fields, discussed in this section. ### Functions as differential operators of order zero Any function \(f\) on a manifold \(M\) can also be seen as a linear operator \(\hat{f}\) on the structure algebra \(C^{\infty}(M)\) acting by multiplication, _i.e._ mapping any function \(g\) on \(M\) to the function \[\hat{f}[\,g]\,:=\,f\cdot g\,, \tag{54}\] where \(\cdot\) is the pointwise product. Retrospectively, the function \(f\) can be reconstructed from the corresponding linear operator \(\hat{f}\) via the action of the operator on the constant function equal to unity, \(\hat{f}[1]=f\cdot 1=f\,\). The structure algebra \(C^{\infty}(M)\) of the manifold \(M\) is a \(C^{\infty}(M)\)-(bi)module via the pointwise product. A \(C^{\infty}(M)\)-linear operator on \(C^{\infty}(M)\) is called a **differential operator of order zero** on \(M\). The space of differential operators of order zero will be denoted by \({\cal D}^{0}(M)\). Due to the above remark, any function \(f\) on \(M\) can be seen as a differential operator \(\hat{f}\) of order zero on \(M\), and conversely. In other words, \({\cal D}^{0}(M)\cong C^{\infty}(M)\). For the sake of simplicity, from now on differential operators of order zero will sometimes be identified with functions. ### First-order differential operators A **differential operator of order one** on \(M\) can be defined as a linear operator \(\hat{X}\) on \(C^{\infty}(M)\) such that its commutator with any differential operator \(\hat{f}\) of order zero is also a differential operator of order zero: \([\hat{X},\hat{f}]\in{\cal D}^{0}(M)\). The space of differential operators on \(M\) of order one will be denoted by \({\cal D}^{1}(M)\). The commutator closes on first-order differential operators. Therefore, the space of first-order differential operators \({\cal D}^{1}(M)\) can be endowed with a structure of Lie algebra with the commutator as Lie bracket. In such case, it will sometimes be denoted \({\mathfrak{D}}^{1}(M)\) in order to emphasise its Lie algebra structure. #### 5.2.1 Vector fields as first-order differential operators An equivalent definition of a vector field on \(M\) is as a differential operator \(\hat{X}\) of order one such that its action on any constant function on \(M\) is zero, _e.g._\(\hat{X}[1]=0\). **Proof:** One can check that if an operator \(\hat{X}\) is such that \(\hat{X}[1]=0\) and, moreover, \([\hat{X},\hat{f}]\) is a zeroth-order operator for any function \(f\), then its action on a function \(f\) (that is to say \(\hat{X}[f]\)) identifies with its adjoint action (in other words \([\hat{X},\hat{f}]\)) on the associated operator \(\hat{f}\). Indeed, \[\hat{X}[f]=\hat{X}[f\cdot 1]=\hat{X}\big{[}\,\hat{f}[1]\,\big{]}=(\hat{X} \circ\hat{f})[1]=(\hat{X}\circ\hat{f}-\hat{f}\circ\hat{X})[1]=\Big{(}\,[\hat{X },\hat{f}]\Big{)}\,[1]\,. \tag{55}\] In a similar way, one can then obtain the property that \(\hat{X}\) is a derivation as a consequence of the Jacobi identity for the commutator. 
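The defining property and the argument of the proof above can be checked on a simple instance. The snippet below is a hedged sympy illustration; the vector field and the functions are arbitrary choices made only for the example:

```python
# Hedged sympy sketch of eq. (55): for an operator X with X[1] = 0, the commutator [X, f^]
# is the zeroth-order operator of multiplication by X[f], and X[f] = [X, f^][1].
import sympy as sp

x, y = sp.symbols('x y')

def X(g):                                   # an illustrative vector field X = y d/dx + x**2 d/dy
    return y*sp.diff(g, x) + x**2*sp.diff(g, y)

def mult(f):                                # the zeroth-order operator f^ : g -> f*g
    return lambda g: f*g

f = sp.sin(x)*y
g = sp.exp(x + y)

commutator_on_g = X(mult(f)(g)) - mult(f)(X(g))          # [X, f^][g]
print(sp.simplify(commutator_on_g - X(f)*g))             # -> 0 : [X, f^] = multiplication by X[f]

one = sp.Integer(1)
print(sp.simplify(X(f) - (X(mult(f)(one)) - mult(f)(X(one)))))   # -> 0 : eq. (55)
```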
The previous definitions lead to the following unique (and coordinate-independent) decomposition of any first-order differential operator into a sum \(\hat{X}=\hat{X}_{0}+\hat{X}_{1}\) of a zeroth-order differential operator \(\hat{X}_{0}\), associated to the function \(X_{0}:=\hat{X}[1]\), and a vector field \(\hat{X}_{1}:=\hat{X}-\hat{X}_{0}\). #### 5.2.2 Semidirect sum of Lie algebras A **semidirect sum of Lie algebras** is a Lie algebra decomposing as the linear sum of a Lie subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) and a Lie ideal \(\mathfrak{i}\subset\mathfrak{g}\). It is sometimes denoted in the following ways: \[\mathfrak{g}\,=\,\mathfrak{h}\inplus\mathfrak{i}\,=\,\mathfrak{i}\niplus\mathfrak{h}\,. \tag{56}\] Concretely, the Lie bracket obeys \[[\mathfrak{h},\mathfrak{h}]\subset\mathfrak{h}\,,\quad[\mathfrak{i},\mathfrak{i}]\subset\mathfrak{i}\,,\quad[\mathfrak{h},\mathfrak{i}]\subset\mathfrak{i}\,. \tag{57}\] Therefore the linear decomposition \(\mathfrak{g}=\mathfrak{h}\inplus\mathfrak{i}\) is actually a direct sum of \(\mathfrak{h}\)-modules. In fact, one can define equivalently a semidirect sum as a Lie algebra \(\mathfrak{g}\) with a Lie subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) such that the adjoint representation \(ad^{\mathfrak{g}}:\mathfrak{g}\to\mathfrak{der}(\mathfrak{g})\) of the Lie algebra \(\mathfrak{g}\) on itself, restricted to the adjoint representation \(ad^{\mathfrak{g}}|_{\mathfrak{h}}:\mathfrak{h}\to\mathfrak{der}(\mathfrak{g})\) of the Lie subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) on the whole Lie algebra \(\mathfrak{g}\), is fully reducible. It decomposes as the direct sum \(ad^{\mathfrak{g}}|_{\mathfrak{h}}=ad^{\mathfrak{h}}\oplus r\) of 1. the adjoint representation \(ad^{\mathfrak{h}}:\mathfrak{h}\to\mathfrak{der}(\mathfrak{h})\) of the Lie subalgebra \(\mathfrak{h}\) on itself, and 2. a representation \[r\,:\,\mathfrak{h}\to\mathfrak{der}(\mathfrak{i})\,:\,v\mapsto r_{v}\] (58) of the Lie subalgebra \(\mathfrak{h}\) on the Lie ideal \(\mathfrak{i}\) with \(r_{v}(w):=[v,w]\) for \(v\in\mathfrak{h}\) and \(w\in\mathfrak{i}\). Conversely, given two Lie algebras \(\mathfrak{h}\) and \(\mathfrak{i}\) together with a representation (58) of the Lie algebra \(\mathfrak{h}\) on the Lie algebra \(\mathfrak{i}\) one can form their semidirect sum \(\mathfrak{g}=\mathfrak{h}\inplus_{r}\mathfrak{i}\) by endowing their linear sum with a Lie algebra structure via the Lie bracket \[[\,v_{1}\oplus w_{1}\,,\,v_{2}\oplus w_{2}\,]_{\mathfrak{g}}=[\,v_{1}\,,\,v_{2}\,]_{\mathfrak{h}}\,\oplus\,\Big{(}\,r_{v_{1}}(w_{2})-r_{v_{2}}(w_{1})+[\,w_{1}\,,\,w_{2}\,]_{\mathfrak{i}}\,\Big{)} \tag{59}\] for all \(v_{1},v_{2}\in\mathfrak{h}\) and \(w_{1},w_{2}\in\mathfrak{i}\). **Example (First-order differential operators) :** For instance, the Lie algebra of first-order differential operators splits into the semidirect sum \[\mathfrak{D}^{1}(M)\cong\mathfrak{X}(M)\inplus C^{\infty}(M) \tag{60}\] of the Lie subalgebra \(\mathfrak{X}(M)\) of vector fields and the Abelian ideal \(\mathfrak{D}^{0}(M)\cong C^{\infty}(M)\) of differential operators of order zero. #### 5.2.3 Some general facts A theorem of Grabowski and Poncin provides an analogue of the generalised Milnor exercise for the Lie algebra of first-order differential operators [20]. 
**Theorem (Grabowski & Poncin) :** _A map \(\Phi:\mathfrak{D}^{1}(M)\stackrel{{\sim}}{{\to}}\mathfrak{D}^{1} (N)\) between the Lie algebras of first-order differential operators on two manifolds is an isomorphism of Lie algebras iff there is a diffeomorphism \(F:M\stackrel{{\sim}}{{\to}}N\) between these two manifolds._ On the one hand, the Lie algebra \(\mathfrak{X}(M)\) of vector fields may be endowed with a structure of left \(\mathcal{C}^{\infty}(M)\)-module via the composition product \(\circ\) (which will often be implicit in the sequel) since the operator \(\hat{f}\circ\hat{X}\) is again a vector field if \(f\) is a function and \(\hat{X}\) is a vector field. The previous product will be called **pointwise product** of a function \(f\) and a vector field \(\hat{X}\), and it will be denoted as \(f\cdot\hat{X}:=\hat{f}\circ\hat{X}\,\). However, the Lie bracket of vector fields is not \(\mathcal{C}^{\infty}(M)\)-linear, rather it obeys to the Leibniz rule \[[\,\hat{X}\,,\,f\cdot\hat{Y}\,]\,=\,\hat{X}[f]\cdot\hat{Y}\,+\,f\cdot[\hat{X},\hat{Y}]\, \tag{61}\] for \(\mathfrak{X}(M)\) seen as a left \(\mathcal{C}^{\infty}(M)\)-module.15 Footnote 15: By default, the modules will be considered to be left modules therefore the word “left” will often be implicit. On the other hand, the space \(\mathcal{D}^{1}(M)\) of first-order differential operators is a left and right \(C^{\infty}(M)\)-module, _i.e._ a \(C^{\infty}(M)\)-bimodule, since \(\hat{f}\circ\hat{X}\circ\hat{g}\) is a first-order differential operator if \(f\) and \(g\) are functions while \(\hat{X}\) is a first-order differential operator. Notice that this composition product \(\hat{X}\circ\,\hat{g}\) with a function on the right is no more pointwise (_i.e._ it does not only depend on the values of the objects at the same point) since it includes a derivative of the function \(f\). More explicitly, for a first order differential operator decomposing into the sum \(\hat{X}=\hat{X}_{1}+\hat{X}_{0}\) of a vector field \(\hat{X}_{1}\) and a zeroth-order differential operator \(\hat{X}_{0}\) one has \[\hat{X}\circ\hat{f}\,=\,[\hat{X}\stackrel{{\circ}}{{,}}\hat{f}]\,+ \,\hat{f}\circ\hat{X}\,=\,\hat{X}_{1}[f]\cdot\,+f\cdot\hat{X}\,. \tag{62}\] ### First-order jets Let \(m\) be a point of \(M\). Consider the commutative ideal \({\cal I}^{1}(m)\) of \(C^{\infty}(M)\) spanned by the functions \(f\) such that \(f|_{m}=0\) and \(df|_{m}=0\), _i.e._ the functions whose value and first derivatives vanish at the point \(m\). It will be called **contact ideal of order one**. The quotient of the contact ideal of order zero by the contact ideal of order one is isomorphic to the cotangent space: \[T^{*}_{m}M\,\cong\,{\cal I}^{0}(m)\,/\,{\cal I}^{1}(m)\,, \tag{63}\] as can be expected since one retains only the information about the first derivative of functions at \(m\) and not about the value of the function (at the same point \(m\)). The contact ideal of order one \({\cal I}^{1}(m)\subset C^{\infty}(M)\) defines an equivalence relation whose equivalence classes \([f]\), denoted by \(j^{1}_{m}f\), are called **jets of order one** or **first-order jets** (or simply **1-jets**) of functions. More concretely, two functions \(f\) and \(g\) are said to have the same jet \(j^{1}_{m}f\) of order one at \(m\) if they only differ by an element of the commutative ideal \({\cal I}^{1}(m)\), _i.e._ if these two functions together with their first derivatives have the same value at \(m\). The point \(m\) is sometimes called the source of the jet \(j^{1}_{m}f\). 
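For concreteness, here is a small sympy illustration (the functions are chosen purely for the example) of two functions sharing the same first-order jet at a point:

```python
# Two functions with the same 1-jet at m = 0: they differ by an element of I^1(0).
import sympy as sp

x = sp.symbols('x')
f, g = x + x**2, x

print((f - g).subs(x, 0))                       # -> 0 : same value at m
print(sp.diff(f - g, x).subs(x, 0))             # -> 0 : same first derivative, so j^1_0 f = j^1_0 g
print(sp.diff(f - g, x, 2).subs(x, 0))          # -> 2 : the difference only shows up at second order
```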
The quotient \[J^{1}_{m}M\,:=\,C^{\infty}(M)\,/\,{\cal I}^{1}(m) \tag{64}\] is called the **first-order jet space** at \(m\). From the isomorphism (63), it follows that the cotangent space is isomorphic to the quotient of the first-order jet space by the zeroth-order one: \[T^{*}_{m}M\,\cong\,J^{1}_{m}M\,/\,J^{0}_{m}M\,. \tag{65}\] The **first-order jet bundle** is the manifold \(J^{1}M:=\bigcup_{m}J^{1}_{m}M\) and it has local coordinates \((x^{\mu},\phi,\phi_{\nu})\). As one may suspect from the coordinate expression and the above isomorphisms, the first-order jet bundle is isomorphic to the direct sum over \(M\) of the zeroth-order jet bundle \(J^{0}M=M\times\mathbb{R}\) and the cotangent bundle \(T^{*}M\), _i.e._ \[J^{1}M\cong J^{0}M\oplus T^{*}M\,. \tag{66}\] The first-order jet bundle is the paradigmatic example of "contact manifold" but one will not dwell on this notion. A compact notation for a 1-jet is as a polynomial of degree one in auxiliary variables \(\varepsilon^{\mu}\), _i.e._ \(\phi(\varepsilon)=\phi+\varepsilon^{\mu}\phi_{\mu}\). A section of the 1-jet bundle \(J^{1}M\) is called a **1-jet field** and is compactly summarised by the local expression \[\phi(x;\varepsilon)=\phi(x)+\varepsilon^{\mu}\phi_{\mu}(x)\,. \tag{67}\] The commutative algebra of 1-jet fields will be denoted \({\cal J}^{1}(M):=\Gamma(J^{1}M)\). The **first-order prolongation of a function** \(f\in C^{\infty}(M)\) is a 1-jet field \(j^{1}f\in{\cal J}^{1}(M)\) whose value at a point \(m\) is the 1-jet \(j^{1}_{m}f\) of the function. Its local expression is \[(j^{1}f)(x;\varepsilon)=f(x)+\varepsilon^{\mu}\partial_{\mu}f(x), \tag{68}\] _i.e._ \((j^{1}f)(x;0)=f(x)\) and \((j^{1}f)_{\mu}(x)=\partial_{\mu}f(x)\). This defines the **first-order prolongation** \(j^{1}:C^{\infty}(M)\to{\cal J}^{1}(M)\). In more geometrical terms, the 1-jet \(j^{1}_{m}f\) at a point \(m\) can be interpreted as an equivalence class of sections of the structure bundle that touch each other to order 1. In other words, they are (nowhere vertical) codimension-one submanifolds that intersect and are tangent at the point \((\,m\,,f|_{m}\,)\in M\times{\mathbb{R}}\). ### First-order cojets The dual space \(D^{1}_{m}M:=(J^{1}_{m}M)^{*}\) of linear forms on the first-order jet space \(J^{1}_{m}M\) is called the **1-cojet space** at \(m\). In local coordinates, a compact notation for a **1-cojet** (or **first-order cojet**) \(X\) is as a polynomial of degree one in the auxiliary variables \(\frac{\partial}{\partial\varepsilon^{\mu}}\), _i.e._ \(X(\partial_{\varepsilon})=X^{\mu}\frac{\partial}{\partial\varepsilon^{\mu}}+X\). Concretely, in this representation a 1-cojet \(X\) acts on a 1-jet \(\phi\) at the same point, as a first-order differential operator with respect to the auxiliary coordinate \(\varepsilon\) followed by an evaluation at \(\varepsilon=0\): \[\langle X,\phi\,\rangle=\,\big{(}\,X(\partial_{\varepsilon})\,\phi(\varepsilon)\,\big{)}\,\big{|}_{\varepsilon=0}=X^{\mu}\phi_{\mu}+X\phi\,, \tag{69}\] since the representative of \(\phi\) is the polynomial \(\phi(\varepsilon)=\phi+\varepsilon^{\mu}\phi_{\mu}\) of degree one. The **1-cojet bundle** \(D^{1}M=\bigcup_{m}D^{1}_{m}M\) has local coordinates \((x^{\mu},X,X^{\nu})\). A section of the 1-cojet bundle \(D^{1}M\) will be called a **1-cojet field** and is compactly summarised by a generating function \(X(x;\partial_{\varepsilon})=X^{\mu}(x)\frac{\partial}{\partial\varepsilon^{\mu}}+X(x)\). 
The space \(\Gamma(D^{1}M)\) of such sections will be denoted by the same symbol \({\cal D}^{1}(M)\) as the space of differential operators of order one since they are isomorphic. Notice that any 1-cojet field can be interpreted as a map \(X:{\cal J}^{1}(M)\to C^{\infty}(M)\) where the explicit action of a 1-cojet field \(X(x,\partial_{\varepsilon})\) in \({\cal D}^{1}(M)\) on a 1-jet field \(\phi(x;\varepsilon)\) in \({\cal J}^{1}(M)\) is given by (69), where the dependence on \(x\) is left implicit. Retrospectively, first-order differential operators on \(M\) can be defined as linear operators \[\hat{X}\,:\,C^{\infty}(M)\to C^{\infty}(M)\,:\,f\mapsto\hat{X}[f] \tag{70}\] which factor through the 1-jet bundle \(J^{1}M\) in the sense that \[\hat{X}=X\circ j^{1}\,, \tag{71}\] where \(X:{\cal J}^{1}(M)\to C^{\infty}(M)\) is a 1-cojet field in \({\cal D}^{1}(M)\) and \(j^{1}:C^{\infty}(M)\to{\cal J}^{1}(M)\) is the first-order prolongation. Concretely this means that a first-order differential operator whose local expression is \(\hat{X}=X^{\mu}(x)\frac{\partial}{\partial x^{\mu}}+X(x)\) obviously defines a 1-cojet field \(X(x;\partial_{\varepsilon})=X^{\mu}(x)\frac{\partial}{\partial\varepsilon^{\mu}}+X(x)\). Moreover, the action of \(\hat{X}=X\circ j^{1}\) on \(f\) is indeed \[(\hat{X}f)(x) = X^{\mu}(x)\,\partial_{\mu}f(x)\,+\,X(x)\,f(x) \tag{72}\] \[= \big{(}\,X(x;\partial_{\varepsilon})\,(j^{1}f)(x;\varepsilon)\,\big{)}\,\big{|}_{\varepsilon=0}\] \[= \big{\langle}\,X\,,\,j^{1}f\,\big{\rangle}(x)\,,\] where one made use of (68) to obtain the second line. Differential structures of higher order: (co)jets The time is ripe to generalise the previous constructions and aim at our goal: provide a global and coordinate-free definition of higher-order differential operators (as well as their relatives, such as cojet fields, etc). ### Higher-order differential operators #### 6.1.1 Rings over an algebra Let \({\cal A}\) and \({\cal B}\) be two associative algebras with respective zeros \(0_{\cal A}\) and \(0_{\cal B}\), respective units \(1_{\cal A}\) and \(1_{\cal B}\), and respective products \(\star_{\cal A}\) and \(\star_{\cal B}\). An injective morphism \(i:{\cal A}\hookrightarrow{\cal B}\) of algebras, \[i(a_{1}\star_{\cal A}a_{2})=i(a_{1})\star_{\cal B}i(a_{2})\,,\qquad\forall a_{1},a_{2}\in{\cal A}\,, \tag{73}\] will be called a **unit map**.16 An associative algebra \({\cal B}\) endowed with a unit map \(i:{\cal A}\hookrightarrow{\cal B}\) will be called a **ring \({\cal B}\) over the algebra \({\cal A}\)** (or an \({\cal A}\)-**ring** for short). Equivalently, \({\cal B}\) admits a subalgebra isomorphic to \({\cal A}\): the image \(i({\cal A})\subseteq{\cal B}\). The unit map endows an \({\cal A}\)-ring \({\cal B}\) with a structure of \({\cal A}\)-bimodule where an element \(a\in{\cal A}\) acts by (left or right) multiplication by \(i(a)\in{\cal B}\). Conversely, an \({\cal A}\)-bimodule structure on \({\cal B}\) such that \(a\bullet 1_{\cal B}=0_{\cal B}\) iff \(a=0_{\cal A}\), endows the associative algebra \({\cal B}\) with a structure of \({\cal A}\)-ring, by considering the action of \({\cal A}\) on the unit element \(1_{\cal B}\) (_i.e._ defining the unit map by \(i(a):=a\bullet 1_{\cal B}\) for all \(a\in{\cal A}\)). Footnote 16: This terminology is standard in the context of bialgebroids, where the assumption of injectivity is dropped. 
The terminology originates from the fact that, in particular, the unit map somehow relates the two units in the sense that \(i(1_{\cal A})=1_{\cal B}\)). Note that the injectivity can be assumed without loss of generality, in the sense that one can always focus on the quotient algebra \({\cal A}/\ker\,i\). **Equivalent formulations of \({\cal A}\)-rings** Given two associative algebras \({\cal A}\) and \({\cal B}\), the following notions are equivalent: 1. an \({\cal A}\)-ring \({\cal B}\) defined by an injective algebra morphism \(i:{\cal A}\hookrightarrow{\cal B}\). 2. an algebra \({\cal B}\) containg a copy of \({\cal A}\), _i.e._ the image \(i({\cal A})\subseteq{\cal B}\). 3. an \({\cal A}\)-bimodule structure on \({\cal B}\) such that the action of the nonvanishing elements of \({\cal A}\) on the unit element of \({\cal B}\) is nonvanishing. **Example (Endomorphism algebra) :** Let \({\cal A}\) be an associative algebra whose product \(\cdot\) is non-degenerate (in the sense that \(f\cdot g=0\) for all \(g\in{\cal A}\) iff \(f=0\)). Any element \(f\) of \({\cal A}\) defines an \({\cal A}\)-linear map \(\hat{f}\in{\rm End}({\cal A})\) acting on \({\cal A}\) by multiplication by \(f\), _i.e._ in a compact notation \(\hat{f}:=f\cdot\) or, more explicitly, \(\hat{f}:g\mapsto f\cdot g\). The left multiplication provides a canonical representation \[\hat{\bullet}\,:\,{\cal A}\hookrightarrow{\rm End}({\cal A})\,:\,f\mapsto\hat{ f}\,, \tag{74}\] of \({\cal A}\) on itself which is faithful (since the product is non-degenerate, a property which will always be assumed implictly below). #### 6.1.2 Algebraic definition of differential operators Let \({\cal B}\) be an associative algebra with a commutative subalgebra \({\cal A}\subset{\cal B}\). An element \(a\in{\cal B}\) such that, for any set \(\{f_{1},f_{2},\cdots,f_{k}\}\subset{\cal A}\) of \(k\) elements in the commutative subalgebra \({\cal A}\), the \(k\)th commutator with each of them belongs to \({\cal A}\), _i.e._ \[[\,[\,\ldots\,[a\,,\,f_{1}]\,,\,f_{2}]\ldots\,,\,f_{k}]\in{\cal A}\,, \tag{75}\] will be called a **differential element of order \(k\) with respect to the commutative subalgebra**\({\cal A}\). The motivation for this terminology is the particular case of scalar-valued linear operators on a commutative algebra \({\cal A}\). A (scalar-valued linear) **differential operator of order \(k\) acting on the commutative algebra**\({\cal A}\) is an endomorphism \(\hat{X}\in{\rm End}({\cal A})\) such that, for any set \(\{f_{1},f_{2},\cdots,f_{k}\}\subset{\cal A}\) of \(k\) elements in the commutative algebra \({\cal A}\) (identified with the commutative algebra of zeroth-order differential operators on \({\cal A}\)), the \(k\)th commutator with each of them belongs to \({\cal A}\), in the sense that \[[\,[\,\ldots\,[\hat{X}\,,\,\hat{f}_{1}]\,,\,\hat{f}_{2}]\ldots\,,\,\hat{f}_{k }]\in{\cal D}^{0}({\cal A})\,. \tag{76}\] This abstract definition of differential operators is by now standard and was introduced in the sixties by Grothendieck in his seminal _Elements de geometrie algebrique_[21, Section 16.8]. This provides a purely algebraic (and coordinate-free) definition of a **differential operator of order \(k\) on the manifold**\(M\) as a (scalar-valued) linear differential operator of order \(k\) acting on the structure algebra \(C^{\infty}(M)\) of functions on \(M\). 
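Grothendieck's condition (76) can be tested symbolically on a simple second-order operator in one dimension. The following sympy sketch, with arbitrarily chosen illustrative functions, shows that the double commutator with two multiplication operators is itself a multiplication operator:

```python
# Hedged sympy check of condition (76) for k = 2: the double commutator of X = d^2/dx^2
# with two multiplication operators acts by multiplication, i.e. it is of order zero.
import sympy as sp

x = sp.symbols('x')
t = sp.Function('t')(x)                         # generic test function

def X(h): return sp.diff(h, x, 2)               # a second-order differential operator
def mult(f): return lambda h: f*h               # the zeroth-order operator f^
def comm(A, B): return lambda h: A(B(h)) - B(A(h))

f, g = sp.sin(x), sp.exp(x)
once  = comm(X, mult(f))                        # [X, f^] : a first-order operator
twice = comm(once, mult(g))                     # [[X, f^], g^] : order zero

print(sp.simplify(twice(t) - 2*sp.diff(f, x)*sp.diff(g, x)*t))   # -> 0 : multiplication by 2 f' g'
```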
A convenient representation of a differential operator \(\hat{X}\) of order \(k\) is through its **normal symbol in some local coordinates** \[X_{\rm normal}(x,p)=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}(x)\,p_{\mu_{1}}\ldots p_{\mu_{r}} \tag{77}\] which is a symbol of degree \(k\) corresponding to the "normal ordering" of the operator \[\hat{X}(x,\partial)=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}(x)\,\partial_{\mu_{1}}\ldots\partial_{\mu_{r}}\,, \tag{78}\] where all derivatives are on the right and all coordinates on the left.17 Note that this choice of ordering is not a coordinate-independent statement. Footnote 17: Strictly speaking, this is not the normal ordering prescription in quantum mechanics if \(x\)’s are treated as position operators. Rather, here \(x\)’s are treated as creation operators while \(\partial\)’s are treated as annihilation operators. Despite the inaccurate terminology, we will use the adjective “normal” (order, symbol, etc) in this case, following the common usage in deformation quantisation. The multi-index notation consists in denoting the collection \(\mu_{1}\ldots\mu_{r}\) of \(r\) symmetrised indices as \(\mu(r)\). Moreover, there will be an implicit sum over repeated multi-indices and over the number \(r\) of them. More precisely, one will stick to the weight one convention, _i.e._ \[S^{\mu(r)}T_{\mu(r)}:=\sum_{r}\frac{1}{r!}\,S^{\mu_{1}\ldots\mu_{r}}\,T_{\mu_{1}\ldots\mu_{r}}\,.\] Adopting the multi-index convention, one could have written (78) simply as a suggestive generalisation of vector fields \(\hat{X}=X^{\mu(r)}(x)\,\partial_{\mu(r)}\) by introducing the convenient notation \(\partial_{\mu(r)}:=\partial_{\mu_{1}}\ldots\partial_{\mu_{r}}\,\). The normal symbol (77) is obtained from the action of the operator (78) on the exponential function (which would be a "plane wave" in quantum mechanics), _i.e._ \[X_{\rm normal}(x,p)\,:=\,\exp(-p_{\mu}x^{\mu})\,\hat{X}[\,\exp(p_{\mu}x^{\mu})\,]\,. \tag{79}\] The **principal symbol of a differential operator** (78) is its leading (highest order) part \[X_{\rm principal}(x,p)=\frac{1}{k!}\,X^{\mu_{1}\ldots\mu_{k}}(x)\,p_{\mu_{1}}\ldots p_{\mu_{k}}\,, \tag{80}\] which admits a coordinate-independent definition. The space of differential operators of order \(k\) will be denoted by \({\cal D}^{k}(M)\), while the space of all differential operators (of any finite order) will be denoted by \({\cal D}(M)\). The collection \(\{\partial_{\mu_{1}}\ldots\partial_{\mu_{r}}\,|\,r\leq k\}\) of commuting differential operators is a finite generating set of the \(C^{\infty}(M)\)-module \({\cal D}^{k}(M)\) which will be called the **coordinate basis**. **Example (Grothendieck algebra) :** The algebra morphism \[\hat{\bullet}\,:\,{\cal A}\hookrightarrow{\cal D}({\cal A})\,:\,f\mapsto\hat{f}\,, \tag{81}\] which reinterprets elements \(f\) of a commutative algebra \({\cal A}\) as zeroth-order differential operators \(\hat{f}\) on \(\mathcal{A}\), via left multiplication, endows the Grothendieck algebra \(\mathcal{D}(\mathcal{A})\) of differential operators on \(\mathcal{A}\) with the structure of \(\mathcal{A}\)-ring. The corestriction \(\hat{\bullet}:\mathcal{A}\stackrel{{\sim}}{{\to}}\mathcal{D}^{0}(\mathcal{A})\) provides an isomorphism between the commutative algebra \(\mathcal{A}\) and the subalgebra \(\mathcal{D}^{0}(\mathcal{A})\) of zeroth-order differential operators on \(\mathcal{A}\). 
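Returning to the local description, equation (79) can be checked symbolically. The sketch below (one dimension, generic coefficient functions; the names a, b, c are ours) computes the normal symbol of a second-order operator from its action on the exponential function:

```python
# Hedged sympy check of eq. (79) in one dimension: the normal symbol of
# X = a(x) d^2/dx^2 + b(x) d/dx + c(x) is a(x) p**2 + b(x) p + c(x), cf. eq. (77).
import sympy as sp

x, p = sp.symbols('x p')
a, b, c = sp.Function('a')(x), sp.Function('b')(x), sp.Function('c')(x)

def X(h):                                       # a normally ordered operator of order 2
    return a*sp.diff(h, x, 2) + b*sp.diff(h, x) + c*h

normal_symbol = sp.exp(-p*x)*X(sp.exp(p*x))
print(sp.simplify(normal_symbol - (a*p**2 + b*p + c)))    # -> 0
```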
**Example (Differential operators on a manifold) :** The canonical embedding of commutative algebras from the structure algebra \(C^{\infty}(M)\) into the algebra \(\mathcal{D}(M)\) of differential operators on the manifold \(M\), which reinterprets functions as zeroth-order differential operators, \[\dot{\bullet}\,:\,C^{\infty}(M)\hookrightarrow\mathcal{D}(M)\,:\,f\mapsto\hat{ f}\,, \tag{82}\] will be called the **unit map on the algebra of differential operators on the manifold \(M\)**. It induces a canonical isomorphism of commutative algebras from \(C^{\infty}(M)\) to \(\mathcal{D}^{0}(M)\), \[\dot{\bullet}\,:\,C^{\infty}(M)\stackrel{{\sim}}{{\to}}\mathcal{ D}^{0}(M)\,:\,f\mapsto\hat{f}\,. \tag{83}\] #### 6.1.3 Filtration An \(\mathbb{N}\)**-filtered** vector space \(V\) is a collection \(\{V_{i}\}\) of vector spaces where \(i\in\mathbb{N}\) and such that \(V_{i}\subset V_{j}\) for \(i<j\) and \(V=\bigcup_{i\in\mathbb{N}}V_{i}\). An algebra \(\mathcal{A}\) is \(\mathbb{N}\)-filtered if it is so as a vector space and if its product \(\star\) obeys \(\mathcal{A}_{i}\star\mathcal{A}_{j}\subseteq\mathcal{A}_{i+j}\). Moreover, if the algebra \(\mathcal{A}\) has a unit element, then one further requires that \(1\in\mathcal{A}_{0}\). The component \(\mathcal{A}_{0}\) of degree zero of any \(\mathbb{N}\)-filtered algebra \(\mathcal{A}\) is always a subalgebra. In other words, the \(\mathbb{N}\)-filtered algebra \(\mathcal{A}\) is a ring over \(\mathcal{A}_{0}\) (in the absence of left divisors of zero). The **filtered algebra associated to a graded algebra**\(\mathcal{B}=\oplus_{i\in\mathbb{N}}\mathcal{B}_{i}\) is the filtered algebra which will be denoted \(\mathcal{B}_{\leqslant}=\bigcup_{i\in\mathbb{N}}\mathcal{B}_{\leqslant i}\) and which is defined via the direct sums \(\mathcal{B}_{\leqslant i}=\oplus_{j\leqslant i}\mathcal{B}_{j}\,.\) Conversely, the **graded algebra associated to a filtered algebra**\(\mathcal{A}=\bigcup_{i\in\mathbb{N}}\mathcal{A}_{i}\) is denoted \(\mathrm{gr}\mathcal{A}=\oplus_{i\in\mathbb{N}}\,\mathrm{gr}\mathcal{A}_{i}\) and defined via the quotients \(\mathrm{gr}\mathcal{A}_{i}=\mathcal{A}_{i}/\mathcal{A}_{i-1}\). The equivalence class \([a]\in\mathrm{gr}\mathcal{A}\) of an element \(a\in\mathcal{A}\) of a filtered algebra is an element of the associated graded algebra called the **principal symbol** of \(a\). This defines an infinite collection of surjective linear maps \[\sigma_{i}\,:\,\mathcal{A}_{i}\twoheadrightarrow\mathrm{gr}\mathcal{A}_{i}\,: \,a\mapsto[a]\,, \tag{84}\] which will be collectively denoted (with a slight abuse of notation) as \[\sigma\,:\,\mathcal{A}\twoheadrightarrow\mathrm{gr}\mathcal{A}\,:\,a\mapsto[ a]\,. \tag{85}\] **Example (Polynomials) :** For instance, the algebra \(\mathbb{C}\mathbb{R}^{n*}\) of polynomial functions on the affine (respectively, vector) space \(\mathbb{R}^{n}\) is the filtered (respectively, graded) algebra of polynomials of \(n\) variables. The distinction between these two cases comes from the fact that translations do not preserve the grading while they preserve the filtration. The principal symbol of a polynomial of degree \(k\) is identified with its homogeneous piece of highest degree \(k\). #### 6.1.4 Almost-commutative algebra of differential operators The \(\mathbb{N}\)-graded algebra \(\mathrm{gr}\mathcal{A}\) associated to an \(\mathbb{N}\)-filtered associative algebra \(\mathcal{A}\) is commutative iff the commutator obeys to \([\mathcal{A}_{i},\mathcal{A}_{j}]\subseteq\mathcal{A}_{i+j-1}\). 
In such a case, the \(\mathbb{N}\)-filtered associative algebra \(\mathcal{A}\) is called **almost commutative**. Equivalently, an almost-commutative algebra \(\mathcal{A}\) is an \(\mathbb{N}\)-filtered (_i.e._ \(\mathcal{A}_{i}\,\mathcal{A}_{j}\subseteq\mathcal{A}_{i+j}\)) associative algebra whose filtration is such that the suspension \(\mathfrak{A}[1]\) of its commutator algebra \(\mathfrak{A}\) is a filtered Lie algebra (_i.e._ \([\mathfrak{A}_{i+1},\mathfrak{A}_{j+1}]\subseteq\mathfrak{A}_{i+j+1}\)). This equivalent definition makes clear that the \(\mathbb{N}\)-graded algebra \(\mathrm{gr}\mathcal{A}\) of an almost-commutative algebra \(\mathcal{A}\) is endowed with a canonical structure of Schouten algebra, where the Poisson bracket \(\{\,,\,\}\) is inherited from the commutator bracket \([\,,\,]\) via the principal symbol: \[\{\sigma(a),\sigma(b)\}\,:=\,\sigma\big{(}\,[\,a\stackrel{\star}{,}\,b\,]\,\big{)}\,. \tag{86}\] The Schouten algebra \(\mathrm{gr}\mathcal{A}\) associated to an almost-commutative algebra \(\mathcal{A}\) will be called the **classical limit of the almost-commutative algebra**. Accordingly, some authors use the term "quantum" (respectively "classical") Poisson algebra \(\mathcal{A}\) as a synonym for "almost-commutative" (respectively, "Schouten") algebra [20]. Note that an almost-commutative algebra is central iff its classical limit is central. Another equivalent way to characterise an almost-commutative algebra is as an associative algebra \(\mathcal{A}\) with a commutative subalgebra \(\mathcal{A}_{0}\subset\mathcal{A}\) such that all elements of \(\mathcal{A}\) are differential with respect to \(\mathcal{A}_{0}\). Indeed, such an algebra \(\mathcal{A}\) is filtered by the order of elements and one can check that this filtration obeys \([\mathcal{A}_{i},\mathcal{A}_{j}]\subseteq\mathcal{A}_{i+j-1}\). **Equivalent formulations of almost-commutative algebras** The following notions are equivalent: 1. an almost-commutative algebra, 2. an \(\mathbb{N}\)-filtered associative algebra \(\mathcal{A}\) such that one of the following equivalent properties holds: 1. its associated \(\mathbb{N}\)-graded algebra \(\mathrm{gr}\mathcal{A}\) is commutative, 2. its commutator decreases the degree by one: \([\mathcal{A}_{i},\mathcal{A}_{j}]\subseteq\mathcal{A}_{i+j-1}\), 3. the suspension \(\mathfrak{A}[1]\) of its commutator algebra \(\mathfrak{A}\) is a filtered Lie algebra: \([\mathfrak{A}_{i+1},\mathfrak{A}_{j+1}]\subseteq\mathfrak{A}_{i+j+1}\), 3. an associative algebra \(\mathcal{A}\) with a commutative subalgebra \(\mathcal{A}_{0}\subset\mathcal{A}\) such that all elements of \(\mathcal{A}\) are differential with respect to \(\mathcal{A}_{0}\). **Example (Grothendieck algebra) :** The composition product \(\circ\) endows the space \(\mathcal{D}(\mathcal{A}_{0})\) of all differential operators acting on a commutative algebra \(\mathcal{A}_{0}\) with a structure of almost-commutative algebra. This algebra will be called the **Grothendieck algebra of differential operators acting on the commutative algebra \(\mathcal{A}_{0}\)**. These abstract constructions can be illustrated in the two simpler cases of polynomials and formal power series, before turning back to smooth functions. **Example (Polynomial differential operators) :** The Grothendieck algebra \(\mathcal{D}(A)\,:=\,\mathcal{D}\big{(}\,\odot(V^{*})\,\big{)}\) of
differential operators acting on the commutative algebra of polynomial functions on the affine space \(A\) modeled on the vector space \(V\) of finite dimension \(n\) is the almost-commutative algebra of **polynomial differential operators** on the affine space \(A\). The Grothendieck algebra \(\mathcal{D}(A)\) of polynomial differential operators is called the **Weyl algebra**. The Weyl algebra is simple (_i.e._ it has no nontrivial ideal) and central (_i.e._ its center is the field \(\mathbb{K}\) of the underlying commutative algebra \(\mathcal{A}\)), thus all its derivations are inner (_i.e._ they arise from the adjoint action of some element of the algebra). The polynomial differential operators of order \(k\) take the form \(\hat{X}=\sum_{r=0}^{k}\frac{1}{r!}X^{a_{1}\cdots a_{r}}(y)\frac{\partial}{\partial y^{a_{1}}}\cdots\frac{\partial}{\partial y^{a_{r}}}\) with coefficients \(X^{a_{1}\cdots a_{r}}(y)\) which are polynomials in \(y\)'s. The associated graded algebra \(\mathrm{gr}\,\mathcal{D}\big{(}\,\mathbb{R}[y^{a}]\,\big{)}\) is isomorphic to the commutative algebra \(\mathbb{R}[y^{a},p_{b}]\) of polynomials on the cotangent bundle \(T^{*}\mathbb{R}^{n}\cong\mathbb{R}^{n}\oplus\mathbb{R}^{n*}\). **Example (Formal differential operators) :** The Grothendieck algebra \(\mathcal{D}\big{(}\,\overline{\odot}(V^{*})\,\big{)}\) of differential operators acting on the commutative algebra \(\overline{\odot}(V^{*})\) of formal power series at the origin of the vector space \(V\) of finite dimension \(n\) is the almost-commutative algebra of **formal differential operators** at the origin of the vector space \(V\). A formal differential operator of order \(k\) takes the form \(\hat{X}=\sum_{r=0}^{k}\frac{1}{r!}X^{a_{1}\cdots a_{r}}(y)\frac{\partial}{\partial y^{a_{1}}}\cdots\frac{\partial}{\partial y^{a_{r}}}\) with coefficients \(X^{a_{1}\cdots a_{r}}(y)=\sum_{q=0}^{\infty}\frac{1}{q!}X^{a_{1}\cdots a_{r}}_{b_{1}\cdots b_{q}}\,y^{b_{1}}\cdots y^{b_{q}}\) which are formal power series at the origin. The graded algebra \(\mathrm{gr}\,\mathcal{D}\big{(}\,\mathbb{R}[\![y^{a}]\!]\,\big{)}\) in the case of formal power series is isomorphic to the commutative algebra \(\mathbb{R}[\![y^{a}]\!][p_{b}]\) of formal power series in the Cartesian coordinates \(y^{a}\) on the base but of polynomials in the vertical coordinates \(p_{b}\) on \(T^{*}\mathbb{R}^{n}\). In coordinate-independent terms, one has the isomorphism \(\mathrm{gr}\mathcal{D}(\mathcal{A})\cong\overline{\odot}(V^{*})\otimes\odot(V)\) of commutative algebras. The Grothendieck algebra of differential operators acting on the commutative algebra \(C^{\infty}(M)\) of smooth functions on a manifold \(M\) is the almost-commutative algebra \({\cal D}(M)\) of smooth differential operators on the manifold \(M\) endowed with the composition product \(\circ\). Among other things, it is an \(\mathbb{N}\)-filtered associative algebra, _i.e._ the two following properties hold: \[{\cal C}^{\infty}(M)\cong{\cal D}^{0}(M)\subset{\cal D}^{1}(M)\subset{\cal D}^{2}(M)\subset\ldots\subset{\cal D}^{k}(M)\subset{\cal D}^{k+1}(M)\subset\ldots \tag{87}\] and \[{\cal D}^{k}(M)\circ{\cal D}^{l}(M)\subset{\cal D}^{k+l}(M)\,. \tag{88}\] The above infinite sequence (87) of inclusions provides an abstract definition of \({\cal D}(M)\) as the direct limit. 
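The filtration property (88) can be observed on explicit operators. The following sympy sketch (one dimension, illustrative coefficients) also anticipates the almost-commutativity property discussed below: the commutator of two operators has order one less than their composition.

```python
# Hedged sympy illustration: composing operators of orders 1 and 2 yields order 3,
# cf. eq. (88), while their commutator is only of order 2 (one less than the sum).
import sympy as sp

x = sp.symbols('x')
t = sp.Function('t')(x)                          # generic test function

def A(h): return x**3*sp.diff(h, x) + h          # a first-order operator
def B(h): return sp.sin(x)*sp.diff(h, x, 2)      # a second-order operator

def order(expr):
    """Highest order of a derivative of t appearing in expr."""
    expr = sp.expand(expr)
    k = 0
    while expr.has(sp.diff(t, x, k + 1)):
        k += 1
    return k

print(order(A(B(t))))                            # -> 3
print(order(A(B(t)) - B(A(t))))                  # -> 2
```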
Remember that the map \[\hat{\bullet}\,:\,{\cal C}^{\infty}(M)\stackrel{{\sim}}{{\rightarrow}}{\cal D}^{0}(M)\,:\,f\mapsto\hat{f}\,, \tag{89}\] which associates to any function \(f\) its corresponding differential operator \(\hat{f}:g\mapsto f\cdot g\) of order zero, is an isomorphism of commutative algebras mapping the unit function \(1\) to the identity operator \(\hat{1}\) and the pointwise product \(f\cdot g\) of functions to the composition product \(\hat{f}\circ\hat{g}\) of differential operators of order zero (_i.e._ \(\widehat{f\cdot g}=\hat{f}\circ\hat{g}\)). The commutator algebra \({\mathfrak{D}}(M)\) is the space \({\cal D}(M)\) of differential operators endowed with a structure of Lie algebra through the commutator as Lie bracket \([\hat{X},\hat{Y}]\) between two differential operators \(\hat{X}\) and \(\hat{Y}\).18 Notice that \[[\,{\mathfrak{D}}^{k}(M)\,,\,{\mathfrak{D}}^{l}(M)\,]\subset{\mathfrak{D}}^{k+l-1}(M)\,. \tag{90}\] As one can see, the first-order differential operators span the non-abelian Lie subalgebra \({\mathfrak{D}}^{1}(M)\) while the zeroth-order differential operators (_i.e._ the functions) span the abelian Lie subalgebra \({\mathfrak{D}}^{0}(M)\cong C^{\infty}(M)\). Footnote 18: Sometimes, one will not specify which product structure (associative or Lie) is chosen and one will colloquially refer to it as the algebra of differential operators. The \(\mathbb{N}\)-filtered associative algebra \({\cal D}(M)\) is almost-commutative due to (90) and one has the following canonical isomorphisms \[\mbox{gr}\,{\cal D}(M)\cong{\cal S}(M)\cong\Gamma(\odot TM)\,, \tag{91}\] of Schouten algebras. In more concrete terms, there is a one-to-one correspondence between principal symbols (85) of differential operators and symmetric contravariant tensor fields \(X^{\mu_{1}\ldots\mu_{k}}(x)\) relating the two commutative products (the pointwise product of functions on the cotangent bundle and the symmetric product of symmetric contravariant tensor fields) and the two Poisson brackets (the canonical Poisson bracket of functions on the cotangent bundle and the Schouten bracket of symmetric contravariant tensor fields). #### 6.1.5 Differential operators as almost-linear operators A commutative algebra \(\mathcal{A}\) can be seen as a (bi)module over itself (via multiplication). An \(\mathcal{A}\)-linear map from \(\mathcal{A}\) (_i.e._ an operator on \(\mathcal{A}\) which is a morphism of \(\mathcal{A}\)-modules) to an \(\mathcal{A}\)-module is entirely determined by its action on the unit element \(1\in\mathcal{A}\). In particular, the operator \(\hat{f}=f\,\cdot\) is the unique \(\mathcal{A}\)-linear map from \(\mathcal{A}\) to itself, which maps the unit element to the function \(f\), _i.e._ such that \(\hat{f}:1\mapsto f\). **Equivalent formulations of zeroth-order differential operators** Consider a commutative algebra \(\mathcal{A}\). The following notions are equivalent: 1. a differential operator \(\hat{X}\in\mathcal{D}^{0}(\mathcal{A})\) of order zero on \(\mathcal{A}\), 2. an endomorphism \(\hat{X}\in\operatorname{End}_{\mathcal{A}}(\mathcal{A})\) of the \(\mathcal{A}\)-module \(\mathcal{A}\), _i.e._ an operator which is \(\mathcal{A}\)-linear in the sense that \(\hat{X}[f\cdot g]=f\cdot\hat{X}[g]\) for any \(f,g\in\mathcal{A}\), 3. an endomorphism \(\hat{X}\in\operatorname{End}(\mathcal{A})\) of the vector space \(\mathcal{A}\) which is such that \(\hat{X}\circ\hat{f}=\hat{f}\circ\hat{X}\) for any \(f\in\mathcal{A}\). 
The definition of zeroth-order differential operators as \(\mathcal{A}\)-linear endomorphisms motivates a recursive definition of higher-order differential operators. A differential operator \(\hat{X}\in\mathcal{D}^{k}(\mathcal{A})\) on \(\mathcal{A}\) of order \(k\) can be equivalently defined as an endomorphism \(\hat{X}\in\operatorname{End}(\mathcal{A})\) of the vector space \(\mathcal{A}\) which will be said **almost \(\mathcal{A}\)-linear**, in the sense that \([\hat{X},\hat{f}]\in\mathcal{D}^{k-1}(\mathcal{A})\) for any \(f\in\mathcal{A}\), _i.e._ it is \(\mathcal{A}\)-linear up to lower-order terms. In other words, the Grothendieck algebra of differential operators acting on the commutative algebra \(\mathcal{A}\) can be defined as the algebra of almost \(\mathcal{A}\)-linear operators on \(\mathcal{A}\). A ring \(\mathcal{B}\) over a commutative algebra \(\mathcal{A}\) such that the image of the unit map \(i:\mathcal{A}\hookrightarrow\mathcal{B}\) lies in the center of \(\mathcal{B}\), is called an \(\mathcal{A}\)**-algebra**. Any commutative \(\mathcal{A}\)-ring is an \(\mathcal{A}\)-algebra. **Example (Algebra over a field) :** In the particular case when \(\mathcal{A}=\mathbb{K}\) is a field, a \(\mathbb{K}\)-algebra coincides with the usual notion of an associative algebra over \(\mathbb{K}\). An \(\mathbb{N}\)-filtered algebra \(\mathcal{A}\), which is such that its associated \(\mathbb{N}\)-graded algebra \(\mathrm{gr}\mathcal{A}\) is an \(\mathcal{A}_{0}\)-algebra over its component \(\mathcal{A}_{0}\) of degree zero, will be called an **almost \(\mathcal{A}_{0}\)-algebra**. In terms of the commutator, the condition reads \([{\cal A}_{i},{\cal A}_{0}]\subseteq{\cal A}_{i-1}\). Any almost-commutative algebra \({\cal A}\) is an almost \({\cal A}_{0}\)-algebra, but the converse is not true in general. The recursive definition of the Grothendieck algebra can be summarised as follows: the Grothendieck algebra \({\cal D}({\cal A})\) of differential operators on the commutative algebra \({\cal A}\) is the maximal subalgebra of the \({\cal A}\)-ring \({\rm End}({\cal A})\) of endomorphisms of the vector space \({\cal A}\) which is an almost \({\cal A}\)-algebra. #### 6.1.6 Differential operators as infinitesimal automorphisms Let us now introduce the notation \[{\cal L}_{\hat{X}}f\,:=\,\hat{X}[f]\,. \tag{92}\] Motivated by higher-spin gravity, it will be called the **higher-spin Lie derivative \({\cal L}_{\hat{X}}f\) of a function \(f\) along a differential operator \(\hat{X}\)** for reasons that will become clear later. The corresponding Lie algebra morphism (_i.e._\([{\cal L}_{\hat{X}},{\cal L}_{\hat{Y}}]={\cal L}_{[\hat{X},\hat{Y}]}\)) \[{\cal L}\,:\,{\mathfrak{D}}(M)\,\rightarrow{\mathfrak{gl}}\Big{(}\,C^{\infty }(M)\,\Big{)}\,:\,\hat{X}\mapsto{\cal L}_{\hat{X}} \tag{93}\] will be called the **fundamental representation of the algebra of differential operators**. The commutative algebra of functions is the representation space of this fundamental representation. Let us stress the obvious point that the higher-spin Lie derivative along a differential operator is _not_ a derivation of the commutative algebra of functions, except when the differential operator is a vector field (a tautology from the algebraic definition of vector fields as derivations), _i.e._\({\cal L}_{\hat{X}}\notin{\mathfrak{der}}\Big{(}\,C^{\infty}(M)\,\Big{)}\) for \(\hat{X}\notin{\mathfrak{X}}(M)\). 
This is clear from the definition of the higher-spin Lie derivative but our choice of notation and terminology might obscure this point. Nevertheless, one will stick to this choice because it suggests a natural generalisation for the Lie derivative of differential operators. We define the **higher-spin Lie derivative \({\cal L}_{\hat{X}}\hat{Y}\) of a differential operator \(\hat{Y}\) along another differential operator \(\hat{X}\)** from compatibility with the Leibniz rule \({\cal L}_{\hat{X}}\big{(}\,\hat{Y}[f]\,\big{)}=({\cal L}_{\hat{X}}\hat{Y})[f]+\hat{Y}[{\cal L}_{\hat{X}}f]\), _i.e._ we set \[({\cal L}_{\hat{X}}\hat{Y})[f] := {\cal L}_{\hat{X}}\big{(}\,\hat{Y}[f]\,\big{)}-\hat{Y}[{\cal L}_{\hat{X}}f] \tag{94}\] \[=\hat{X}\big{[}\,\hat{Y}[f]\,\big{]}\,-\,\hat{Y}\big{[}\,\hat{X}[f]\,\big{]}\] (95) \[=[\hat{X}\stackrel{\circ}{,}\hat{Y}][f] \tag{96}\] where (92) was used to get (95). The last line (96) results in the mere identification of the higher-spin Lie derivative of a differential operator \(\hat{Y}\) along the differential operator \(\hat{X}\) with their commutator, _i.e._ \({\cal L}_{\hat{X}}\hat{Y}=[\hat{X}\stackrel{\circ}{,}\hat{Y}]\). This generalisation of the Lie derivative leads to the following notation for the **adjoint representation of the algebra of differential operators (on itself)** \[{\cal L}\,:\,{\mathfrak{D}}(M)\,\to{\sf inn}\Big{(}\,{\cal D}(M)\,\Big{)}\,:\,\hat{X}\mapsto{\cal L}_{\hat{X}} \tag{97}\] via inner derivations of the almost-commutative algebra of differential operators. Locally, all derivations of the almost-commutative algebra \({\cal D}(M)\) of differential operators are inner derivations.19 Therefore, these higher-spin Lie derivatives essentially exhaust all infinitesimal symmetries of the almost-commutative algebra of differential operators. The associative algebra of differential operators is comparable to the Lie algebra of vector fields: they are both self-referential objects in that they coincide with their own collection of infinitesimal symmetries. Strictly speaking, this is only true locally for \({\cal D}(M)\) while this is true globally for \({\mathfrak{X}}(M)\). Footnote 19: To be more precise, the number of linearly independent outer derivations of the almost-commutative algebra of differential operators on the manifold \(M\) is equal to the first Betti number of \(M\) [19]. Let us stress an important subtlety and potential source of confusion related to the fact that any function \(f\in C^{\infty}(M)\) can be seen as a differential operator \(\hat{f}\in{\cal D}^{0}(M)\) of order zero. As a differential operator, \(\hat{f}\) transforms in the adjoint representation: its higher-spin Lie derivative along \(\hat{X}\in{\cal D}^{k}(M)\) is \({\cal L}_{\hat{X}}\hat{f}=[\hat{X},\hat{f}]\in{\cal D}^{k-1}(M)\). When \(\hat{X}\) is a higher-order differential operator, then the higher-spin Lie derivative of \(\hat{f}\) along \(\hat{X}\) is _not_ a differential operator of order zero, _i.e._ \({\cal L}_{\hat{X}}\hat{f}\notin{\cal D}^{0}(M)\) in general. However, one may extract a function by acting on the unit element. More precisely, the relation between the fundamental and adjoint representation is as follows: \({\cal L}_{\hat{X}}f=({\cal L}_{\hat{X}}\hat{f})[1]\). 
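This subtlety is easy to see on an explicit example. The sympy sketch below uses a second-order operator and a function chosen purely for illustration:

```python
# Hedged sympy sketch: for a second-order operator X, L_X f^ = [X, f^] is a first-order
# operator (so not a function), yet acting with it on the constant 1 recovers L_X f = X[f].
import sympy as sp

x = sp.symbols('x')
t = sp.Function('t')(x)                          # generic test function
f = sp.sin(x)

def X(h): return sp.exp(x)*sp.diff(h, x, 2)      # an illustrative second-order operator

adj = sp.expand(X(f*t) - f*X(t))                 # (L_X f^)[t] = [X, f^][t]
print(adj.has(sp.diff(t, x)))                    # -> True : [X, f^] is genuinely of first order

one = sp.Integer(1)
print(sp.simplify((X(f*one) - f*X(one)) - X(f))) # -> 0 : L_X f = (L_X f^)[1]
```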
#### 6.1.7 No-go theorem against (naive) higher-spin diffeomorphisms The almost-commutative algebra of differential operators and the Schouten algebra of principal symbols admit somewhat few finite symmetries, while there is a plethora of infinitesimal symmetries (see Section 7 of [20] and Section 8 of [19] for more details). The finite automorphisms of the almost-commutative algebra \({\cal D}(M)\) of differential operators are very scarce with respect to the infinitesimal automorphisms. In fact, the finite automorphisms of the almost-commutative algebra \({\cal D}(M)\) essentially coincide with internal Abelian gauge symmetries (as in Maxwell electromagnetism) and standard diffeomorphisms of the manifold \(M\) (as in general relativity), for which the commutative subalgebra \({\cal D}^{0}(M)\) of functions is an invariant subspace. While the infinitesimal automorphisms (the higher-spin Lie derivative) are perfectly well defined (and just a fancy name for the adjoint action of differential operators) however their formal exponentiation is not (see _e.g._[22, Sect.2] for more details) in general, except for the inner automorphisms generated by first-order differential operators. This prevents a too naive definition of "higher-spin diffeomorphisms" (attempting to overcome this obstacle was the focus of [22]). One may expect these subtleties to be related to the elusive (non)locality properties of higher-spin gravity. Let us stress that this feature is not specific to the almost-commutative algebra \({\cal D}(M)\) of differential operators, the same holds for the automorphisms of the Schouten algebra \({\cal S}(M)\) of symbols. **Theorem (Grabowski & Poncin) :**_Any one-parameter group of automorphisms of the associative algebra \({\cal D}(M)\) of differential operators (respectively, of the symplectic algebra \({\cal S}(M)\) of symbols) is generated by a first-order differential operator (respectively, by a symbol of degree one)._ By contraposition, this result of [20] can be expressed equivalently as a no-go theorem. **No-go theorem :**_Higher-spin Lie derivatives along a higher-order differential operator on \(M\) (respectively, higher-degree Hamiltonian vector fields on \(T^{*}M\)) cannot be integrated to one-parameter groups of automorphisms of the associative algebra \({\cal D}(M)\) of differential operators (respectively, of the symplectic algebra \({\cal S}(M)\) of symbols)._ Concretely, the obstruction is that the polynomiality in the momenta is _not_ preserved by the formal exponentiation of symplectic vector fields on \(T^{*}M\), even if this symplectic vector field is itself polynomial in the momenta. This problem was addressed at length in [22] so it will not be reviewed here. In any case, not much is actually known about automorphism groups of algebras of differential operators. These infinite-dimensional groups have surprising properties and remain difficult mathematical objects of current study, even in the polynomial case, as examplified by the following two conjectures. 
**Example (Kontsevich conjecture) :** It was argued in 2005 that the automorphism group of the Weyl algebra \({\cal D}(A)\), the associative algebra of polynomial differential operators on a finite-dimensional affine space \(A\), must be isomorphic to the automorphism group of the associated Poisson algebra \({\cal S}(A)\) of polynomial symbols [23], \[Aut\big{(}\,{\cal D}(A)\,\big{)}\,\cong\,Aut\big{(}\,{\cal S}(A)\,\big{)}\,, \tag{98}\] but it was only in 2018 that a complete proof of this conjecture was outlined [24]. Note that, by construction, the automorphism group of the Poisson algebra of polynomial symbols is isomorphic to the group of polynomial symplectomorphisms (_i.e._ polynomial diffeomorphisms preserving the symplectic structure of the cotangent bundle \(T^{*}A\)). The isomorphism (98) may come as a surprise because it is not true at the level of infinitesimal automorphisms: the Lie algebra of derivations of the associative algebra \({\cal D}(A)\) of polynomial differential operators is _not_ isomorphic to the Lie algebra of derivations of the Poisson algebra \({\cal S}(A)\) of polynomial symbols, \[\mathfrak{der}\big{(}\,{\cal D}(A)\,\big{)}\,\not\cong\,\mathfrak{der}\big{(}\,{\cal S}(A)\,\big{)} \tag{99}\] although they are isomorphic as vector spaces [23].20 Footnote 20: Note that all derivations of the Poisson algebra \({\cal S}(A)\) and of the associative algebra \({\cal D}(A)\) are inner (hence they are polynomial Hamiltonian vector fields and higher-spin Lie derivatives along polynomial differential operators, respectively). **Example (Dixmier conjecture) :** In 1968, Dixmier asked [25] if any endomorphism of the Weyl algebra of polynomial differential operators on a finite-dimensional affine space \(A\) is an automorphism. In other words, whether any endomorphism of the Weyl algebra is invertible. A positive answer to this question is referred to as the Dixmier conjecture. This conjecture is equivalent to its Poisson counterpart (sometimes called the Poisson conjecture [26]): that any endomorphism of the Poisson algebra of polynomial symbols is an automorphism. If the Dixmier conjecture is true, this means that the endomorphism monoid \(\mbox{End}\big{(}\,{\cal D}(A)\,\big{)}\) and the automorphism group \(Aut\big{(}\,{\cal D}(A)\,\big{)}\) of the Weyl algebra coincide. Let us summarise again the situation in the smooth case: Firstly, if one requires that the filtration/graduation be preserved, then the only inner derivations of the almost-commutative algebra \({\cal D}(M)\) of differential operators (respectively, of the Schouten algebra \({\cal S}(M)\) of principal symbols) which are integrable to one-parameter groups of automorphisms of the same filtered (respectively, graded) algebra(s), are the (complete) Lie derivatives along vector fields on \(M\). Secondly, even if one does not require _a priori_ that the filtration/graduation be preserved, the only inner derivations of the associative algebra \({\cal D}(M)\) of differential operators (respectively, of the Poisson algebra \({\cal S}(M)\) of symbols) which are integrable to one-parameter groups of automorphisms of the same associative (respectively, Poisson) algebra(s), are again the (complete) Lie derivatives along vector fields on \(M\). 
In other words, the automorphisms of these associative/Poisson algebras must necessarily preserve their filtration/graduation and, consequently, higher-spin Lie derivatives along higher-derivative differential operators (respectively, Hamiltonian vector fields with higher-degree symbols as Hamiltonians) are _not_ integrable to one-parameter groups of automorphisms of the associative algebra \({\cal D}(M)\) of differential operators (respectively, of the Poisson algebra \({\cal S}(M)\) of symbols).21 If a Lie algebra22 is integrable to a Lie group, then one would expect (for any reasonable topology) that all its inner derivations are integrable to one-parameter groups of automorphisms of the Lie algebra (via the exponential map). Accordingly, an important corollary of these negative results is the strong no-go theorem of Grabowski and Poncin [20, Cor.4]: Footnote 21: A way out is that these inner derivations _can_ be integrated to one-parameter groups of automorphisms of larger algebras, see _e.g._ the two proposals in [22]. Footnote 22: Note that the so-called “Lie’s third theorem” stating that every finite-dimensional real Lie algebra \(\mathfrak{g}\) is associated to a real Lie group \(G\) does not hold in general for infinite-dimensional Lie algebras. **No-go theorem (Grabowski & Poncin) :**_The two infinite-dimensional Lie algebras, \(\mathfrak{D}(M)\) of differential operators and \({\cal S}(M)\) of symbols on a manifold \(M\), are not integrable to (infinite dimensional) Lie groups (of which they are the Lie algebras)._ This no-go theorem may be responsible for the elusive locality properties of higher-spin interactions. In any case, it makes manifest that if finite higher-spin symmetries are taken seriously, then they require to leave the realm of operators with bounded number in the derivatives (the landmark of locality). A toy-model example [22, Sect.6] of completion of the almost-commutative algebra of differential operators bypassing the no-go theorem of Grabowski and Poncin is reviewed in the next subsection. #### 6.1.8 Yes-go theorem on (formal) higher-spin diffeomorphisms One simple trick to go beyond differential operators is to make use of a formal deformation parameter, say \(\hbar\). Let \(V\) be a vector space over the field \(\mathbb{K}\). Then \(V[\![\hbar]\!]\) denotes the vector space of formal power series in \(\hbar\) with coefficients that are elements of \(V\). The vector space \(V[\![\hbar]\!]\) is a \(\mathbb{K}[\![\hbar]\!]\)-module. A \(\mathbb{K}[\![\hbar]\!]\)-linear map \(U\in\operatorname{End}_{\mathbb{K}[\![\hbar]\!]}\!\left(\,V[\![\hbar]\!]\, \right)\) is said \(\hbar\)**-linear** and is uniquely determined from its restriction \(T=U|_{V}\,:\,V\to V[\![\hbar]\!]\) to the subspace \(V\subset V[\![\hbar]\!]\) of power series independent of \(\hbar\). This restriction is a \(\mathbb{K}\)-linear map from \(V\) to \(V[\![\hbar]\!]\), hence it can be thought of as an element of the space \(\operatorname{End}(V)[\![\hbar]\!]\) of formal power series in \(\hbar\) with coefficients that are endomorphisms of the vector space \(V\). This leads to the isomorphism \(\operatorname{End}_{\mathbb{K}[\![\hbar]\!]}\!\left(\,V[\![\hbar]\!]\, \right)\cong\operatorname{End}(V)[\![\hbar]\!]\) of associative algebras. With a slight abuse of notation, the \(\mathbb{K}\)-linear map \(T\) and its unique \(\hbar\)-linear extension \(U\) will be denoted by the same symbol from now on. 
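To make the \(\hbar\)-linear extension concrete, here is a small sympy sketch, truncated at a finite order in \(\hbar\) and with an arbitrarily chosen map \(T\) (none of these names appear in the text):

```python
# Hedged sympy sketch: an hbar-linear endomorphism of V[[hbar]] (truncated at order N)
# is reconstructed from its restriction T : V -> V[[hbar]] to hbar-independent elements.
import sympy as sp

hbar, x = sp.symbols('hbar x')
N = 3                                                  # truncation order in hbar

def truncate(expr):
    return sum(sp.expand(expr).coeff(hbar, n)*hbar**n for n in range(N + 1))

def T(v):                                              # illustrative K-linear map, V = polynomials in x
    return v + hbar*sp.diff(v, x)

def U(w):                                              # its unique hbar-linear extension to V[[hbar]]
    coeffs = [sp.expand(w).coeff(hbar, n) for n in range(N + 1)]
    return truncate(sum(hbar**n*T(c) for n, c in enumerate(coeffs)))

v = x**2 + hbar*x**3
print(sp.expand(U(hbar*v) - truncate(hbar*U(v))))      # -> 0 : U is hbar-linear
print(U(x**2))                                         # equals T(x**2) = x**2 + 2*hbar*x
```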
Let us assume that \(V\) is \(\mathbb{N}\)-filtered and denote by \(V\,\langle\!\langle\hbar\rangle\!\rangle\) the vector space spanned by formal power series in \(\hbar\) with coefficients which are of degree _smaller or equal_ to the power of \(\hbar\), \[v(\hbar)\in V\,\langle\!\langle\hbar\rangle\!\rangle\quad\Longleftrightarrow\quad v(\hbar)\,=\,\sum_{n=0}^{\infty}v_{n}\,\hbar^{n}\quad\text{with}\quad v_{n}\in V_{n}\,. \tag{100}\] It will be called the \(\hbar\)**-filtered extension of the \(\mathbb{N}\)-filtered space \(V\)**. Let us now assume that \(V\) is \(\mathbb{N}\)-graded and denote by \(V\,\|\hbar\|\) the vector space spanned by formal power series in \(\hbar\) with coefficients which are of grading _equal_ to the power of \(\hbar\). It will be called the \(\hbar\)**-graded extension of the \(\mathbb{N}\)-graded space \(V\)**. Consider an associative algebra \(\mathcal{A}\). The \(\hbar\)-linear extension of its product endows the vector space \(\mathcal{A}[\![\hbar]\!]\) with a structure of \(\mathbb{K}[\![\hbar]\!]\)-algebra. Furthermore, if the associative algebra is \(\mathbb{N}\)-filtered (respectively, \(\mathbb{N}\)-graded) then the \(\hbar\)-linear extension of the product of the associative algebra \(\mathcal{A}\) endows the vector space \(\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\subset\mathcal{A}[\![\hbar]\!]\) (respectively, \(\mathcal{A}\,\|\hbar\|\subset\mathcal{A}[\![\hbar]\!]\)) with a structure of associative (sub)algebra. The elements of the form \(a(\hbar)=\hbar\;b(\hbar)\), where \(b(\hbar)\in\mathcal{A}[\![\hbar]\!]\), form a proper ideal \(\hbar\,\mathcal{A}[\![\hbar]\!]\subset\mathcal{A}[\![\hbar]\!]\). The \(\hbar\)-linear extension of the product of the associative algebra \(\mathcal{A}\) is given explicitly by \[a(\hbar)\,b(\hbar)\,=\,\sum_{k=0}^{\infty}\Big{(}\sum_{n+m=k}a_{n}\,b_{m}\Big{)}\,\hbar^{k}\qquad\text{for}\quad a(\hbar)=\sum_{n=0}^{\infty}a_{n}\,\hbar^{n}\,,\quad b(\hbar)=\sum_{m=0}^{\infty}b_{m}\,\hbar^{m}\,.\] Similarly, the elements of the same form but where \(b(\hbar)\in\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\) form a proper ideal \(\hbar\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\subset\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\). The quotient algebra of the \(\hbar\)-filtered extension of the filtered associative algebra \(\mathcal{A}\) by the ideal \(\hbar\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\) is isomorphic to the \(\hbar\)-graded extension of the associated \(\mathbb{N}\)-graded algebra \(\mathcal{B}={\rm gr}\mathcal{A}\), \[\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\,/\,\hbar\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\,\cong\,{\rm gr}\mathcal{A}\,\|\hbar\|\,. \tag{101}\] An \(\mathbb{N}\)-filtered associative algebra \(\mathcal{A}\) is almost-commutative iff its \(\hbar\)-filtered extension is commutative modulo \(\hbar\), \[\left[\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\,,\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\,\right]\subseteq\hbar\,\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\,, \tag{102}\] which is equivalent to saying that the quotient algebra (101) is commutative. A Poisson algebra \(\mathcal{B}\) is a Schouten algebra iff it is an \(\mathbb{N}\)-graded vector space such that \(\mathcal{B}\,\|\hbar\|\) is an algebra for the \(\hbar\)-linear extension of the product of the commutative algebra \(\mathcal{B}\) and the \(\hbar\)-linear extension of the Poisson bracket obeys \[\left\{\,\mathcal{B}\,\|\hbar\|\,,\,\mathcal{B}\,\langle\!\langle\hbar\rangle\!\rangle\,\right\}\subseteq\hbar\,\mathcal{B}\,\|\hbar\|\,. \tag{103}\]
The quotient algebra of the \(\hbar\)-filtered extension of an almost-commutative algebra \({\cal A}\) by the ideal \(\hbar\,{\cal A}\,\langle\!\langle\hbar\rangle\!\rangle\) is endowed with a structure of Poisson algebra via the commutative product induced from the associative product and via the Poisson bracket induced from \({1\over\hbar}[\,\,,\,]\). Furthermore, this quotient is isomorphic to the \(\hbar\)-graded extension of the associated Schouten algebra \({\cal B}={\rm gr}{\cal A}\). Let \({\cal A}\) be a commutative algebra. The \(\mathbb{K}[\![\hbar]\!]\)-subalgebra \({\cal B}\subset{\rm End}({\cal A})[\![\hbar]\!]\) of \(\hbar\)-linear endomorphisms of the vector space \({\cal A}[\![\hbar]\!]\) which are \({\cal A}\)-linear modulo \(\hbar\), _i.e._ \[\left[\,{\cal B}\,\langle\!\langle\hbar\rangle\!\rangle\,,\,{\cal D}^{0}({\cal A})\,\right]\subset\hbar\,{\cal B}\,\langle\!\langle\hbar\rangle\!\rangle\,, \tag{104}\] is the \(\hbar\)-filtered extension of the Grothendieck algebra of differential operators on \({\cal A}\), as can be checked by spelling out the condition (104) in powers of \(\hbar\). Remember that for any associative algebra \({\cal A}\), one can form the Lie algebra \({\mathfrak{A}}\) by endowing the vector space \({\cal A}\) with the commutator as Lie bracket. Considering an almost-commutative algebra \({\cal A}\), one can define the following representation of the Lie algebra \(\mathfrak{A}\,\langle\!\langle\hbar\rangle\!\rangle\) on the associative algebra \({\cal A}\,\langle\!\langle\hbar\rangle\!\rangle\), \[ad^{\hbar}\,:\,\mathfrak{A}\,\langle\!\langle\hbar\rangle\!\rangle\to\mathfrak{der}\big{(}\,{\cal A}\,\langle\!\langle\hbar\rangle\!\rangle\,\big{)}\,:\,b\,\mapsto\,ad^{\hbar}_{b}\,, \tag{105}\] where \[ad_{b}^{\hbar}\,:=\,{1\over\hbar}\,ad_{b} \tag{106}\] is an \(\hbar\)-linear derivation of the associative algebra. **Technical Lemma [22] :** Consider an almost-commutative algebra \({\cal A}\). Let us assume that the adjoint action \(ad|_{\mathfrak{A}_{1}}\,:\,\mathfrak{A}_{1}\to\mathfrak{der}(\mathcal{A})\) of the Lie subalgebra \(\mathfrak{A}_{1}\subset\mathfrak{A}\) of degree one is integrable in the sense that there are one-parameter groups of automorphisms of the almost-commutative algebra \(\mathcal{A}\) with elements of the form \(\exp(\,t\,ad_{a})\in Aut(\mathcal{A})\) for some \(a\in\mathcal{A}_{1}\). Then the action \(ad^{\hbar}\) of the Lie algebra \(\mathfrak{A}\,\langle\!\langle\hbar\rangle\!\rangle\) on the associative algebra \(\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\) is essentially integrable, in the sense that all elements \(a(\hbar)=\sum_{n=1}^{\infty}a_{n}\,\hbar^{n}\in\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\cap\hbar\,\mathcal{A}[\![\hbar]\!]\) generate one-parameter groups of automorphisms of the associative algebra \(\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle\) of the form \(\exp(\,t\,ad^{\hbar}_{a})\in Aut(\mathcal{A}\,\langle\!\langle\hbar\rangle\!\rangle)\) when the coefficient \(a_{1}\) is integrable in the previous sense. This lemma applies in particular to the Grothendieck algebra of differential operators on any commutative algebra. Therefore, a direct corollary is that the no-go theorem of Grabowski and Poncin can be bypassed by considering instead the \(\hbar\)-filtered completion \(\mathcal{D}(M)\,\langle\!\langle\hbar\rangle\!\rangle\) of the associative algebra \(\mathcal{D}(M)\) of differential operators.
It is spanned by formal power series in \(\hbar\) with coefficients that are differential operators on \(M\) of order smaller or equal to the power of \(\hbar\), \[\hat{X}_{\hbar}\,=\,\sum_{r=0}^{\infty}\hat{X}_{r}\,\hbar^{r}\,,\qquad\hat{X}_{r}\in\mathcal{D}^{r}(M)\,. \tag{107}\] They were called **almost-differential operators** on the manifold \(M\) in [22]. **Yes-go proposition :**_Any almost-differential operator \(\hat{X}_{\hbar}\in\mathcal{D}(M)\,\langle\!\langle\hbar\rangle\!\rangle\cap\hbar\,\mathcal{D}(M)[\![\hbar]\!]\) is locally integrable to a one-parameter group of automorphisms of the algebra \(\mathcal{D}(M)\,\langle\!\langle\hbar\rangle\!\rangle\) of almost-differential operators._ ### Higher-order jets #### 6.2.1 Higher-order contact ideals Consider a point \(m\) of \(M\) and let \(k\) be a non-negative integer (or infinity). The commutative ideal \(\mathcal{I}^{k}(m)\) of \(C^{\infty}(M)\) is spanned by the functions \(f\) such that \((\partial_{\mu_{1}}\ldots\partial_{\mu_{r}}f)|_{m}=0\) for \(0\leq r\leq k\) (respectively, for all \(r\) when \(k=\infty\)). It will be called the **contact ideal of order \(k\) at the point \(m\)**. Notice that, although one would make use of a specific coordinate system to write down these partial derivatives, the commutative ideal \(\mathcal{I}^{k}(m)\) does not depend on the choice of coordinates, as can be checked explicitly. Therefore, all the following notions will also be coordinate-free, although this may not be obvious at first sight. Since one deals with smooth functions, one may easily check that the order-\(k\) contact ideal is identical to the \((k+1)\)-fold pointwise power of the maximal ideal \(\mathcal{I}^{0}(m)\): \[\mathcal{I}^{k}(m)\,=\,\Big{(}\,\mathcal{I}^{0}(m)\,\Big{)}^{k+1}\,. \tag{108}\] While the maximal ideal \({\cal I}^{0}(m)\) captures algebraically the geometric notion of the point \(m\) on the manifold, the commutative ideal \({\cal I}^{k}(m)\) captures something more refined: the infinitesimal vicinity of the point \(m\) till order \(k\). One can see the structure algebra as the contact ideal of order \(-1\), _i.e._ introduce the notation \({\cal I}^{-1}(m):=C^{\infty}(M)\) in order to have the following infinite sequence of canonical inclusions \[\ldots\hookrightarrow{\cal I}^{k+1}(m)\hookrightarrow{\cal I}^{k}(m)\hookrightarrow\ldots\hookrightarrow{\cal I}^{1}(m)\hookrightarrow{\cal I}^{0}(m)\hookrightarrow{\cal I}^{-1}(m)\,, \tag{109}\] of vector spaces, whose inverse limit defines the contact ideal \({\cal I}^{\infty}(m)\) of infinite order at \(m\). Therefore \(C^{\infty}(M)\) is a \({\mathbb{Z}}\)-filtered vector space, with decreasing filtration, and the corresponding \({\mathbb{N}}\)-graded vector space is isomorphic to the symmetric tensor algebra \(\odot T^{*}_{m}M\) of the cotangent space, _i.e._ \[\odot^{k+1}T^{*}_{m}M\,\cong\,{\cal I}^{k}(m)\,/\,{\cal I}^{k+1}(m)\,. \tag{110}\] The contact ideal \({\cal I}^{k}(m)\) of order \(k\) is a commutative ideal of the structure algebra \(C^{\infty}(M)\), so it defines an equivalence relation among functions \(f\), whose equivalence classes are denoted by \(j^{k}_{m}f\) and are called **jets of order \(k\) at the point \(m\)** (or simply \(k\)-jets) of functions. By definition, two functions \(f\) and \(g\) define the same \(k\)-jet \(j^{k}_{m}f\) at \(m\) iff all their derivatives till order \(k\) have the same value at \(m\).
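As a simple one-dimensional illustration of this equivalence relation, take \(M=\mathbb{R}\) and \(m=0\): the functions \(f(x)=\sin x\) and \(g(x)=x\) define the same \(2\)-jet at the origin, since \[f(0)=g(0)=0\,,\qquad f^{\prime}(0)=g^{\prime}(0)=1\,,\qquad f^{\prime\prime}(0)=g^{\prime\prime}(0)=0\,,\] but distinct \(3\)-jets, because \(f^{\prime\prime\prime}(0)=-1\neq 0=g^{\prime\prime\prime}(0)\). Equivalently, \(f-g=\sin x-x=-\tfrac{x^{3}}{6}+\ldots\) belongs to \({\cal I}^{2}(0)=\big{(}{\cal I}^{0}(0)\big{)}^{3}\) but not to \({\cal I}^{3}(0)\).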
The quotient \[J^{k}_{m}M\,:=\,C^{\infty}(M)\,/\,{\cal I}^{k}(m) \tag{111}\] is the \(k\)**-jet space** at \(m\). The infinite-jet space \(J^{\infty}_{m}M\) will often be called simply "jet space at \(m\)" for short. Since the \(k\)-jet spaces are defined as the quotient of a commutative algebra by an ideal, they are commutative algebras as well. In order to emphasise this property, they will sometimes be referred to as \(k\)**-jet space algebras**. However, they will be more often called jet spaces because it is their structure of vector spaces which is usually the most relevant. #### 6.2.2 Truncated polynomials and formal power series Let \(V\) be a finite-dimensional vector space. Consider the symmetric algebra \(\odot(V^{*})\) (thought of as the commutative algebra of polynomials on \(V\)) with maximal ideal \({\mathfrak{m}}=\odot^{>0}(V^{*})\) of symmetric tensors of non-vanishing rank (thought of as the contact ideal \({\cal I}(0)\) of polynomials vanishing at the origin). Its \((k+1)\)-th power \({\mathfrak{m}}^{k+1}\cong\odot^{>k}V^{*}\) is spanned by symmetric tensors of rank higher than \(k\) (thought of as the \(k\)-contact ideal \({\cal I}^{k}(0)\) of polynomials vanishing till order \(k\) at the origin). The quotient \[J^{k}_{0}V\,:=\,\odot(V^{*})\,/\,\odot^{>k}\,(V^{*}) \tag{112}\] is the \(k\)-jet space algebra at the origin. The (inverse) limit \(J_{0}^{\infty}V\) of these quotients for \(k\to\infty\) is a local algebra which can be thought of as the commutative algebra \(\overline{\odot}(V^{*})\) of formal power series at the origin of \(V\). A theorem of Borel states that any formal power series is the Taylor series of a smooth function [27]. Therefore, the commutative algebra of formal power series at the origin of the vector space \(V\) is isomorphic to the commutative algebra of smooth functions on \(V\) quotiented by the contact ideal of infinite order \[J_{0}^{\infty}V\,\cong\,{\cal C}^{\infty}(V)\,/\,{\cal I}^{\infty}(0)\,. \tag{113}\] A finite-dimensional local algebra \({\cal A}\) is called a **Weil algebra** [28] (see also [13, Sec.35]). Any Weil algebra \({\cal A}\) is a direct sum \({\cal A}={\mathbb{R}}\oplus{\cal N}\), where the maximal ideal \({\cal N}\subset{\cal A}\) is nilpotent (_i.e._ \({\cal N}^{k}=0\) for some finite \(k\in{\mathbb{N}}\)). For any Weil algebra \({\cal A}\), there exists a finite-dimensional vector space \(V\) such that \({\cal A}\) is a quotient of the local algebra \(\overline{\odot}(V^{*})\cong J_{0}^{\infty}V\) of formal power series at the origin, by a commutative ideal \({\cal I}\) of finite codimension. **Example (Truncated polynomials) :** The \(k\)-jet space algebra \(J_{0}^{k}V\) at the origin of the vector space \(V\) is the quotient \(J_{0}^{\infty}V\,/\,{\cal I}^{k+1}(0)\) of the local algebra \(J_{0}^{\infty}V\) of power series at the origin by the \((k+1)\)-fold power of its maximal ideal \({\cal I}(0)\). The \(k\)-jet space algebras \(J_{0}^{k}V\) are important examples of Weil algebras. Their elements are equivalence classes of power series modulo terms of order \(k+1\), which are sometimes called **truncated polynomials** of degree \(k\).
In order to see this more concretely, let us introduce Cartesian coordinates \(\varepsilon^{\mu}\) on \({\mathbb{R}}^{n}\); then the \(k\)-jet space algebra \(J_{0}^{k}{\mathbb{R}}^{n}\) at the origin of \({\mathbb{R}}^{n}\) is isomorphic to the commutative algebra of truncated polynomials of degree \(k\) in the variable \(\varepsilon^{\mu}\), _i.e._ power series modulo terms of order \(k+1\), \[J_{0}^{k}{\mathbb{R}}^{n}\,\cong\,{\mathbb{R}}[\varepsilon^{\mu}]\,/\,\Big{(}\,{\mathfrak{m}}\big{(}\,{\mathbb{R}}[\varepsilon^{\mu}]\,\big{)}\,\Big{)}^{k+1}\,. \tag{114}\] A convenient representation for a \(k\)-jet at \(m\) of a smooth function is as the Taylor expansion of degree \(k\) (for \(k=\infty\), this representation is actually a formal power series). It is uniquely defined by all its derivatives at \(m\) till order \(k\). A compact expression for a \(k\)-jet is as an equivalence class of power series \(\phi(\varepsilon)\) in the auxiliary variable \(\varepsilon^{\mu}\): \[\phi(\varepsilon)+{\cal O}\big{(}\,|\varepsilon|^{k+1}\,\big{)}\,=\,\sum_{r=0}^{k}\frac{1}{r!}\,\,\phi_{\mu_{1}\ldots\mu_{r}}(x)\,\varepsilon^{\mu_{1}}\ldots\varepsilon^{\mu_{r}}\quad\mbox{modulo}\quad|\varepsilon|^{k+1}\,. \tag{115}\] For \(k=\infty\), the extra term ("modulo...") can be consistently dropped in the previous expression. The equivalence class (115) can be seen as a Taylor series at \(m\) modulo terms of order \(\varepsilon^{k+1}\). The product is well defined on the above equivalence classes (truncated polynomials) but is not preserved by a choice of representatives (genuine polynomials) since the product of two polynomials of degree \(k\) is a polynomial of degree \(2k\) rather than \(k\). #### 6.2.3 Higher-order jet bundles The \(k\)-**jet bundle** is the manifold \(J^{k}M=\bigcup_{m}J^{k}_{m}M\) and it has local coordinates \[(\,x^{\nu}\,,\,\phi\,,\,\phi_{\mu}\,,\,\phi_{\mu_{1}\mu_{2}}\,,\,\ldots\,,\,\phi_{\mu_{1}\cdots\,\mu_{k}}\,)\,.\] The transformation law of a \(k\)-jet under the coordinate transformation \(x^{\mu}\mapsto x^{\prime\mu}(x)\) is as follows: \[\phi^{\prime}=\phi\,,\quad\phi^{\prime}_{\mu}=\frac{\partial x^{\nu}}{\partial x^{\prime\mu}}\,\phi_{\nu}\,,\quad\phi^{\prime}_{\mu_{1}\mu_{2}}=\frac{\partial x^{\nu_{1}}}{\partial x^{\prime\mu_{1}}}\,\frac{\partial x^{\nu_{2}}}{\partial x^{\prime\mu_{2}}}\,\phi_{\nu_{1}\nu_{2}}+\frac{\partial^{2}x^{\nu}}{\partial x^{\prime\mu_{1}}\partial x^{\prime\mu_{2}}}\,\phi_{\nu}\,,\quad\cdots \tag{116}\] As one can see, the component \(\phi_{\mu_{1}\cdots\,\mu_{r}}\) with \(r\) indices does not transform as a rank-\(r\) symmetric covariant tensor because the lower-rank components contribute to the transformation law of a given component of a \(k\)-jet. In other words, only the collection of all its non-trivial components \(\phi_{\mu}\), \(\phi_{\mu_{1}\mu_{2}}\),..., \(\phi_{\mu_{1}\cdots\,\mu_{k}}\) is a coordinate-free object (the rank-zero component can of course be separated). There is an infinite tower of vector bundle fibrations: \[\ldots\twoheadrightarrow J^{k+1}M\twoheadrightarrow J^{k}M\twoheadrightarrow\ldots\twoheadrightarrow J^{2}M\twoheadrightarrow J^{1}M\twoheadrightarrow J^{0}M\cong M\times\mathbb{R}\,, \tag{117}\] the inverse limit of which provides an abstract definition of the infinite-jet bundle \(JM\).23 The kernel of the projection \(J^{k}M\twoheadrightarrow J^{k-1}M\) is the bundle of symmetric covariant tensors of rank \(k\), i.e. \(\odot^{k}\!T^{*}M\), in agreement with the previous isomorphism for the quotients of contact ideals.
Footnote 23: Although it is very tempting, one will refrain from calling the set of jets \(JM\) the "jet set" of the manifold \(M\). The sections of the \(k\)th jet bundle \(J^{k}M\) will be called \(k\)-**jet fields**. The commutative algebra of \(k\)-jet fields will be denoted \({\cal J}^{k}(M):=\Gamma(J^{k}M)\). As mentioned above, a convenient representative for a \(k\)-jet is a truncated polynomial of degree \(k\) (or a formal power series for \(k=\infty\)). A \(k\)-jet field \(\phi\) is therefore compactly written as a function of two variables \[\phi(x;\varepsilon)+{\cal O}(|\varepsilon|^{k+1})\,=\,\sum_{r=0}^{k}\frac{1}{r!}\,\,\phi_{\mu_{1}\ldots\mu_{r}}(x)\,\varepsilon^{\mu_{1}}\ldots\varepsilon^{\mu_{r}}\quad\mbox{modulo}\quad|\varepsilon|^{k+1}\,. \tag{118}\] A very important point is that, for a generic \(r\), the coefficient \(\phi_{\mu_{1}\ldots\mu_{r}}(x)\) in the \(k\)-jet field \(\phi(x;\varepsilon)\) is _not_ a symmetric covariant tensor field of rank \(r\), although the notation may suggest this misleading interpretation. This subtlety is more obvious if one considers the \(k\)th **prolongation** of a function \(f\), that is, the section \(j^{k}f\) of the \(k\)th jet bundle \(J^{k}M\) whose local expression is the Taylor expansion of order \(k\): \[(j^{k}f)(x;\varepsilon) = \sum_{r=0}^{k}\frac{1}{r!}\ \partial_{\mu_{1}}\dots\partial_{\mu_{r}}f(x)\,\varepsilon^{\mu_{1}}\dots\varepsilon^{\mu_{r}}\quad\mbox{modulo}\quad|\varepsilon|^{k+1} \tag{119}\] \[= f(x+\varepsilon)\quad\mbox{modulo}\quad|\varepsilon|^{k+1}\,, \tag{120}\] where the last equality holds for any finite \(k\) by virtue of Taylor's theorem. Obviously, the coefficients \(\partial_{\mu_{1}}\dots\partial_{\mu_{r}}f(x)\) do _not_ transform as symmetric covariant tensor fields of rank \(r\) under general coordinate transformations. Notice that the \(k\)th prolongation is a linear map \(j^{k}:C^{\infty}(M)\to{\cal J}^{k}(M)\) which has the following concrete representation in coordinates \[j^{k}\,=\,\exp\left(\varepsilon^{\mu}\frac{\partial}{\partial x^{\mu}}\right)\,+\,{\cal O}\big{(}\,|\varepsilon|^{k+1}\,\big{)}\,, \tag{121}\] in agreement with (119). The infinite prolongation \(j^{\infty}:C^{\infty}(M)\to{\cal J}^{\infty}(M)\) plays an important role in the theory of partial differential equations (see _e.g._ the textbook [6]) and has a particularly simple representation \(j^{\infty}=\exp(\varepsilon^{\mu}\frac{\partial}{\partial x^{\mu}})\) in local coordinates as a formal power series in \(\varepsilon\). **Remark:** For analytic functions, their infinite prolongation can formally be understood as a mere translation: \((j^{\infty}f)(x;\varepsilon)=f(x+\varepsilon)\,\). However, this equality is not correct for generic smooth functions since the Taylor series of a smooth function at a point does not necessarily converge, and even if it converges it does not necessarily converge to the function itself in any neighbourhood of this point. Along the same lines, one should always keep in mind the theorem according to which there is an infinite collection of smooth functions defining the same \(\infty\)-jet at a given point. Jets were introduced by mathematicians in order to make sense of locality (in the sense of classical field theory) without necessarily going into the details of functional space analysis. Indeed, the \(k\)-jet spaces of finite order \(k\) are finite-dimensional. Therefore, jet spaces allow one to study (some aspects of) functions on a manifold via the basic tools of abstract algebra.
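As an elementary illustration of the prolongation map, consider \(M=\mathbb{R}\) and \(f(x)=e^{x}\): the second prolongation reads \[(j^{2}f)(x;\varepsilon)\,=\,e^{x}\,\Big{(}1+\varepsilon+\tfrac{1}{2}\,\varepsilon^{2}\Big{)}\quad\mbox{modulo}\quad|\varepsilon|^{3}\,,\] i.e. the degree-\(2\) Taylor polynomial of \(f(x+\varepsilon)\) in the auxiliary variable \(\varepsilon\), in agreement with (119)-(121).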
Another motivation for introducing jets is that they allow one to provide a coordinate-independent definition of higher-order partial differential equations. **Remark:** Note that the \(k\)-th derivative of a function alone is _not_ a coordinate-free object; only the collection of all its derivatives till order \(k\) is a geometric object (since the coefficients \(\partial_{\mu_{1}}\dots\partial_{\mu_{r}}f(x)\) do _not_ transform as symmetric covariant tensor fields of rank \(r\) but are mixed under general coordinate transformations). However, as can be expected from the coordinate expression and as can be checked from the transformation law (116) under coordinate changes, the space of symmetric covariant tensors of rank \(k\) is isomorphic to the quotient of the contact ideal of order \(k-1\) by the contact ideal of order \(k\), _cf._ (110). In other words, the space of symmetric covariant tensors is the cokernel (_i.e._ the quotient of the codomain by the image) of the canonical inclusions (109). ### Higher-order cojets As will become clear, the algebraic definition of differential operators via commutators is equivalent to another possible definition of differential operators insisting on locality: differential operators are those operators on the vector space \(C^{\infty}(M)\) that are _local_ in the sense that their action at a point only depends on the smooth structure (_i.e._ the jet) of the given function at that point (in other words, the contact ideal is annihilated). To be more precise, the restriction \(\hat{X}|_{m}\) of a differential operator of order \(k\) at a single point \(m\in M\) is a linear form on the \(k\)-jet space \(J_{m}^{k}M\). This motivates the introduction of some specific terminology for the values of differential operators at a given point: cojets. #### 6.3.1 Higher-order cojets as generalised functions The dual of the \(k\)th jet space, _i.e._ the space \[D_{m}^{k}M\,:=\,(J_{m}^{k}M)^{*}\,,\qquad(k\in\mathbb{N}) \tag{122}\] of linear forms on the \(k\)-jet space \(J_{m}^{k}M\), will be called the space of \(k\)**-cojets** at \(m\). Equivalently, a \(k\)-cojet at \(m\) can be thought of as the **value at the point \(m\) of a differential operator** \(\hat{X}\in\mathcal{D}^{k}(M)\) of order \(k\), defined as the linear form \[\hat{X}|_{m}:=\delta_{m}\circ\hat{X} \tag{123}\] on \(C^{\infty}(M)\), where \(\delta_{m}:C^{\infty}(M)\to\mathbb{R}\) is the evaluation functional at the point \(m\). Since the contact ideal \(\mathcal{I}^{k}(m)\) of order \(k\) at the point \(m\) belongs to the kernel of any such linear form \(\hat{X}|_{m}:C^{\infty}(M)\to\mathbb{R}\), one may indeed consider that the latter effectively defines a linear form on the \(k\)-jet space at \(m\), _cf._ (111). Furthermore, a \(k\)-cojet at \(m\) can also be defined as an equivalence class of differential operators of order \(k\), where two differential operators \(\hat{X}\) and \(\hat{Y}\) are equivalent if they produce the same result, at the point \(m\), on any given function. This last definition agrees with the previous one, due to the relation (38). For \(k=\infty\), the above definitions require more care. The topological dual of the \(\infty\)-jet space, _i.e._ the space \[D_{m}M\,:=\,(J_{m}^{\infty}M)^{\prime} \tag{124}\] of continuous linear forms on the \(\infty\)-jet space \(J_{m}^{\infty}M\), will be called the space of \(\infty\)-**cojets** at \(m\).
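As an elementary illustration, take \(M=\mathbb{R}\) and \(m=0\): the linear form \[f\,\mapsto\,f^{\prime\prime}(0)\,=\,\Big{(}\delta_{0}\circ\tfrac{d^{2}}{dx^{2}}\Big{)}[f]\] is a \(2\)-cojet at the origin, namely the value at \(0\) of the order-\(2\) differential operator \(d^{2}/dx^{2}\); it annihilates the contact ideal \({\cal I}^{2}(0)\) and therefore descends to a well-defined linear form on the \(2\)-jet space \(J^{2}_{0}\mathbb{R}\).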
**Equivalent formulations of finite-order cojets** On a smooth manifold, the following notions are equivalent for any integer \(k\in\mathbb{N}\): 1. a \(k\)-cojet at a point \(m\), 2. a linear form on the \(k\)-jet space \(J^{k}_{m}M\), 3. the value \(\hat{X}|_{m}\) at \(m\) of a differential operator \(\hat{X}\in\mathcal{D}^{k}(M)\) of order \(k\). In the language of distribution theory, a smooth function with compact support is called a **test function**. The subspace of test functions is sometimes denoted \(C^{\infty}_{c}(M)\subset C^{\infty}(M)\). One may consider its topological dual \(C^{\infty}_{c}(M)^{\prime}\) spanned by the continuous linear functionals on \(C^{\infty}_{c}(M)\). These functionals are called **generalised functions** [29] (or "distributions" [30]) on the manifold \(M\). The space of \(\infty\)-cojets at \(m\) can be defined as the space of generalised functions on \(M\) whose support is the point \(m\). A theorem of Schwartz states that a generalised function whose support is a single point decomposes as a finite linear combination of derivatives of the Dirac distribution at this point [30, Th.35, Chap.3]. For instance, remember that the space \(D^{0}_{m}M\) of 0-cojets at \(m\), _i.e._ the vector space dual to \(J^{0}_{m}M=C^{\infty}(M)/\mathcal{I}^{0}(m)\cong\mathbb{R}\), can be thought of as the one-dimensional space spanned by the evaluation functional \(\delta_{m}\). More generally, the space \(D^{k}_{m}M\) of \(k\)-cojets at \(m\) is spanned by all derivatives of the Dirac distribution at \(m\) till order \(k\). Accordingly, in a local coordinate system a possible representation for a \(k\)-cojet \(X\) at a point \(m\in M\) of coordinates \(y^{\mu}\) is as a generalised function: \[X(x)=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}(y)\,\frac{\partial}{\partial x^{\mu_{1}}}\cdots\frac{\partial}{\partial x^{\mu_{r}}}\delta^{n}(y-x)\,. \tag{125}\] Indeed, via this realisation, the \(k\)-cojet acts on a test function \(\phi\) as follows: \[\langle X,\phi\rangle=\int dx\,X(x)\,\phi(x)=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}(y)\,\partial_{\mu_{1}}\cdots\partial_{\mu_{r}}\phi(y)\,. \tag{126}\] In a more algebraic language, a compact expression for a \(k\)-cojet \(X\) is as a polynomial of degree \(k\) in the auxiliary variable \(\partial_{\varepsilon}\): \[X(\partial_{\varepsilon})=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}\,\frac{\partial}{\partial\varepsilon^{\mu_{1}}}\ldots\frac{\partial}{\partial\varepsilon^{\mu_{r}}}\,, \tag{127}\] where the location of the cojet was not specified. Indeed, via this choice of notation, the \(k\)-cojet \(X(\partial_{\varepsilon})\) above acts on \(k\)-jets \(\phi(\varepsilon)\), realised as truncated polynomials, like a differential operator of order \(k\) (for the auxiliary coordinate \(\varepsilon\)) followed by an evaluation at \(\varepsilon=0\): \[\left\langle X,\phi\right\rangle=\left.X(\partial_{\varepsilon})\phi(\varepsilon)\,\right|_{\varepsilon=0}=\sum_{r=0}^{k}\frac{1}{r!}\,X^{\mu_{1}\ldots\mu_{r}}\,\phi_{\mu_{1}\ldots\mu_{r}}\,. \tag{128}\] **Equivalent formulations of cojets** On a smooth manifold, the following notions are equivalent: 1. a cojet at a point \(m\), 2. a continuous linear form on the \(\infty\)-jet space \(J^{\infty}_{m}M\), 3. the value \(\hat{X}|_{m}\) at \(m\) of a differential operator \(\hat{X}\in\mathcal{D}(M)\), 4. a generalised function with support at \(m\).
#### 6.3.2 Higher-order cojet bundles The \(k\)**-cojet bundle** \(D^{k}M=\bigcup_{m}D^{k}_{m}M\) has local coordinates \[(x^{\nu},X,X^{\mu},X^{\mu_{1}\mu_{2}},\ldots,X^{\mu_{1}\cdots\mu_{k}}).\] The transformation law of a \(k\)-cojet under the coordinate transformation \(x^{\mu}\mapsto x^{\prime\mu}(x)\) is quite complicated for high \(k\), so we only give the transformation law of 2-cojets: \[X^{\prime}=X\,,\quad X^{\prime\mu}=\frac{\partial x^{\prime\mu}}{\partial x^{\nu}}\,X^{\nu}+\frac{\partial^{2}x^{\prime\mu}}{\partial x^{\nu_{1}}\partial x^{\nu_{2}}}\,X^{\nu_{1}\nu_{2}}\,,\quad X^{\prime\mu_{1}\mu_{2}}=\frac{\partial x^{\prime\mu_{1}}}{\partial x^{\nu_{1}}}\,\frac{\partial x^{\prime\mu_{2}}}{\partial x^{\nu_{2}}}\,X^{\nu_{1}\nu_{2}}\,. \tag{129}\] More generally, all (but only) the higher-rank components contribute to the transformation law of a given component of a \(k\)-cojet, thus its lower-rank components \(X^{\mu_{1}\cdots\mu_{r}}\) of rank \(0<r<k\) do _not_ transform as rank-\(r\) symmetric contravariant tensors. In other words, only the collection of all its non-trivial components \(X^{\mu}\), \(X^{\mu_{1}\mu_{2}}\),..., \(X^{\mu_{1}\cdots\mu_{k}}\) is a single coordinate-free object. More accurately, only the rank-zero and rank-\(k\) components of a \(k\)-cojet can be separated in a coordinate-invariant way. An important motivation behind jet theory is to allow a geometric definition of differential operators. Contrary to tangent and cotangent vectors, their higher-order generalisations (cojets and jets, respectively) do not have nice transformation properties under coordinate transformations, so their transformation laws have not been written down here in detail, although a tensor-like calculus can be designed to handle them [31]. If the components of metric-like fields are, say, cojets (as suggested by the higher-spin Lie derivative), then they do _not_ transform as symmetric tensor fields under diffeomorphisms. Such a property might be at the root of the well-known obstruction in higher-spin gravity to "naive" minimal coupling of higher-spin particles to gravitons (see _e.g._ [2] and refs therein). The sections of the \(k\)-cojet bundle \(D^{k}M\) will be called \(k\)**-cojet fields**. Locally, the latter are compactly expressed as generating functions \(X(x;\partial_{\varepsilon})\) which are smooth in the coordinates \(x^{\mu}\) and polynomial in the auxiliary variable \(\partial/\partial\varepsilon^{\nu}\). There is an infinite sequence of embeddings (actually, an \(\mathbb{N}\)-filtration) of vector bundles on \(M\): \[M\times\mathbb{R}^{*}\cong D^{0}M\hookrightarrow D^{1}M\hookrightarrow D^{2}M\hookrightarrow\ldots\hookrightarrow D^{k}M\hookrightarrow D^{k+1}M\hookrightarrow\ldots\,, \tag{130}\] the direct limit of which is the infinite-order cojet bundle \(DM\) that will be called the **enveloping bundle** of \(M\).24 Accordingly, the infinite-order cojet space \(D_{m}M\) will be called the **enveloping space** at \(m\in M\). Footnote 24: This term is motivated by the importance of universal enveloping algebras of finite-dimensional Lie algebras in higher-spin gravity. Moreover, this terminology is in line with the fact that the almost-commutative algebra of differential operators \(\mathcal{D}(M)\) is the universal enveloping algebra of the Lie-Rinehart algebra of vector fields \(\mathfrak{X}(M)\).
**Example (jet vs cojet space) :** The \(\infty\)-cojet space \(D_{0}V\cong\odot(V)\) at the origin of the vector space \(V\) is the topological dual of the \(\infty\)-jet space \(J_{0}^{\infty}V\cong\overline{\odot}(V^{*})\). Conversely, the \(\infty\)-jet space \(J_{0}^{\infty}V\) is the algebraic dual of the \(\infty\)-cojet space \(D_{0}V\). The cokernel of the \(k\)th embedding \(D^{k-1}M\hookrightarrow D^{k}M\) is the bundle of symmetric contravariant tensors of rank \(k\), i.e. \(\odot^{k}TM\). In other words, the quotient of the \(k\)-cojet space by the \((k-1)\)-cojet space is isomorphic to the space of symmetric contravariant tangent tensors of rank \(k\): \[\odot^{k}\!\!TM\;\cong\;D^{k}M\,/\,D^{k-1}M\,. \tag{131}\] A representative of the equivalence class of a \(k\)-cojet is its principal symbol. #### 6.3.3 Differential operators as cojet fields Differential operators of order \(k\) can be defined as linear operators \(\hat{X}:C^{\infty}(M)\to C^{\infty}(M)\) which factor through the \(k\)-jet bundle \(J^{k}M\) in the sense that \(\hat{X}=X\circ j^{k}\), where \(X\in\Gamma(D^{k}M)\) is a \(k\)-cojet field, _i.e._ a section of the \(k\)-cojet bundle \(D^{k}M\), and \(j^{k}:C^{\infty}(M)\to\mathcal{J}^{k}(M)\) is the order-\(k\) prolongation map. In local coordinates, the correspondence goes as follows: \[(\hat{X}f)(x) = \big{[}\,X(x;\partial_{\varepsilon})\,(j^{k}f)(x;\varepsilon)\,\big{]}\big{|}_{\varepsilon=0} \tag{132}\] \[= \Big{(}\,X\big{(}j^{k}f\,\big{)}\,\Big{)}(x)\,,\] which can be checked by making use of the previous explicit local expressions for the differential operator \(\hat{X}:C^{\infty}(M)\to C^{\infty}(M)\) of order \(k\), its \(k\)-cojet field \(X:{\cal J}^{k}(M)\to C^{\infty}(M)\) and the prolongation map \(j^{k}:C^{\infty}(M)\to{\cal J}^{k}(M)\). Due to this equivalent definition of differential operators, we will often identify differential operators with cojet fields. Actually, the identification corresponds to the formal replacement of \(\partial_{\varepsilon}\) with \(\partial_{x}\) together with the normal ordering prescription (related to the previous subtlety in the isomorphism). **The many faces of differential operators** Given a smooth manifold \(M\), the following notions are equivalent: 1. a linear scalar-valued differential operator \(\hat{X}\in{\cal D}^{k}(M)\) of order \(k\) on \(M\), 2. an endomorphism \(\hat{X}\in{\rm End}\big{(}C^{\infty}(M)\big{)}\) of the vector space \(C^{\infty}(M)\) such that \([\,[\,\ldots\,[\hat{X}\,,\,\hat{f}_{1}]\,,\,\hat{f}_{2}]\ldots\,,\,\hat{f}_{k}]\in{\cal D}^{0}(M)\) for any functions \(f_{1},f_{2},\cdots,f_{k}\in C^{\infty}(M)\), 3. an endomorphism \(\hat{X}\in{\rm End}\big{(}C^{\infty}(M)\big{)}\) of the vector space \(C^{\infty}(M)\) which is almost \(C^{\infty}(M)\)-linear in the sense that \([\hat{X},\hat{f}]\in{\cal D}^{k-1}(M)\) for any function \(f\in C^{\infty}(M)\), 4. a \(k\)-cojet field \(X\in\Gamma(D^{k}M)\) on \(M\), _i.e._ a section of the \(k\)-cojet bundle \(D^{k}M\). ### Pushforward of differential operators and cojets Let \(F:M\to N\) be a map from the manifold \(M\) (source) to the manifold \(N\) (target). The following definitions are the direct higher-order generalisations of their standard versions for vector fields. Two differential operators \(\hat{X}\in{\cal D}(M)\) and \(\hat{Y}\in{\cal D}(N)\) are said to be **related by \(F\)** if \[\hat{X}\circ F^{*}=F^{*}\circ\hat{Y} \tag{133}\] where \(F^{*}\,:\,C^{\infty}(N)\to C^{\infty}(M)\) is the pullback of \(F:M\to N\).
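For a concrete illustration of this definition, take \(M=N=\mathbb{R}\) with \(F(x)=2x\). The differential operators \(\hat{X}=\frac{d}{dx}\) on \(M\) and \(\hat{Y}=2\,\frac{d}{dy}\) on \(N\) are related by \(F\), since for any \(g\in C^{\infty}(\mathbb{R})\) one has \[\hat{X}\big{[}F^{*}g\big{]}(x)\,=\,\frac{d}{dx}\,g(2x)\,=\,2\,g^{\prime}(2x)\,=\,F^{*}\big{(}\hat{Y}[g]\big{)}(x)\,.\]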
The equality (133) makes sense since the differential operators \(\hat{X}\) and \(\hat{Y}\) are seen as endomorphisms of \(C^{\infty}(M)\) and \(C^{\infty}(N)\) respectively, while \(F^{*}\) is a homomorphism from \(C^{\infty}(N)\) to \(C^{\infty}(M)\). More explicitly, the condition (133) reads as follows: \[\hat{X}\big{[}F^{*}(g)\big{]}=F^{*}\big{(}\hat{Y}[g]\big{)}\,,\qquad\forall g\in C^{\infty}(N)\,. \tag{134}\] This condition is sufficiently subtle and important to deserve a third equivalent formulation: \[\hat{X}[\,g\circ F]=\hat{Y}[g]\circ F\,,\qquad\forall g\in C^{\infty}(N)\,. \tag{135}\] If \(F\) is bijective, then any differential operator \(\hat{X}\) on the source \(M\) is related by \(F\) to a unique differential operator \(\hat{Y}\) on the target \(N\), which is called the **pushforward of \(\hat{X}\)** by \(F\); the corresponding \(\hat{Y}\) in (133) is denoted \(F_{*}\hat{X}\). Therefore, the **pushforward by a bijective \(F\)** is defined as the algebra isomorphism \[F_{*}\,:\,\mathcal{D}(M)\stackrel{{\sim}}{{\to}}\mathcal{D}(N)\,:\,\hat{X}\mapsto(F^{-1})^{*}\circ\hat{X}\circ F^{*}\,. \tag{136}\] The pushforward preserves the filtration by the order of the differential operators. If \(F\) is injective, then it becomes invertible once its codomain is restricted to \(F(M)\subseteq N\). Therefore, the **pushforward by an injective \(F:M\hookrightarrow N\)** is a well-defined map \(F_{*}\,:\,\mathcal{D}\big{(}M\big{)}\hookrightarrow\mathcal{D}\big{(}F(M)\big{)}\). However, if \(F\) is surjective, then the pushforward is not always well defined, which motivates the following definition: when a differential operator \(\hat{X}\) on \(M\) is related by a surjective \(F\) to a (well-defined) differential operator \(\hat{Y}\) on \(N\), then the former \(\hat{X}\) is called **projectable on \(N\)** (by the pushforward \(F_{*}\)) while the latter \(\hat{Y}\) is called the pushforward of \(\hat{X}\) by \(F\) and is denoted \(F_{*}\hat{X}(=\hat{Y})\). The space of differential operators on \(M\) projectable on \(N\) by \(F\) will be denoted as the preimage \((F_{*})^{-1}\mathcal{D}(N)\). In this way, it becomes tautological to say that the **pushforward by a surjective \(F:M\twoheadrightarrow N\)** is a well-defined map \(F_{*}\,:\,(F_{*})^{-1}\mathcal{D}(N)\twoheadrightarrow\mathcal{D}(N)\). All these definitions of pushforward by \(F\) are morphisms of algebras, _i.e._ \(F_{*}(\hat{X}_{1}\circ\hat{X}_{2})=(F_{*}\hat{X}_{1})\circ(F_{*}\hat{X}_{2})\) for any differential operators \(\hat{X}_{1}\) and \(\hat{X}_{2}\) on the source \(M\). Therefore the kernel, \(\operatorname{Ker}F_{*}\), of the pushforward by a surjective map \(F\) is an associative ideal of the algebra of projectable differential operators. The corresponding quotient is isomorphic to the algebra of differential operators on the target manifold: \(\mathcal{D}(N)\cong(F_{*})^{-1}\mathcal{D}(N)\,/\operatorname{Ker}F_{*}\). The pushforward map itself, \(F\mapsto F_{*}\), is functorial: it sends the composition of smooth maps between manifolds to the composition of the corresponding morphisms between associative algebras of differential operators, _i.e._ it preserves the order of multiplication: \((F\circ G)_{*}=F_{*}\circ G_{*}\). In another generalisation of the Milnor exercise by Grabowski and Poncin, the commutative algebra of functions is replaced with the associative algebra of differential operators [20].
**Theorem (Grabowski & Poncin) :**_A map \(\Phi:{\cal D}(M)\stackrel{{\sim}}{{\rightarrow}}{\cal D}(N)\) between the algebras of differential operators on two smooth manifolds is an isomorphism of associative algebras iff it is the pushforward of a diffeomorphism \(F:M\stackrel{{\sim}}{{\rightarrow}}N\) between these two manifolds, i.e. \(\Phi=F_{*}\)._ The essence of the proof relies on showing that any such isomorphism of associative algebras is also an isomorphism of filtered algebras (in which case, one can reduce the problem to the genuine Milnor exercise). The various Milnor exercises that were reviewed till now give a mathematically precise meaning to the equivalences (1) pictured in Section 2. Consider a differential operator \(\hat{X}\) on \(M\) related by \(F:M\to N\) to the differential operator \(F_{*}\hat{X}\) on \(N\), _cf._ (134). For any function \(g\) on \(N\), the functions \(\hat{X}[F^{*}g]\) and \(F_{*}\hat{X}[g]\) are functions on \(M\) and \(N\) respectively. Moreover, the evaluation at the point \(m\in M\) of the function \(\hat{X}[F^{*}g]\) is equal to the evaluation at the point \(F(m)\in N\) of the function \(F_{*}\hat{X}[g]\). In other words, we have the familiar equality \[\hat{X}[F^{*}g]|_{m}\,=\,F_{*}\hat{X}[g]|_{F(m)}. \tag{137}\] A \(k\)-cojet \(X_{m}\) at a point \(m\in M\) can be identified with the equivalence class of differential operators \(\hat{X}\) on \(M\) of order \(k\) with the same value \(\hat{X}|_{m}=\delta_{m}\circ\hat{X}\) at \(m\). The remarkable fact about the equality (137) is that it only involves values at a single point of the relevant objects (the \(k\)-cojet \(X_{m}\) and the \(k\)-jet \(j^{k}_{m}(F^{*}g)\)). Therefore all the previous problems for pushforwards of cojet fields do not arise for individual cojets, because the latter are only defined at a single point. In particular, the pushforward of a cojet by a map \(F:M\to N\) is well-defined independently of the injectivity/surjectivity properties of the map \(F\). In fact, the following definition of the pushforward \[F_{*}\hat{X}|_{m}\,:=\,\hat{X}|_{m}\circ F^{*} \tag{138}\] of a \(k\)-cojet \(\hat{X}|_{m}\) at a point \(m\) agrees with (137). The pushforward by \(F:M\to N\) is the vector bundle morphism \(F_{*}\,:\,D^{k}M\to D^{k}N\) corresponding in the fibre to the linear maps \[F_{*m}\,:\,D^{k}_{m}M\to D^{k}_{F(m)}N\,:\,X_{m}\mapsto(F_{*}X)_{F(m)}\,, \tag{139}\] at each point. Actually, this notion of pushforward ensures that the map \(D:M\mapsto DM\) sending a manifold to its enveloping bundle defines a covariant functor from the category of smooth manifolds to the category of vector bundles, where any smooth map \(F:M\to N\) between two manifolds is sent to the morphism \(F_{*}\,:\,DM\to DN\) of vector bundles. In the language of category theory, the pushforward \(F_{*}\,:\,DM\to DN\) would be denoted as \(DF\). In the first-order case \(k=1\), one may restrict to the tangent bundle, in which case the pushforward by \(F:M\to N\) is the vector bundle morphism \(TF\,:\,TM\to TN\), also known as the **differential of \(F\)**. ## Acknowledgments I am grateful to Thomas Basile for his patient reading and helpful comments on an early version of these notes.
2308.03437
AudioVMAF: Audio Quality Prediction with VMAF
Video Multimethod Assessment Fusion (VMAF) [1], [2], [3] is a popular tool in the industry for measuring coded video quality. In this study, we propose an auditory-inspired frontend in existing VMAF for creating videos of reference and coded spectrograms, and extended VMAF for measuring coded audio quality. We name our system AudioVMAF. We demonstrate that image replication is capable of further enhancing prediction accuracy, especially when band-limited anchors are present. The proposed method significantly outperforms all existing visual quality features repurposed for audio, and even demonstrates a significant overall improvement of 7.8% and 2.0% of Pearson and Spearman rank correlation coefficient, respectively, over a dedicated audio quality metric (ViSQOL-v3 [4]) also inspired from the image domain.
Arijit Biswas, Harald Mundt
2023-08-07T09:37:42Z
http://arxiv.org/abs/2308.03437v1
# AudioVMAF: Audio Quality Prediction with VMAF ###### Abstract Video Multimethod Assessment Fusion (VMAF) [1, 2, 3] is a popular tool in the industry for measuring coded video quality. In this study, we propose an auditory-inspired frontend in existing VMAF for creating videos of reference and coded spectrograms, and extended VMAF for measuring coded audio quality. We name our system AudioVMAF. We demonstrate that image replication is capable of further enhancing prediction accuracy, especially when band-limited anchors are present. The proposed method significantly outperforms all existing visual quality features repurposed for audio, and even demonstrates a significant overall improvement of 7.8% and 2.0% of Pearson and Spearman rank correlation coefficient, respectively, over a dedicated audio quality metric (ViSQOL-v3 [4]) also inspired from the image domain. Objective audio quality metrics, video quality metrics, VMAF, ViSQOL, audio coding, audio signal processing ## I Introduction Vision and audition are the richest sources of sensory data that drive the quality of multimedia experiences. While video storage and transmission command more resources, audio is also a resource hog and is important to consumers. Especially since bandwidth-hungry spatial audio [5] is becoming more pervasive, perceptual audio rate control should be a significant factor in quality of experience (QoE) optimization. Therefore, sound-quality evaluation tests are critical since they provide the necessary user feedback that drives audio quality improvements. However, subjective listening tests demand a large amount of time and effort. Thus, there is an impetus to develop accurate audio quality assessment models. In the broader context, perceived multimedia QoE is affected by both perceptual video and audio quality. Quality assessments of audio and video have both been widely researched for decades, yet the two areas have been largely mutually independent [6]. However, the neurosensory systems of the two modalities bear similarities. There is both low-level analogous processing (e.g., masking and multi-scale decomposition) and high-level cognitive modeling leading to a useful fusion of the senses [7]. Furthermore, the modalities themselves bear similarities that make them inter-convertible, e.g., using time-frequency representations of audio (spectrograms) to create a visual format for analysis. So, it is reasonable to consider whether suitable video quality metrics might be adapted for audio quality prediction. It has been found in subjective joint audio-visual studies [8] that: (1) the video modality is relatively more important to QoE than the audio modality; (2) unlike video quality, subjects found it harder to differentiate audio quality (even with audio bitrates chosen to create large degradation); and (3) subjects usually judge the quality of each modality sequentially, before giving an overall rating; suggesting that for joint audio-visual quality (AVQ) estimation it makes sense to fuse audio and video quality scores using a posterior fusion strategy [9, 8, 10]. Thus, for an AVQ prediction task, observation (1) would indicate that it is reasonable to make use of a video quality metric that is well accepted by the community; observation (2) would hint that one may not need an incoherent (and consequently a complex) system architecture. Therefore, for a coherent system design, it becomes valid to ask the research question: why not derive audio quality from a state-of-the-art video quality metric?
With the above research question, we propose AudioVMAF to measure audio quality with the industry-popular VMAF. We are unaware of techniques that utilize an "out-of-the-box" video quality metric as an audio quality metric. We are only aware of: (a) adaptation of a set of classical 2D visual quality indicators for 1D audio signals for measuring audio quality [8]; and (b) adaptation of a 2D image distortion metric (Structural Similarity Index or SSIM [11]) to a 2D distortion metric (Neurogram Similarity Index Measure or NSIM [12]) for measuring speech intelligibility and coded speech [13] and audio quality [14, 15, 4] with ViSQOL, where the support vector machine (SVM) [16] is trained for the task. In [8], the 1D variant of 2D visual quality features was evaluated with stereo audio waveform coded with Advanced Audio Coding (AAC) at 8, 32, and 128 kb/s. It is expected that the lowest two bitrates provide an unacceptable audio quality, and differences in decoded audio bandwidth would already serve as a cue for ranking the quality. We believe automatic audio quality assessment becomes a challenge when modern parametric bandwidth extension [17] and parametric stereo coding [17] tools are utilized at lower bitrates. Thus, we evaluated AudioVMAF with modern audio codecs for a wide range of bitrates (i.e., quality). Furthermore, unlike the measures used in [8] and ViSQOL, due to the usage of VMAF, we can predict the coded audio quality directly on a 0-100 MUSHRA (Multiple Stimuli with Hidden Reference and Anchor [18]) quality scale; a well-established scale for assessing audio codecs. The paper is organized as follows. Section II describes the AudioVMAF technology. The data used for AudioVMAF evaluation along with the experimental results are given in Section III, and finally, the conclusion is drawn in Section IV. ## II AudioVMAF VMAF predicts the perceptual video quality of a coded video with respect to an uncoded reference video. In VMAF, pixel-level data are pooled to create frame-level image quality measures (Visual Information Fidelity or VIF [19], Detail Loss Metric or DLM [20]) modified to cover multiple scales of resolution. Thereafter, different spatial and temporal features are fused using SVM regression to create frame-level quality scores; and finally, consecutive frame scores are pooled to produce the final VMAF score. For audio quality prediction, reference and coded audio signals are extracted from the corresponding video files (e.g., mp4), and a new set of reference and coded video files are created containing perceptually motivated spectrograms. These video files are then fed into VMAF for computing the audio quality score (Figure 1). Our MATLAB-based framework is built around the FFmpeg [21] tool which is used for extracting audio from video, creating videos from images, and for running VMAF. Note that even though the VMAF repository [22] allows retraining the SVM, in this study, we focused on the proposed auditory-inspired frontend to VMAF. Next, we describe how the spectrogram video files are created. Three trivial steps need to be computed before that: (1) audio is extracted from the video files, (2) coded audio is resampled to the sample rate of reference audio, and (3) coded audio is time-aligned with reference audio. ### _Audio to perceptually motivated spectrogram images_ First, perceptually motivated power spectrograms of the reference and coded audio are computed. For stereo audio, left (L), right (R), and mid-signal (M = 0.5(L+R)) are considered. 
Note that the mid-signal has been considered previously [15, 23] for the coded stereo audio quality prediction task. Next, the following perceptually inspired settings are employed: (i) analysis with 80 Equivalent Rectangular Bandwidth (ERB) bands [24] in the 30 Hz-18 kHz audible frequency range, using FFT with a window length of 42.7 ms and Gammatone filter shape [24] weighting, (ii) the time stride is aligned with the reference video frame rate (30 fps or 33.3 ms), (iii) the ERB center frequencies are adjusted to the nearest FFT bin center frequencies to avoid sampling issues with the relatively low FFT frequency resolution compared to the narrow Gammatone filter shapes at low frequencies, and (iv) the input audio is calibrated such that a \(-25\) dB full-scale sine tone corresponds to 85 dB sound pressure level, and signals below the threshold-in-quiet [25] are set to zero. ### _Framing and timing of spectrogram images_ Next, we construct data frames from the reference and coded spectrograms individually as follows. At the video frame rate, 32 spectrogram frames (\(\approx\)1 s) are assembled, resulting in 2D arrays of size [80\(\times\)32] for every audio signal. Then we stack the 2D arrays for each audio signal on top of each other, which, in the case of stereo audio, results in an array of size [240\(\times\)32] due to the three signals L, R, and M. The stacking is done for a joint analysis of both the individual channels and their inter-channel relationships. Finally, for feeding into VMAF, we copy this array as many times as needed to fit into an image of size 480\(\times\)640 (height\(\times\)width). The copying (replication) is done to improve the quality prediction accuracy for bandlimited audio (see results in Section III-B). ### _Color and intensity scaling of spectrogram images_ The reference and coded image frames are converted to the dB domain with a maximum dynamic range of 70 dB. The dB-domain audio data is then quantized linearly into the [0, 255] range. For every frame, the quantized data is used as an index into the HSV (Hue, Saturation, Value) colormap [26] to create color images in the Portable Network Graphics (PNG) format. This mapping induces a non-monotonic conversion from dB to luma, which is then analyzed by VMAF. Without this mapping from the spectrogram dB domain to color images using the HSV colormap, we observed significantly worse audio quality prediction performance with monotonic grayscale images (see results in Section III-B). The design choices of AudioVMAF presented in this section were not informed by the test sets used for its benchmarking. The perceptual frontend is similar in spirit to the one used in ViSQOL-v3. The replication method was introduced when we observed that the predicted quality of the bandlimited signals was as high as that of the full-bandwidth signals. The method was found through experiments. Similarly, the HSV color mapping was also found through experiments conducted on a typical MUSHRA listening test (different from the test sets) with 12 excerpts (not present in the test set), involving two variants of a codec (not present in the test set). Fig. 1: Utilizing VMAF for both video and audio quality prediction. For AudioVMAF, reference-coded pairs of perceptually motivated spectrogram videos are fed to VMAF.
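To make this frontend more concrete, the following is a minimal, hypothetical Python sketch of the replication and HSV color-mapping steps described above; the actual framework is MATLAB-based, so all function names and parameters here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import matplotlib.cm as cm
import imageio

def block_to_png(block_db, out_path, dyn_range_db=70.0, height=480, width=640):
    """Map one stacked dB spectrogram block (e.g. 240x32 for stereo L/R/M)
    to a replicated, HSV-colored 480x640 PNG frame (illustrative sketch)."""
    # Limit the dynamic range to 70 dB below the maximum, then quantize to [0, 255].
    top = block_db.max()
    clipped = np.clip(block_db, top - dyn_range_db, top)
    idx = np.round(255.0 * (clipped - (top - dyn_range_db)) / dyn_range_db)

    # Index the HSV colormap: a non-monotonic mapping from dB to luma.
    rgba = cm.hsv(idx / 255.0)                      # shape (..., 4), values in [0, 1]
    rgb = (255.0 * rgba[..., :3]).astype(np.uint8)

    # Replicate (tile) the block until it fills the 480x640 image, then crop.
    reps = (int(np.ceil(height / rgb.shape[0])), int(np.ceil(width / rgb.shape[1])), 1)
    frame = np.tile(rgb, reps)[:height, :width, :]

    imageio.imwrite(out_path, frame)

# Example with a random stand-in for one ~1 s block of stacked L/R/M dB spectrograms.
block = -70.0 * np.random.rand(240, 32)
block_to_png(block, "frame_0001.png")
```

The resulting reference and coded PNG sequences would then be assembled into 30 fps videos and compared with VMAF via FFmpeg, as outlined above.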
## III Experiments and Results ### _Test sets_ We benchmarked the prediction accuracy of AudioVMAF against subjective listening scores from the Unified Speech and Audio Coding (USAC) [27] verification listening tests [28, 29]. These comprehensive tests contain 24 excerpts coded with USAC, High-Efficiency Advanced Audio Coding (HE-AAC), and Extended Adaptive Multi-Rate Wideband (AMR-WB+), with bitrates ranging from 8 kb/s mono to 96 kb/s stereo. In our experience, automatic audio quality assessment becomes a challenge when parametric coding tools are activated. Hence, we evaluated AudioVMAF with modern audio codecs which utilize parametric bandwidth extension and parametric stereo coding tools at lower bitrates. The USAC verification listening tests consist of three separate listening tests: mono at low bitrates and stereo at both low and high bitrates. All tests were MUSHRA tests, with a 0-100 quality scale, where a higher score implies better quality. For the details of these MUSHRA listening tests, interested readers are referred to [28, 29]. For each of the three listening tests, we included all 24 test excerpts, which consist of 8 speech, 8 music, and 8 mixed excerpts (see Table 4 in [29]). ### _AudioVMAF Benchmarking_ We compared the accuracy of AudioVMAF against a dedicated audio quality metric which is also inherited from the image domain. We decided to use ViSQOL-v3 (operating in audio mode) [30] because the NSIM distortion metric used in ViSQOL for comparing reference and coded Gammatone spectrograms is inspired by the 2D image distortion metric SSIM. Furthermore, it has been reported in [31] that, out of all objective measures designed to evaluate codecs, ViSQOL shows the best correlation with subjective scores and achieves high and stable performance for all content types. Similarly, it was reported in [32] that overall ViSQOL performed very well across five different datasets (see Table II in [32], overall correlation with ViSQOL as the teacher). In addition, we benchmark against related prior research [8], where 1D variants of popular video frame/picture quality predictors, e.g., SSIM [11], Multi-scale Structural Similarity Index (MS-SSIM) [33], Visual Information Fidelity in the pixel domain (VIFP) [19], Gradient Magnitude Similarity Mean (GMSM) [34], and Gradient Magnitude Similarity Deviation (GMSD) [34], were used to predict audio quality. These measures were defined using 1D data windows and are denoted with a subscript 1D in Table I. We utilize the implementation [35] provided by the authors of [8]. Note that in this implementation, SSIM\({}_{\text{1D}}\), MS-SSIM\({}_{\text{1D}}\), VIFP\({}_{\text{1D}}\), and GMSM\({}_{\text{1D}}\) are bounded between [0, 1], whereas GMSD\({}_{\text{1D}}\) is bounded between [0.687, 1]. ViSQOL-v3, on the other hand, is trained to predict the Mean Opinion Score (MOS) (between 1 and 5), but it is observed to be bounded between [1, 4.732] [36, 30]. Furthermore, none of the benchmarks are designed for stereo. Internally, ViSQOL-v3 downmixes stereo to a mono mid-signal and then predicts the MOS. The 1D visual quality measures (as implemented in [35]), on the other hand, consider only the left channel. However, to make them at least comparable with ViSQOL-v3, we report their results by considering the mono mid-signal. For stereo audio, AudioVMAF considers left (L), right (R), and mid-signal (M) by stacking spectrograms of the L, R, and M channels in the image. We used the Spearman rank-order correlation coefficient (\(R_{s}\)) to measure the prediction monotonicity of the models and the Pearson linear correlation coefficient (\(R_{p}\)) to measure the prediction linearity.
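As a minimal sketch of how these two figures of merit can be computed per listening test (the numbers below are made-up placeholders, not data from the USAC tests):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Placeholder per-condition scores: subjective MUSHRA means vs. predicted AudioVMAF.
mushra_scores = np.array([91.0, 82.0, 64.5, 47.0, 30.5, 55.0])
audiovmaf_scores = np.array([88.0, 79.0, 61.0, 50.0, 35.0, 58.0])

r_p, _ = pearsonr(mushra_scores, audiovmaf_scores)    # prediction linearity
r_s, _ = spearmanr(mushra_scores, audiovmaf_scores)   # prediction monotonicity
print(f"R_p = {r_p:.3f}, R_s = {r_s:.3f}")
```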
For both \(R_{p}\) and \(R_{s}\), larger values denote better performance. The prediction accuracy on the three listening tests is presented in Table I.
TABLE I: Performance of AudioVMAF on USAC verification listening tests. The table shows the correlation coefficients (\(R_{p}\) and \(R_{s}\)) between predicted objective scores and subjective (MUSHRA) scores for (a) the mono listening test, (b) the stereo low-bitrate test, and (c) the stereo high-bitrate test. Note that AudioVMAF (w/ replication) is enabled with the best settings, i.e., both replication and HSV colormap.
It can be observed that none of the 1D visual quality features are competitive in predicting the audio quality if the (3.5 kHz and 7 kHz) bandlimited anchors are included. We observed that the quality of the anchors was significantly overestimated (almost as high as the reference quality). Excluding the anchors improves the accuracy, but none of them outperform ViSQOL-v3. Considering all three tests, the top two 1D visual quality features are MS-SSIM and VIFP, indicating the importance of multi-scale modeling and natural statistics-based features also for audio quality prediction. On the contrary, AudioVMAF significantly outperforms the 1D visual quality features both with and without the anchors, and performs as well as or better than ViSQOL-v3 if anchors are excluded. We further enhanced the performance with anchors by replicating the data (see Section II-B). Overall, across all three listening tests, we demonstrate a significant improvement (\(7.8\%\) and \(2\%\) improvement of Pearson's and Spearman's rank correlation coefficients, respectively) over ViSQOL-v3. We can also observe that if we do not perform the mapping from the spectrogram dB-domain to color images using the HSV colormap (see Section II-C), the prediction performance of AudioVMAF is significantly degraded. Furthermore, since the predicted AudioVMAF scores are between 0 and 100, we can also easily compute (without any audio quality scale conversion or mapping) and report the outlier ratios [37] for AudioVMAF with its best settings (i.e., with replication and HSV colormap) enabled: 0.740 (mono test), 0.742 (stereo low-bitrate test), and 0.773 (stereo high-bitrate test). Finally, we coded the 24 stereo excerpts for a wide range of bitrates using the (HE)-AAC family of codecs typically used in practice, and demonstrate (Figure 2) that the mean AudioVMAF score scales with the bitrates. This observation, along with the strong performance in predicting subjective quality (presented in Table I), makes AudioVMAF also suitable for bitrate laddering (similar to the popular application of VMAF [38]). ## IV Conclusion and Discussion In this paper, we present AudioVMAF: a novel "out-of-the-box" VMAF-based coded audio quality prediction model for a 48 kHz sample rate. Our contribution can be viewed as a perceptual preprocessing to VMAF. We extended it to predict coded stereo audio quality by stacking the spectrograms of the left, right, and mid channels in the image. Furthermore, we found that replicating images leads to improved prediction accuracy when bandlimited anchors are included. Interestingly, we also found that the mapping from the spectrogram dB-domain to color images using the HSV colormap is needed for a reasonable prediction performance. In the future, we would like to extend AudioVMAF to multi-channel coded audio quality prediction, improve the sensitivity of the method at high bitrates, and investigate re-training the SVM with subjective listening test data.
Finally, to explain our findings with AudioVMAF, it is worth understanding the implementation details of VMAF and the dataset that was used for its training. Such insights, informed by our observations, might also trigger ideas for improving VMAF itself for image and video quality assessment. With this research, we provide a new angle for developing an audio quality metric. The proposed method is a step towards merging the coded audio and video quality prediction tasks, which may pave a new way for AVQ modeling using a coherent architecture.
2303.01152
Category Theory for Autonomous Robots: The Marathon 2 Use Case
Model-based systems engineering (MBSE) is a methodology that exploits system representation during the entire system life-cycle. The use of formal models has gained momentum in robotics engineering over the past few years. Models play a crucial role in robot design; they serve as the basis for achieving holistic properties, such as functional reliability or adaptive resilience, and facilitate the automated production of modules. We propose the use of formal conceptualizations beyond the engineering phase, providing accurate models that can be leveraged at runtime. This paper explores the use of Category Theory, a mathematical framework for describing abstractions, as a formal language to produce such robot models. To showcase its practical application, we present a concrete example based on the Marathon 2 experiment. Here, we illustrate the potential of formalizing systems -- including their recovery mechanisms -- which allows engineers to design more trustworthy autonomous robots. This, in turn, enhances their dependability and performance.
Esther Aguado, Virgilio Gómez, Miguel Hernando, Claudio Rossi, Ricardo Sanz
2023-03-02T10:54:46Z
http://arxiv.org/abs/2303.01152v2
# Category Theory for Autonomous Robots: The Marathon 2 Use Case ###### Abstract Model-based systems engineering (MBSE) is a methodology that exploits system representation during the entire system life-cycle. The use of formal models has gained momentum in robotics engineering over the past few years. Models play a crucial role in robot design; they serve as the basis for achieving holistic properties, such as functional reliability or adaptive resilience, and facilitate the automated production of modules. We propose the use of formal conceptualizations beyond the engineering phase, providing accurate models that can be leveraged at runtime. This paper explores the use of Category Theory, a mathematical framework for describing abstractions, as a formal language to produce such robot models. To showcase its practical application, we present a concrete example based on the Marathon 2 experiment. Here, we illustrate the potential of formalizing systems--including their recovery mechanisms--which allows engineers to design more trustworthy autonomous robots. This, in turn, enhances their dependability and performance. ## 1 Introduction Autonomous robot engineering constitutes a major challenge in system development--design, construction, assurance, operation--due to their complexity in both hardware/software and operational environments. Their construction requires well-orchestrated and knowledge-intensive engineering processes that can benefit from rigorous _Systems Engineering_ (SE) practices. Having a suitable technology to develop _system models_ is a major step in solving this challenge. We use models to abstract and manage system complexity. Model-based systems engineering (MBSE) is a semi-formalized methodology that exploits a system representation with reduced detail so that its structure and behavior can be managed. MBSE provides support for requirement definition, design, analysis, verification, and validation throughout the entire life-cycle of intricate systems. Of all the aspects of the system, a highly critical one is _system architecture_[5], because it affects and even determines most of the properties of the system. MBSE modeling of system architectures is a cornerstone of reliable system construction. There are languages that support MBSE, such as AADL [9] or SysML [22], which facilitate the definition, communication, and maintenance of system models. These languages are typically used to document and reason about the properties of the system and its consistency. However, when systems are deployed in unpredictable environments, it is hard to ensure its behavior at the development time. We propose the use of _Category Theory_ (CT) as a framework to support system modeling and behavioral analysis for complex Systems of Systems (SoS). CT is a general theory of mathematical structures. It was invented in the 1940s to unify and synthesize different areas in mathematics [30]. It can be seen as a tool set for describing general structures and maintaining control over which aspects are preserved when performing abstractions. Moreover, its mathematical foundation enables powerful communication between apparently unrelated fields. Our motivation is based on the difficulties in addressing the reality gap between robot usage in a controlled environment and its deployment in open, real-world scenarios. We aim to address dependability using robust models at runtime as a tool to better understand the situation and react properly in unstructured environments. 
In this article, we draw a landscape of how CT can be used for this purpose. In particular, the main contributions are as follows. * A brief guide on how to use CT to design systems, * its application to a state-of-the-art mobile robot, and * a discussion on the benefits of using CT in robotics. Beyond this introduction, the paper is organized as follows. Section 2 addresses the description of some fundamental CT elements that can add value in robotic engineering processes. Section 3 describes some related work on applied CT in domains of close interest for roboticists--databases, knowledge management, hierarchical systems, robotics, etc. Section 4 describes a concrete use case: the Marathon 2 experiment on autonomous mobile robots in real social environments. Section 5 describes the use of CT in the Marathon 2 use case. Finally, Section 6 discusses the potential use of CT in the modeling of robotic systems in an MBSE context. The paper ends with some conclusions, acknowledgments, and references. ## 2 Category Theory Concepts for Robotics In this Section we define the specific language we will use for our CT-based system model. CT allows us to be completely precise about otherwise informal concepts [14]. We start with some basic elements to represent structures and evolve them into more general formalisms, _operads_. Lastly, we introduce the concept of _pushout_, which provides a method to condense and combine data from pre-existing structures. Note that this introduction is just a basic description for robotics; for further reading, refer to [16, 15, 2] from a mathematical perspective, [4] from a computer science perspective, and [30, 11] for a scientific application perspective with a strong mathematical flavor. ### Basis: Category, Functor, and Natural Transformation A _category_ \(\mathcal{C}\) is a collection of elements with relations between them. It constitutes an aggregation of objects with an imposed structure [19]. To specify a category, we need three constituents: * A collection of _objects_, \(\operatorname{Ob}(\mathcal{C})\), * a collection of _morphisms_ \(f:X\to Y\) for every pair of objects \(X,Y\in\operatorname{Ob}(\mathcal{C})\), and * a _composition_ operation \(g\circ f:X\to Z\) composing morphisms \(f:X\to Y\) and \(g:Y\to Z\). Additionally, categories must satisfy the following conditions: * identity: for every object \(X\in\operatorname{Ob}(\mathcal{C})\), there exists an identity morphism \(id_{X}:X\to X\). * associativity: for any three morphisms \(f:X\to Y\), \(g:Y\to Z\), \(h:Z\to W\), the following expressions are equal: \((h\circ g)\circ f=h\circ(g\circ f)=h\circ g\circ f\). * unitality: for any morphism \(f:X\to Y\), composition with the identity morphisms at each object does not affect the result, i.e. \(f\circ id_{X}=f\) and \(id_{Y}\circ f=f\). A simple example is the **Set** category, in which objects are sets, morphisms are functions between sets, and the composition operation is ordinary function composition. A _functor_ \(F\) is a map between two categories \(\mathcal{C}\), \(\mathcal{D}\). It assigns objects to objects and morphisms to morphisms, preserving identities and composition properties. Functors preserve structures when projecting one category inside another. Functors can map between the same types of category, such as \(\textbf{Set}\rightarrow\textbf{Set}\), or between different categories, such as \(\textbf{Set}\rightarrow\textbf{Vect}\), between the category of sets and the category of vector spaces.
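Although CT is usually handled on paper, a small finite category can be tabulated and the axioms above checked mechanically. The following Python sketch is purely illustrative (it is not part of the paper or of any CT library): objects and morphisms are plain strings, composition is a lookup table, and the functor check is run on the identity functor.

```python
from itertools import product

# A tiny finite category given by explicit tables (illustrative only).
objects = {"X", "Y", "Z"}
morphisms = {                 # name: (source, target)
    "id_X": ("X", "X"), "id_Y": ("Y", "Y"), "id_Z": ("Z", "Z"),
    "f": ("X", "Y"), "g": ("Y", "Z"), "gf": ("X", "Z"),
}
compose = {("g", "f"): "gf"}  # (later, earlier) -> composite, i.e. g . f
for m, (s, t) in morphisms.items():
    compose[(m, f"id_{s}")] = m          # f . id_X = f   (unitality)
    compose[(f"id_{t}", m)] = m          # id_Y . f = f

def check_category():
    # composition is defined only when target(earlier) == source(later)
    for (later, earlier), res in compose.items():
        assert morphisms[earlier][1] == morphisms[later][0]
        assert morphisms[res] == (morphisms[earlier][0], morphisms[later][1])
    # associativity, compared only where both sides are tabulated
    for h, g, f in product(morphisms, repeat=3):
        if (g, f) in compose and (h, g) in compose:
            left = compose.get((h, compose[(g, f)]))
            right = compose.get((compose[(h, g)], f))
            assert left is None or right is None or left == right

def check_functor(obj_map, mor_map):
    """A functor sends objects to objects and morphisms to morphisms,
    preserving identities and composition (here: into the same category)."""
    for m, (s, t) in morphisms.items():
        assert morphisms[mor_map[m]] == (obj_map[s], obj_map[t])
    for (later, earlier), res in compose.items():
        assert compose[(mor_map[later], mor_map[earlier])] == mor_map[res]
    for o in objects:
        assert mor_map[f"id_{o}"] == f"id_{obj_map[o]}"

check_category()
check_functor({o: o for o in objects}, {m: m for m in morphisms})  # identity functor
```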
A _natural transformation_\(\alpha\), is a structure-preserving mapping between functors. Functors project images of a category inside another; whereas natural transformations shift the projection defined by a functor \(F\) into the projection defined by a functor \(G\). Diagram 1 relates these three concepts. There are two categories \(\mathcal{C},\mathcal{D}\) and two different functors \(F,G:\mathcal{C}\rightarrow\mathcal{D}\). The two functors are linked by the natural transformation \(\alpha:F\Rightarrow G\). (1) To specify a natural transformation, we define a morphism \(\alpha_{c}:F(c)\to G(c)\) for each object \(c\in\mathcal{C}\), such that for every morphism \(f:c\to d\) in \(\mathcal{C}\) the composition rule \(\alpha_{d}\circ F(f)=G(f)\circ\alpha_{c}\) holds. This condition is often expressed as the commutative diagram shown in Diagram 2, where the natural transformation morphisms are represented as dashed arrows. This means that the projection of \(\mathcal{C}\) in \(\mathcal{D}\) through \(F\) can be transformed into projections through \(G\). The commutative condition implies that the order in which we apply the transformation does not matter. (2) ### Representing Resources: Wiring Diagrams Fong and Spivak [11] define _wiring diagrams_ as an answer to the question "Can I transform what I have into what I want?" Wiring diagrams constitute a CT-formalization of engineering block diagrams. In these diagrams, boxes represent transformations (i.e. morphisms in a category), wires represent resources (i.e. objects in a category), and two boxes in series are the result of composing morphisms. In the broad sense, wiring diagrams depict monoidal categories \((\mathcal{C},\otimes,\{1\})\), which are categories \(\mathcal{C}\) equipped with a tensor product \(\otimes\) and a monoidal unit \(\{1\}\) that allows the combination of objects in the category. This structure provides a rationale to place boxes in parallel in the diagram. Lastly, _symmetric monoidal categories_ (SMC) and its respective coherence conditions support the use of feedback wires and swapping in these diagrams. A precise definition of wiring diagrams can be found in [15] and [28]. ### Representing Networks: Operads and Algebras Categories, functors, and natural transformations are the building blocks of CT. They allow us to express objects, relationships among them, and abstractions while preserving their structure. This approach imposes some directionality since morphisms, functors, and natural transformations are maps with a domain and a co-domain. From an engineering perspective, we could see domains and co-domains as input and output ports, respectively. However, we sometimes prefer to express systems in terms of networks in which ports exit without a direction. CT provides a generalization over categories, operads, to model assemblies of structures. An _operad_ is a mathematical object that describes operations with multiple inputs and one output [34]; classically, it only describes one object. We use here what are commonly known as colored operads, which can hold many objects (i.e. colors). Leinster [17] provides a precise definition of operads. 
An operad consists of: * a set of objects \(\mathrm{Ob}(\mathcal{O})=X_{1},\ldots,X_{n}\), each of which can be seen as cells, * a set of morphisms \(\varphi,\psi,\ldots\) that constitute the operations and act as an assembly of cell interfaces creating an external interface, and * a composition formula \(\circ\), or substitution, that enables the assembly of morphisms, i.e. the composition of interfaces. Operads allow us to define abstract operations, but we need to "fill" those cells to ground them. An _algebra_ is an operad functor from the operad to the **Set** category, \(F:\mathcal{O}\rightarrow\textbf{Set}\), that produces a concrete realization. It is subject to the operations and composition relations specified by the operad. In practice, an algebra determines: * for any object \(x\in\mathcal{O}\), a set \(F(x)\), * for any morphism in the operad \(f:(x_{1},\ldots,x_{n})\to y\) and for any elements \(f_{i}\in F(x_{i})\), a new object \(f^{\prime}=F(f)(f_{1},\ldots,f_{n})\in F(y)\). Different algebras allow us to "fill" the operad structure from different perspectives and then combine them using a morphism between algebras \(\delta:A\to B\). In Section 5, _we propose two algebras, structure and behavior, for the system operad_. These algebras can be seen as the semantics of the syntax established in the operad. ### Combining information: Pushouts So far we have introduced the machinery to focus on roles and structure via categories and operads. In this work, our aim is to use abstractions to build the best possible system for each situation. Pushouts serve as a valuable tool in this process, as they provide the best approximation of an object in a category that satisfies certain conditions. _Pushouts_ can be seen as a way to combine two objects in a category with a common object so that all information is preserved. For example, in a category \(\mathcal{C}\) with three sets as objects, \(X,Y,A\in\mathrm{Ob}(\mathcal{C})\), we can have a set \(A\) representing three robots, \(A=\{robot\_1,robot\_2,robot\_3\}\), a set \(X\) representing robot grippers, \(X=\{finger\_gripper,vacuum\_gripper\}\), and a set \(Y\) representing the maximum speed (m/s) of wheeled robots, \(Y=\{1.5,3\}\). There are two morphisms in our category: \(f:A\to Y\) assigning \(robot\_1\) a maximum speed of 1.5 m/s and \(robot\_2\) a maximum speed of 3 m/s, and a morphism \(g:A\to X\) assigning \(robot\_2\) to the \(finger\_gripper\) and \(robot\_3\) to the \(vacuum\_gripper\). The pushout \(X+_{A}Y\) can be seen as the summary of the objects and their relationships; in a **Set** category, a non-disjoint union. In this case, it is a set \(X+_{A}Y=\{robot\_1\_speed\_1.5,robot\_2\_speed\_3\_and\_finger\_gripper,robot\_3\_vacuum\_gripper\}\) together with its corresponding morphisms to the sets \(X,Y\). Note that objects connected by morphisms are seen as the same element, i.e. the same system made up of subsystems \(X,Y,A\). Pushouts are usually expressed as Diagram 3, in which \(f^{\prime},g^{\prime}\) are the pushout morphisms that respectively link \(Y,X\) with the pushout object \(X+_{A}Y\). If this diagram commutes, i.e. \(g^{\prime}\circ f=f^{\prime}\circ g\), there exists a unique morphism \(u:X+_{A}Y\to T\) such that Diagram 4 also commutes. The dashed arrow represents the unique morphism, and the element \(T\) represents the unique pushout object. \(l_{x},l_{y}\) constitute the validity limits of \(T\)[18]. (3) (4)
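The small example above can be carried out mechanically. The sketch below is illustrative only and not from the paper: it computes the glued union for the robots/grippers/speeds span with a simple union-find, keeping the elements of \(A\) in the classes so that each class stays labeled by the robot it came from; with total maps \(f\) and \(g\), dropping the \(A\) tags gives exactly the set-theoretic pushout \(X+_{A}Y\).

```python
# Illustrative sketch (not from the paper): the pushout of finite sets
# X <- A -> Y, computed as a disjoint union glued by identifying, for every
# robot a in A, the speed f(a) and the gripper g(a) it is mapped to.
A = ["robot_1", "robot_2", "robot_3"]
X = ["finger_gripper", "vacuum_gripper"]                 # grippers
Y = [1.5, 3]                                             # maximum speeds (m/s)
f = {"robot_1": 1.5, "robot_2": 3}                       # f : A -> Y
g = {"robot_2": "finger_gripper", "robot_3": "vacuum_gripper"}   # g : A -> X

def pushout(A, X, Y, f, g):
    # elements of the disjoint union, tagged with their set of origin
    nodes = [("A", a) for a in A] + [("X", x) for x in X] + [("Y", y) for y in Y]
    parent = {n: n for n in nodes}

    def find(n):                     # union-find root lookup
        while parent[n] != n:
            n = parent[n]
        return n

    def union(m, n):
        parent[find(m)] = find(n)

    for a in A:                      # glue each robot to its images under f, g
        if a in f:
            union(("A", a), ("Y", f[a]))
        if a in g:
            union(("A", a), ("X", g[a]))

    classes = {}                     # equivalence classes = pushout elements
    for n in nodes:
        classes.setdefault(find(n), set()).add(n)
    return list(classes.values())

for cls in pushout(A, X, Y, f, g):
    print(sorted(str(v) for _, v in cls))
# -> {robot_1, 1.5}, {robot_2, 3, finger_gripper}, {robot_3, vacuum_gripper}
```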
## 3 Related Work CT has proven to be a powerful tool for abstract reasoning about mathematical structures. However, it is still not as widely used in applied scenarios as other mathematical disciplines, such as linear algebra or calculus. The main reasons could be that it is a highly specialized and abstract field with a steep learning curve, and that its community of practitioners is still small. Besides its main uses in mathematics, it is applied in computer science and software engineering, mainly to identify commonalities and patterns across different domains. A notable contribution is its use in the design and implementation of the C++ Standard Template Library [32]. In recent years, the number of fields that implement CT-based approaches has increased. In knowledge representation, Spivak [29] proposes the ontology log, _olog_, a CT-formalism similar to relational database schemas. It provides a friendly definition language and supports safe data migration [31]. Aligned with this approach, Patterson [24] introduces a relational ontology, which seeks to bring ologs closer to description logics. These works rely on categories, functors, and natural transformations and use pushouts to combine information and generate new concepts from old ones. From the robotics perspective, Censi [7] introduces the theory of co-design to optimize multi-objective systems. Zardini [35] applies interconnected co-design problems to a self-driving car. This approach applies wiring diagrams to optimize resource allocation in the design phase. Perhaps the vision of Lloyd is more in line with our work. Lloyd [19] suggests the use of CT language and theory to study systems and reflects on its implications. His application domain is the biological sciences. In [18], he provides a CT-centered approach to model and simulate the emergent properties of intercellular interaction. Like us, Bakirtzis [3] studies the unification of requirements, behaviors, and architectures for complex systems. He proposes the use of wiring diagrams as an alternative to engineering modeling languages such as SysML and provides a grounding example on an unmanned aerial vehicle. His perspective on _contracted behaviors_, which composes behaviors and requirements through an algebra, has inspired our recovery model introduced in Section 5.3. The main limitation of this approach is the directionality imposed by wiring diagrams. The generalization provided by operads appears to be a better structure to model complex systems. Schweiker et al. [27] propose the application of the wiring diagram operad to model safety-critical systems. In particular, they establish viewpoints with a sound mathematical framework in which behaviors and error propagation can be assessed from different perspectives. This model is applied to evaluate the failure effects of air traffic management communication channels on aircraft control. Our approach is based on the same abstract construction, the operad of wiring diagrams, as the fundamental structure for the system model. Breiner et al. [6] offer another perspective on using operads to model systems. They use the separation of concerns provided by operads to analyze a length calibration device, the length scale interferometer (LSI), used at the US National Institute of Standards and Technology (NIST). They model the system interfaces as a port graph operad. Then, they use a functor between the system and the probability of failure to perform failure diagnosis. Foley et al. [10] examine operad-based problem modeling from two angles.
It takes a top-down modeling approach using wiring diagrams operad to analyze the LSI case from [6], and a down-top perspective using network operad to design a search and rescue architecture and its design mission task plan. These works show that CT is seen as a tool to increase the dependability and explainability of the system, which is a major concern in any technology-centered domain. The growing number of publications suggests the potential of CT for practical use. However, there are still challenges in bridging the abstraction provided by CT and the concrete implementation in real-world scenarios. In this work, we try to take a step in that direction, defining a model that can be used both at the design phase and during robot operation. ## 4 The Marathon 2 Use Case The Marathon 2 experiment [20] proposes a guidance, navigation, and control (GNC) solution for mobile robots, which has become a widely adopted stack for handling perception, mapping, localization, path planning, and control. This software, known as _Nav2_, provides a design pattern that seeks modularity, flexibility, and extensibility. Section 5 models the system deployed in this experiment using CT. The described experiment intends to cover a distance greater than a marathon without human assistance. Specifically, a robot shall traverse 16 waypoints several times in a university setup. In operation, the robot faced difficulties such as human crowds and navigation around complex areas such as a central staircase, a high-traffic bridge, a hallway, and a narrow doorway. The robots used in the experiments, a Tiago [23] and a RB-1 [25] have similar base dimensions and their speed was limited to 0.45 m/s for safety reasons, which is below their maximum speed. They use differential steering on a circular base, as required by the implementation of the Adaptive Monte Carlo Localization (AMCL) [12] and the A\({}^{*}\) planner [13]. These robots use a laser and a depth camera as the main sensors. Figure 1 depicts the Marathon 2 architecture. It enables a mobile robot to autonomously reach a goal state, such as a specific position and orientation relative to a specific map. It takes as input a map, a goal pose, and the current pose, and provides velocity commands to drive the robot. In the experiments, the goal pose was the sequence of 16 waypoints. The software relies on a behavior tree (BT) [8] to provide flexibility and adaptability to perform the task. In this case, it coordinates (i) guidance through a path planner to find the best route to the goal pose, (ii) navigation based on AMCL for state estimation, and (iii) a controller to compute the path and local information to generate a precise velocity control signal. Additionally, it provides a recovery server to solve runtime contingencies. ### Recoveries in Marathon 2 Marathon 2 uses ad-hoc solutions to handle contingencies. Recovery actions depend on the specific failing component and the data it manages. As shown in Figure 1, Marathon 2 provides recoveries for the path planner, the controller, and the entire Nav2 subsystem. Nav2 logic uses a BT to orchestrate the execution of tasks. Recoveries are encoded as BT fallbacks, triggered when the specific task fails. In particular, (i) when the system cannot Figure 1: Marathon 2 software architecture. ROS topics are depicted in bold and starts with a “\(\backslash\)” symbol. The gray boxes are the main elements: mission server, map server, robot and Nav2. 
The Nav2 subsystem is further decomposed into the white boxes: state estimator or navigator, path planner or guidance, and controller. Note that the path planner, the controller and the Nav2 boxes have a diamond shape in the corner with an “R” representing that exists available recovery actions for such subsystems. generate a plan, it clears the global map used by the path planner; (ii) when the controller fails to determine a suitable velocity, it clears the corresponding area in the environmental model (i.e. local map); and (iii) when the whole GNC process fails, it first clears the entire environmental model, then spins the robot in place to reorient local obstacles, and ultimately waits a timeout. Macenski et al. [20] identified two primary scenarios that triggered recovery behaviors in the robot. The first scenario involved crowded spaces where the robot was unable to compute a clear path to its destination. If the _clear map recovery_ approach failed and the path was obstructed by people, the robot used _wait recovery_, which paused navigation until the path was clear. The second scenario occurred when localization confidence was low, particularly in long corridors with repetitive features and many people. In these cases, the _spin recovery_ approach--in which the robot spins in place--was used to improve the confidence in position before resuming its task. While these ad-hoc recovery approaches did utilize the knowledge and experience of engineers in dealing with contingencies, they lacked the ability to provide a clear rationale for the robot's actions. As a result, the reasoning behind these approaches remained solely in the minds of the engineers. This lack of transparency can give rise to concerns about the reliability of the system, as a particular recovery approach may not be appropriate for a given situation or may even adversely affect a properly functioning subsystem. For example, the planner recovery approach, which clears the global map, can propagate to the local map utilized by the faultless controller. ## 5 CT-model for the Marathon 2 In this Section, we formalize the Marathon 2 use case in a model based on CT with three intentions: (i) provide a formal model of the system with a strong mathematical foundation, (ii) represent all valid equivalent designs, and (iii) exploit that model at runtime, i.e. implement sound model-based recoveries. ### System model Operads offer an effective modeling procedure for hierarchical structures. We base our model on the wiring diagram operad, \(\mathcal{W}\), similar to the wiring diagram operad for dynamical systems explicitly defined in [33]. Our building blocks, i.e. operad objects, are boxes \(x\in\mathrm{Ob}(\mathcal{W})\) with two types of interface: resources required and resources provided. The operad morphisms \(\varphi,\psi,\ldots\) represent which composite interfaces can be produced out of atomic ones, as explained in Section 2.2. The composition formula \(\circ\) ensures that morphisms can be composed creating higher-level structures in terms of resources. In Section 2.3 we claim that operads define abstract operations, they act as "placeholders" to create a theory of composition [30]. We use algebras to define the "fillings", the concrete elements that provide and consume resources. In robotics, those resources can be seen from three perspectives (i) capability, (ii) structure, and (iii) behavior. Each viewpoint constitutes an algebra, a way to provide meaning to the abstract structure. 
#### 5.1.1 Capability model Capability refers to the "potential" ability of a system, i.e. what it is intended to do to satisfy a need. Its algebra, \(C:\mathcal{W}\rightarrow\mathbf{Set}\), gives meaning to resources in terms of needs. In particular, the capability algebra produces an instance of a \(\mathbf{Set}\) category in which: * For every object \(x\in\mathrm{Ob}(\mathcal{W})\), the set \(C(x)\) is the set of capabilities that provides and consumes resources from \(x\). * For every morphism in the operad \(f:(x_{1},\ldots,x_{n})\to y\) and for every choice of capability \(c_{1}\in C(x_{1}),\ldots,c_{n}\in C(x_{n})\) it creates a composed capability \(c^{\prime}=C(f)(c_{1},\ldots c_{n})\in C(y)\). Figure 2 displays the main capabilities of the Marathon 2 system. The operad structure is represented as boxes, the capability model is composed of two interconnected cells corresponding to the physical interaction and the computation of motion to reach the point. The mission specification and the environmental perceptible features are external capabilities consumed by the system but depicted for completeness. Figure 2: Capability model of the Marathon 2 system. Atomic capabilities in white boxes, composed capabilities in gray boxes. Lines in black represent links between atomic capabilities and dotted blue lines, links between composed capabilities. #### 5.1.2 Structural model Structure represents the logical arrangement of components--hardware or software--and its interactions. It defines the system, its composing subsystems, and its connections. The structural algebra \(S:\mathcal{O}\to\mathbf{Set}\), gives meaning to the subsystems required to transform resources. In particular, the structural algebra produces an instance of a \(\mathbf{Set}\) category in which: * For every object \(x\in\operatorname{Ob}(\mathcal{W})\), the set \(S(x)\) is the set of subsystems that provide and consume resources from \(x\). * For every morphism in the operad \(f:(x_{1},\ldots,x_{n})\to y\) and for every choice of subsystem \(s_{1}\in S(x_{1}),\ldots,s_{n}\in S(x_{n})\) it creates a system out of subsystems \(s^{\prime}=S(f)(s_{1},\ldots s_{n})\in S(y)\). In the Marathon 2 system, the structural model consists of the robot and the GNC subsystem. Its hierarchical decomposition is shown in Figure 3. The operad structure is represented as boxes, the structural model is composed of two interconnected cells corresponding to the sensor/actuator subsystem and the Nav2 subsystem. The mission server and model map are external subsystems providing resources to Marathon 2, they are depicted for completeness. #### 5.1.3 Behavioral model Behavior is generally understood as the visible actions of a system in reaction to certain inputs, conditions, or stimuli. The behavioral algebra \(B:\mathcal{O}\to\mathbf{Set}\), gives meaning to the information flow--exchanged resources--and the resulting actions from the execution of robot commands. In particular, the behavioral algebra produces an instance of a \(\mathbf{Set}\) category in which: * For every object \(x\in\operatorname{Ob}(\mathcal{W})\), the set \(B(x)\) is the set of behaviors that provides and consumes resources--information pieces--from \(x\). * For every morphism in the operad \(f:(x_{1},\ldots,x_{n})\to y\) and for every choice of behavior \(b_{1}\in B(x_{1}),\ldots,b_{n}\in B(x_{n})\) it creates a composed behavior, \(b^{\prime}=B(f)(b_{1},\ldots b_{n})\in B(y)\). 
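The three algebras share the same shape, so one toy tabulation is enough to convey the idea. The following Python sketch is illustrative only and is not the authors' formalization: the box and assembly names are ours, the composition of an operad morphism is modeled simply by combining the chosen fillings, and the component names loosely echo the Marathon 2 decomposition described above.

```python
# Illustrative sketch (names ours): the system operad as boxes plus assembly
# morphisms, and a structural algebra S that "fills" every box with a set of
# admissible subsystems. Applying S to a morphism composes chosen fillings of
# the inner boxes into a filling of the outer box.
assemblies = {
    # morphism name: (inner boxes, outer box)
    "assemble_nav2": (("navigator", "planner", "controller"), "nav2"),
    "assemble_robot": (("sensors_actuators", "nav2"), "robot"),
}

S = {  # structural algebra: box -> set of known fillings
    "sensors_actuators": {"laser + depth camera + differential base"},
    "navigator": {"AMCL"},
    "planner": {"A* planner"},
    "controller": {"local velocity controller"},   # illustrative placeholder
    "nav2": set(),
    "robot": set(),
}

def apply_structural(morphism, *fillings):
    """S(f): build a system out of subsystems and record it as a filling of
    the outer box."""
    inner, outer = assemblies[morphism]
    assert len(fillings) == len(inner), "one filling per inner box"
    assert all(f in S[b] for f, b in zip(fillings, inner)), "unknown filling"
    composite = " | ".join(fillings)
    S[outer].add(composite)
    return composite

nav2 = apply_structural("assemble_nav2", "AMCL", "A* planner",
                        "local velocity controller")
robot = apply_structural("assemble_robot",
                         "laser + depth camera + differential base", nav2)
print(robot)   # one concrete realization of the top-level 'robot' box
```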
A behavioral model specifies the actual response of the system in a particular situation, unlike our previous models, in which the system emerges from the combination of elements. In the behavioral model, this combination must also occur in the right order. Wiring diagrams are the most widespread formalism for expressing the direction of resource consumption in an SMC, as defined in Section 2.2. The behavior algebra produces a finite \(\mathbf{Set}\) category, which constitutes an SMC [30], so it can be expressed as a wiring diagram similar to the Nav2 subsystem in Figure 1. The behavioral model of Marathon 2 is a wiring diagram that has (i) as boxes the concrete algorithms used according to [20], an A\({}^{*}\) planner, an AMCL state estimator, and a TEB controller, and (ii) as wires the ROS topics depicted in bold in Figure 1. The composite behavior takes as input a map of the environment and a goal position and drives the robot there.
Figure 3: Structural model of the Marathon 2 system. Atomic subsystems in white boxes, composed subsystems—i.e. systems of systems—in gray boxes. Lines in black represent links between atomic subsystems and dotted blue lines, links between composed subsystems.
### Specification of a design A system design provides the resulting conceptualization that satisfies the three previously defined models. Figure 4 represents the relationship between the three concepts. Capability models refine the needs that the system must satisfy. These capabilities define requirements at different granularities. Structural and behavioral models propose solutions to meet their corresponding requirements. Solutions satisfying the three models provide the set of valid designs. Eventually, the "best valid design" alternative is the one to be realized. This process can be seen as a pushout \(Structure\gets Capability\to Behavior\) with requirements \(SR,BR\) as morphisms between capabilities and structure or behavior. As shown in Diagram 5, designs are a combination of both. If the diagram commutes, it means that it is equivalent to creating a design that first satisfies behavioral requirements and then structural ones, or vice versa. In this case, there exists a unique pushout object, known as the _realization_. The realization shall be the unique design that maximizes the capabilities of the system. In most systems, this is the design that maximizes behavioral performance and minimizes structural cost. For this reason, the pushout limits of validity are \(max\_perf,min\_cost\). (5) ### CT-Model-based Recoveries Models shall be used throughout the system life-cycle. We propose the use of CT-models at runtime to reconfigure the system and increase its resiliency. Today, most subsystems have specific recovery actions to ensure proper operation at the component level. However, to fully address the open issues in dependability, the system must be aware of these atomic recoveries and of how they affect other components and their shared resources. CT provides the formal framework for mapping between components and behaviors. Figure 4: Relation between capability, structural and behavioral model with requirements. These models are used to establish the best valid system design. During robot deployment, changes in the environment or system contingencies may render the best designed realization no longer valid. In this case, we propose a functor mapping to generate new _valid_ structural and behavioral models. These new models then evolve into a _recovered_ realization.
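As a toy illustration of what such a runtime mapping could look like (the names and data layout are ours, not the paper's), a realization can be tabulated as a structural model, a behavioral model, and a map from subsystems to the behaviors they implement; a recovery then rewrites one model and uses that map to check which parts of the other model are affected before it is applied.

```python
# Toy illustration (ours, not the paper's formalization): a realization as a
# structural model, a behavioral model, and a subsystem->behavior map. A
# recovery action rewrites the models; the map lets us check which healthy
# components a recovery would touch before applying it.
structure = {"planner": "A*", "controller": "TEB", "estimator": "AMCL"}
behavior = {
    "compute_path": {"runs_on": "planner", "reads": ["global_map", "goal"]},
    "follow_path": {"runs_on": "controller", "reads": ["local_map", "path"]},
    "localize": {"runs_on": "estimator", "reads": ["scan", "map"]},
}
delta = {s: [b for b, spec in behavior.items() if spec["runs_on"] == s]
         for s in structure}          # subsystem -> behaviors it implements

def affected_by_clearing(resource):
    """Which behaviors and subsystems are touched if `resource` is cleared
    (e.g. the 'clear global map' recovery of the path planner)."""
    behaviors = [b for b, spec in behavior.items() if resource in spec["reads"]]
    subsystems = [s for s, bs in delta.items() if any(b in behaviors for b in bs)]
    return behaviors, subsystems

print(affected_by_clearing("global_map"))
# -> (['compute_path'], ['planner']); in this explicit model the controller and
#    its local map are outside the scope of the recovery, so an unintended
#    propagation to a healthy subsystem would be visible before acting.
```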
From a CT perspective, there is a functor \(\delta:S\to B\) to establish the map between structure-**Set** category and behavior-**Set** category resulting from its corresponding algebras. When a contingency arises, there is an alternative mapping \(\delta^{\prime}:S^{\prime}\to B^{\prime}\) with the recovery changes applied. Diagram 6 expresses a natural transformation as defined in Section 2.1. There are two maps \(\delta,\delta^{\prime}\) between the realization category and the realization category _recovered_. The natural transformation \(\alpha\) relates these two maps. (6) The implications of the natural transformation are decomposed in Diagram 7. A valid recovery is the one that makes this diagram commute. (7) Marathon 2 recoveries described in Section 4.1 deal with crowded environments and lack of localization confidence. According to our CT model, this affects the behavioral model. For example, when the planner cannot compute a path because the map is "full of obstacles", no component is damaged. However, the solution to this recovery, clearing the global map, may affect the local map model and the controller component. In this case, a change in the behavioral model \(\alpha_{beh}\) produces a recovered structure model \(S(r^{\prime})\) through the map \(d^{top}\) that sends behaviors to structures. Another possible contingency is a failure in the laser sensor; in this case, its functionality can be provided by an alternative component, such as the depth camera. In this scenario, the system requires an additional element to transform the point cloud output produced by the depth camera into a laser scan message. Therefore, a contingency in the structural model affects the behavior of the system by changing the information flow. According to Diagram 7, a change in the structural model \(\alpha_{str}\) produces a recovered behavioral model \(B(r^{\prime})\) through the map \(\delta^{\prime}\). Discussion on CT-driven System Modeling System models constitute a vehicle for the realization of systems. They can be used throughout its entire life-cycle; for example, to define requirements, establish a design, analyze it, and verify and validate it. Formalisms can help engineers--and the robot itself--identify potential problems, examine complex systems, and communicate ideas with others in a clear and unambiguous way. Moreover, they provide consistency and systematization to increase the reusability of the model. As CT is "the language" of abstract mathematics, it becomes the perfect approach to represent systems at different levels of abstraction. Applied CT has become an increasingly attractive field, as evidenced by the notable growth in publications and research. Although it has a steep learning curve, it can be smoothed out with practitioners and collaborations across domains. The main concern when CT is applied in robotics is the lack of tool support. There is a framework under development, _Catalab_1, for applied and computational CT written in the Julia language. However, we believe that CT models can be seen as meta-models that provide a robust formal foundation to OWL ontologies, widely used in robotics [21]. To operationalize them, we shall produce user-friendly interfaces for developers to ease the use of CT models in applied domains. Footnote 1: [https://github.com/AlgebraicJulia/Catlab.jl](https://github.com/AlgebraicJulia/Catlab.jl) There are several potential research directions that could be pursued in this area. 
On the one hand, examine other paradigmatic use-cases in robotics from the CT perspective can be a powerful way to refine model abstractions and show practitioners the value of this approach. The EU-funded ROBOMINERS and CORESENSE projects try to do this. Another promising approach could be the application of the discrete-time process operad formalism [26], to model and analyze system dynamics. In our case, our next actions will be (i) measure the impact of CT model exploitation in terms of how it affects robot performance and mission fulfillment in real-world scenarios, which we have already addressed from the ontological perspective [1]; and (ii) extend the model to other activities in the system life-cycle, such as mission specification, performance metrics, testing, _etc_. These concepts can be used for validation using compositional viewpoint equations, as suggested by [27, 6]. ## 7 Conclusions In this article, we propose the use of formal abstract theories--Category Theory--for modeling in robotics. We suggest an approach to help system engineers in the initial phases of understanding CT. Furthermore, we illustrate how these formal tools can provide a systematic approach to addressing the situations that a system engineer may encounter. This is of special importance when building robust and adaptive robots that are systems of extreme complexity. Following this approach, we subsequently apply the proposed concepts to model the adaptive robot experiment outlined in [20]. This system serves as a fundamental yet im pactful use case of the _Nav2_ stack, widely extended to control mobile robots within the ROS community. The formal conceptualization goes beyond the engineering phases and endows robots with tools for better understanding of situations, which is a powerful asset towards achieving robustness and resiliency. We propose an approach to leverage explicit engineering knowledge during robot operation, which enhances the robot's dependability. In particular, we model system recoveries as (i) functorial mappings between system realizations, and (ii) natural transformations between structural and behavioral models. However, there are still many open issues with respect to a full use of CT models in robotic systems. The two main problems that we shall confront are the steep learning curve of these methods, and the lack of engineering-grade tool support. We contemplate the potential use of these formalizations as ontological meta-models, widely used in robotics nowadays. In future work, we aim to extend this model-based approach to other aspects of the robot system life-cycle. In conclusion, we believe that the steps shown in this article toward formalizing designs and its recoveries are a promising method to enhance robot autonomy. If we have learned something from the past of science and technology, is that effective scientific conceptualizations of real-world phenomena open enormous possibilities for new technology. ## Acknowledgments This work was partially supported by the ROBOMINERS project with funding from the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 820971), by the CORESENSE project with funding from the European Union's Horizon Europe Research and Innovation Programme (Grant Agreement No. 101070254), and by a grant from _Programa Propio_ of Universidad Politecnica de Madrid.
2307.14725
vox2vec: A Framework for Self-supervised Contrastive Learning of Voxel-level Representations in Medical Images
This paper introduces vox2vec - a contrastive method for self-supervised learning (SSL) of voxel-level representations. vox2vec representations are modeled by a Feature Pyramid Network (FPN): a voxel representation is a concatenation of the corresponding feature vectors from different pyramid levels. The FPN is pre-trained to produce similar representations for the same voxel in different augmented contexts and distinctive representations for different voxels. This results in unified multi-scale representations that capture both global semantics (e.g., body part) and local semantics (e.g., different small organs or healthy versus tumor tissue). We use vox2vec to pre-train a FPN on more than 6500 publicly available computed tomography images. We evaluate the pre-trained representations by attaching simple heads on top of them and training the resulting models for 22 segmentation tasks. We show that vox2vec outperforms existing medical imaging SSL techniques in three evaluation setups: linear and non-linear probing and end-to-end fine-tuning. Moreover, a non-linear head trained on top of the frozen vox2vec representations achieves competitive performance with the FPN trained from scratch while having 50 times fewer trainable parameters. The code is available at https://github.com/mishgon/vox2vec .
Mikhail Goncharov, Vera Soboleva, Anvar Kurmukov, Maxim Pisov, Mikhail Belyaev
2023-07-27T09:30:22Z
http://arxiv.org/abs/2307.14725v1
vox2vec: A Framework for Self-supervised Contrastive Learning of Voxel-level Representations in Medical Images ###### Abstract This paper introduces vox2vec -- a contrastive method for self-supervised learning (SSL) of voxel-level representations. vox2vec representations are modeled by a Feature Pyramid Network (FPN): a voxel representation is a concatenation of the corresponding feature vectors from different pyramid levels. The FPN is pre-trained to produce similar representations for the same voxel in different augmented contexts and distinctive representations for different voxels. This results in unified multi-scale representations that capture both global semantics (e.g., body part) and local semantics (e.g., different small organs or healthy versus tumor tissue). We use vox2vec to pre-train a FPN on more than 6500 publicly available computed tomography images. We evaluate the pre-trained representations by attaching simple heads on top of them and training the resulting models for 22 segmentation tasks. We show that vox2vec outperforms existing medical imaging SSL techniques in three evaluation setups: linear and non-linear probing and end-to-end fine-tuning. Moreover, a non-linear head trained on top of the frozen vox2vec representations achieves competitive performance with the FPN trained from scratch while having 50 times fewer trainable parameters. The code is available at [https://github.com/mishgon/vox2vec](https://github.com/mishgon/vox2vec). Keywords:Contrastive Self-Supervised Representation Learning Medical Image Segmentation ## 1 Introduction Medical image segmentation often relies on supervised model training [14], but this approach has limitations. Firstly, it requires costly manual annotations. Secondly, the resulting models may not generalize well to unseen data domains. Even small changes in the task may result in a significant drop in performance, requiring re-training from scratch [18]. Self-supervised learning (SSL) is a promising solution to these limitations. SSL pre-trains a model backbone to extract informative representations from unlabeled data. Then, a simple linear or non-linear head on top of the frozen pre-trained backbone can be trained for various downstream tasks in a supervised manner (linear or non-linear probing). Alternatively, the backbone can be fine-tuned for a downstream task along with the head. Pre-training the backbone in a self-supervised manner enables scaling to larger datasets across multiple data and task domains. In medical imaging, this is particularly useful given the growing number of available datasets. In this work, we focus on contrastive learning [12, 8], one of the most effective approaches to SSL in computer vision. In contrastive learning, the model is trained to produce similar vector representations for augmented views of the same image and dissimilar representations for different images. Contrastive methods can also be used to learn dense, i.e., patch-level or even pixel- or voxel-level representations: pixels of augmented image views from the same region of the original image should have similar representations, while different pixels should have dissimilar ones [23]. Several works have implemented contrastive learning of dense representations in medical imaging [25, 7, 2, 26, 29]. Representations in [25, 7] do not resolve nearby voxels due to the negative sampling strategy and the architectural reasons. This makes them unsuitable for full-resolution segmentation, especially in linear and non-linear probing regimes. 
In the current SotA dense SSL methods [2, 26], authors employ restorative learning in addition to patch-level contrastive learning, in order to pre-train voxel-level representations in full-resolution. In [29], separate global and voxel-wise representations are learned in a contrastive manner to implement efficient dense image retrieval. The common weakness of all the above works is that they do not evaluate their SSL models in linear or non-linear probing setups, even though these setups are de-facto standards for evaluation of SSL methods in natural images [8, 13, 23]. Moreover, fine-tuned models can deviate drastically from their pre-trained states due to catastrophical forgetting [11], while models trained in linear or non-linear probing regimes are more robust as they have several orders of magnitude fewer trainable parameters. Our contributions are threefold. **First**, we propose vox2vec, a framework for contrastive learning of voxel-level representations. Our simple negative sampling strategy and the idea of storing voxel-level representations in a feature pyramid form result in high-dimensional, fine-grained, multi-scale representations suitable for the segmentation of different organs and tumors in full resolution. **Second**, we employ vox2vec to pre-train a FPN architecture on a diverse collection of six unannotated datasets, totaling over 6,500 CT images of the thorax and abdomen. We make the pre-trained model publicly available to simplify the reproduction of our results and to encourage practitioners to utilize this model as a starting point for the segmentation algorithms training. **Finally**, we compare the pre-trained model with the baselines on 22 segmentation tasks on seven CT datasets in three setups: linear probing, non-linear probing, and fine-tuning. We show that vox2vec performs slightly better than SotA models in the fine-tuning setup and outperforms them by a huge margin in the linear and non-linear probing setups. To the best of our knowledge, this is the first successful attempt to evaluate dense SSL methods in the medical imaging domain in linear and non-linear probing regimes. ## 2 Related work In recent years, self-supervised learning in computer vision has evolved from simple pretext tasks like Jigsaw Puzzles [22], Rotation Prediction [17], and Patch Position Prediction [10] to the current SotA methods such as restorative autoencoders [13] and contrastive [8] or non-contrastive [9] joint embedding methods. Several methods produce dense or pixel-wise vector representations [23, 28, 6] to pre-train models for downstream tasks like segmentation or object detection. In [23], pixel-wise representations are learned by forcing local features to remain constant over different viewing conditions. This means that matching regions describing the same location of the scene on different views should be positive pairs, while non-matching regions should be negative pairs. In [28], authors define positive and negative pairs as spatially close and distant pixels, respectively. While in [6], authors minimize the mean square distance between matched pixel embeddings, simultaneously preserving the embedding variance along the batch and decorrelating different embedding vector components. The methods initially proposed for natural images are often used to pre-train models on medical images. In [25], authors propose the 3D adaptation of Jigsaw Puzzle, Rotation Prediction, Patch Position Prediction, and image-level contrastive learning. 
Another common way for pre-training on medical images is to combine different approaches such as rotation prediction [26], restorative autoencoders [2, 26], and image-level contrastive learning [2, 26]. Several methods allow obtaining voxel-wise features. The model [29] maximizes the consistency of local features in the intersection between two differently augmented images. The algorithm [29] was mainly proposed for image retrieval and uses only the feature representations at the largest and smallest scales in separate contrastive losses, while vox2vec produces voxel representations via the concatenation of feature vectors from a feature pyramid and pre-trains them in a unified manner using a single contrastive loss. Finally, a number of works propose semi-supervised contrastive learning methods [20]; however, they require additional task-specific manual labeling. ## 3 Method In a nutshell, vox2vec pre-trains a neural network to produce similar representations for the same voxel placed in different contexts (positive pairs) and to predict distinctive representations for different voxels (negative pairs). In the following Sections 3.1, 3.2, 3.3, we describe in detail the main components of our method: 1) definition and sampling of positive and negative pairs of voxels; 2) modeling voxel-level representations via a neural network; 3) computation of the contrastive loss. The whole pre-training pipeline is schematically illustrated in Figure 1. We also describe the methodology of the evaluation of the pre-trained representations on downstream segmentation tasks in Section 3.4.
Figure 1: Illustration of the vox2vec pre-training pipeline. Left: two overlapping augmented 3D patches are sampled from each volume in a batch. Markers of the same color and shape denote positive pairs of voxels. Right: voxel-level representations are obtained via the concatenation of corresponding feature vectors from different levels of the FPN. Finally, the representations are projected to the space where the contrastive loss is computed.
### Sampling of Positive and Negative Pairs We define a _positive pair_ as any pair of voxels that correspond to the same location in a given volume. Conversely, we call a _negative pair_ any pair of voxels that correspond to different locations in the same volume, as well as voxels belonging to different volumes. Figure 1 (left) illustrates our strategy for positive and negative pair sampling. For a given volume, we sample two overlapping 3D patches of size \((H,W,D)\). We apply color augmentations to them, including random Gaussian blur, random Gaussian sharpening, adding random Gaussian noise, clipping the intensities to a random Hounsfield window, and rescaling them to the \((0,1)\) interval. Next, we sample \(m\) different positions from the patches' overlapping region. Each position yields a pair of voxels -- one from each patch, which results in a total of \(m\) positive pairs of voxels. At each pre-training iteration, we repeat this procedure for \(n\) different volumes, resulting in \(2\cdot n\) patches containing \(N=n\cdot m\) positive pairs. Thus, each sampled voxel has one positive counterpart and forms negative pairs with all the remaining \(2N-2\) voxels. In our experiments we set \((H,W,D)=(128,128,32)\), \(n=10\) and \(m=1000\). We exclude the background voxels from the sampling and do not penalize their representations.
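A schematic NumPy version of this sampling step follows; it is illustrative only (not the released implementation), omits the color augmentations and background handling, simplifies patch placement, and all names are ours.

```python
import numpy as np

def sample_positive_pairs(volume, body_mask, patch_size=(128, 128, 32), m=1000):
    """Schematic version of the pair sampling in Section 3.1: crop two
    overlapping patches and return m matching voxel positions expressed in
    each patch's local coordinates. `body_mask` marks non-background voxels."""
    rng = np.random.default_rng()
    shape, size = np.array(volume.shape), np.array(patch_size)

    # two random patch origins whose boxes are guaranteed to overlap
    start1 = rng.integers(0, shape - size + 1)
    start2 = np.clip(start1 + rng.integers(-size // 2, size // 2 + 1),
                     0, shape - size)
    patch1 = volume[tuple(slice(s, s + d) for s, d in zip(start1, size))]
    patch2 = volume[tuple(slice(s, s + d) for s, d in zip(start2, size))]

    # global coordinates of valid (non-background) voxels inside the overlap
    lo, hi = np.maximum(start1, start2), np.minimum(start1, start2) + size
    overlap = np.zeros(volume.shape, dtype=bool)
    overlap[tuple(slice(l, h) for l, h in zip(lo, hi))] = True
    candidates = np.argwhere(overlap & body_mask)
    picks = candidates[rng.choice(len(candidates), size=m, replace=True)]

    # the same physical voxel, expressed in each patch's local coordinates
    return patch1, patch2, picks - start1, picks - start2
```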
We obtain the background voxels by using a simple two-step algorithm: 1) thresholding voxels with an intensity less than \(-500\) HU; 2) keeping voxels from the same connected component as the corner voxel of the CT volume, using a flood fill algorithm. ### Architecture A standard architecture for voxel-wise prediction is the 3D UNet [24]. UNet's backbone returns a feature map of the same resolution as the input patch. However, our experiments show that this feature map alone is insufficient for modeling self-supervised voxel-level representations. The reason is that producing a feature map with more than 100 channels in full resolution is infeasible due to memory constraints. Meanwhile, to be suitable for many downstream tasks, representations should have a dimensionality of about 1000, as in [8]. To address this issue, we utilize a 3D FPN architecture instead of a standard 3D UNet. The FPN returns voxel-level representations in the form of a feature pyramid. The pyramid's base is a feature map with 16 channels of the same resolution as the input patch. Each subsequent pyramid level has twice as many channels and two times lower resolution than the previous one. Each voxel's representation is a concatenation of the corresponding feature vectors from all the pyramid levels. We use an FPN with six pyramid levels, which results in 1008-dimensional representations. See Figure 1 (right) for an illustration. ### Loss Function At each pre-training iteration, we feed \(2\cdot n\) patches to the FPN and obtain the representations for \(N\) positive pairs of voxels. We denote the representations in the \(i\)-th positive pair as \(h_{i}^{(1)}\) and \(h_{i}^{(2)}\), \(i=1,\ldots,N\). Following [8], instead of penalizing the representations directly, we project them onto the 128-dimensional unit sphere via a trainable 3-layer perceptron \(g(\cdot)\) followed by \(\ell_{2}\)-normalization: \(z_{i}^{(1)}=g(h_{i}^{(1)})/\|g(h_{i}^{(1)})\|\), \(z_{i}^{(2)}=g(h_{i}^{(2)})/\|g(h_{i}^{(2)})\|\), \(i=1,\ldots,N\). Similar to [8], we use the InfoNCE loss as a contrastive objective: \(\mathcal{L}=\sum_{i=1}^{N}\sum_{k\in\{1,2\}}\mathcal{L}_{i}^{k}\), where \[\mathcal{L}_{i}^{k}=-\log\frac{\exp(\langle z_{i}^{(1)},z_{i}^{(2)}\rangle/\tau)}{\exp(\langle z_{i}^{(1)},z_{i}^{(2)}\rangle/\tau)+\sum_{j\in\{1,\ldots,N\}\setminus\{i\}}\sum_{l\in\{1,2\}}\exp(\langle z_{i}^{(k)},z_{j}^{(l)}\rangle/\tau)}.\] ### Evaluation protocol We evaluate the quality of the self-supervised voxel-level representations on downstream segmentation tasks in three setups: 1) linear probing, 2) non-linear probing, and 3) end-to-end fine-tuning. Linear or non-linear probing means training a voxel-wise linear or non-linear classifier on top of the frozen representations. If the representations are modeled by the UNet model, such a classifier can be implemented as one or several convolutional layers with kernel size 1 on top of the output feature map. A linear voxel-wise head (linear FPN head) can be implemented as follows. Each pyramid level is separately fed to its own convolutional layer with kernel size 1. Then, as the number of channels on all pyramid levels has decreased, they can be upsampled to the full resolution and summed up. This operation is equivalent to applying a linear classifier to the FPN voxel-wise representations described in Section 3.2. The linear FPN head has four orders of magnitude fewer parameters than the FPN.
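A PyTorch-style sketch of such a linear FPN head is shown below. It is illustrative rather than the released code: the channel widths follow the six-level pyramid described in Section 3.2 (16 + 32 + ... + 512 = 1008), while the trilinear upsampling mode and the 14-class example are our assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

class LinearFPNHead(nn.Module):
    """One kernel-size-1 convolution per pyramid level; per-level logits are
    upsampled to full resolution and summed. This is equivalent to a single
    linear classifier applied to the concatenated per-voxel representations."""

    def __init__(self, num_classes, channels=(16, 32, 64, 128, 256, 512)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv3d(c, num_classes, kernel_size=1) for c in channels
        )

    def forward(self, pyramid):
        # pyramid: list of tensors [B, C_l, H_l, W_l, D_l], finest level first
        full_size = pyramid[0].shape[2:]
        logits = 0
        for conv, feats in zip(self.convs, pyramid):
            x = conv(feats)
            if x.shape[2:] != full_size:
                x = F.interpolate(x, size=full_size, mode='trilinear',
                                  align_corners=False)
            logits = logits + x
        return logits                          # [B, num_classes, H, W, D]

# usage sketch on a random six-level pyramid for a (128, 128, 32) patch
head = LinearFPNHead(num_classes=14)           # e.g. 13 organs + background
pyramid = [torch.randn(1, c, 128 // 2**i, 128 // 2**i, max(32 // 2**i, 1))
           for i, c in enumerate((16, 32, 64, 128, 256, 512))]
logits = head(pyramid)                         # torch.Size([1, 14, 128, 128, 32])
```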
It has 50 times fewer parameters than the entire FPN architecture. In the end-to-end fine-tuning setup, we attach the voxel-wise non-linear head, but in contrast to the non-linear probing regime, we also train the backbone. ## 4 Experiments ### Pre-training We use vox2vec to pre-train both FPN and UNet models (further vox2vec-FPN and vox2vec-UNet) in order to ablate the effect of using a feature pyramid instead of a single full-resolution feature map for modeling voxel-wise representations. For pre-training, we use 6 public CT datasets [15, 21, 1, 3, 5, 27], totaling more than 6550 CTs and covering the abdomen and thorax domains. We do not use the annotations for these datasets during the pre-training stage. Pre-processing includes the following steps: 1) cropping to the minimal volume containing all the voxels with an intensity greater than \(-500\) HU; 2) interpolation to the voxel spacing of \(1\times 1\times 2\) mm\({}^{3}\) (intensities are clipped and rescaled at the augmentation step, see Section 3.1). We pre-train both models for 100K batches using the Adam optimizer [16] with a learning rate of 0.0003. Both models are trained on a single A100-40GB GPU for an average of 3 days. Further details about the pre-training setup can be found in the Supplementary materials. ### Evaluation We evaluate our method on the Beyond the Cranial Vault Abdomen (BTCV) [19] and Medical Segmentation Decathlon (MSD) [4] datasets. The BTCV dataset consists of 30 CT scans along with annotations for 13 different organs. We test our method on 6 CT MSD datasets, which include 9 different organ and tumor segmentation tasks. A 5-fold cross-validation is used for the BTCV experiments, and a 3-fold cross-validation for the MSD experiments. The segmentation performance of each model on the BTCV and MSD datasets is evaluated by the Dice score. For our method, the pre-processing steps are the same for all datasets as at the pre-training stage, but in addition, intensities are clipped to the \((-1350,1000)\) HU window and rescaled to \((0,1)\). We compare our results with the current state-of-the-art self-supervised methods [26, 2] in medical imaging. The pre-trained weights for the SwinUNETR encoder and TransVW UNet are taken from the official repositories of the corresponding papers. In these experiments, we keep the crucial pipeline hyperparameters (e.g., spacing, clipping window, patch size) the same as in the original works. To evaluate the pre-trained SwinUNETR and TransVW in the linear and non-linear probing setups, we use similar linear and non-linear head architectures as for vox2vec-FPN (see Section 3.4). SwinUNETR and TransVW cost 391 GFLOPs and 1.2 TFLOPs, respectively, compared to 115 GFLOPs for vox2vec-FPN. We train all models for 45000 batches of size 7 (the batch size for SwinUNETR is set to 3 due to memory constraints), using the Adam optimizer with a learning rate of 0.0003. In the fine-tuning setup, we freeze the backbone for the first 15000 batches and then exponentially increase the learning rate for the backbone parameters from 0.00003 up to 0.0003 during 1200 batches. ## 5 Results The mean value and standard deviation of the Dice score across 5 folds on the BTCV dataset for all models in all evaluation setups are presented in Table 1. vox2vec-FPN performs slightly better than the other models in the fine-tuning setup. However, considering the standard deviation, all the fine-tuned models perform on par with their counterparts trained from scratch.
Nevertheless, vox2vec-FPN significantly outperforms the other models in the linear and non-linear probing regimes. On top of that, we observe that in the non-linear probing regime it performs (within the standard deviation) as well as the FPN trained from scratch, while having 50 times fewer trainable parameters (see Figure 2). We demonstrate an example of the excellent performance of vox2vec-FPN in both linear and non-linear probing regimes in the Supplementary materials. We reproduce the key results on the MSD challenge CT datasets, which contain tumor and organ segmentation tasks. Table 2 shows that in the vox2vec representation space, organ voxels can be separated from tumor voxels with a quality comparable to the model trained from scratch. A t-SNE embedding of vox2vec representations on MSD is available in the Supplementary materials. ## 6 Conclusion In this work, we present vox2vec -- a self-supervised framework for voxel-wise representation learning in medical imaging. Our method expands the contrastive learning setup to the feature pyramid architecture, allowing us to pre-train effective representations at full resolution. By pre-training an FPN backbone to extract informative representations from unlabeled data, our method scales to large datasets across multiple task domains. We pre-train an FPN architecture on more than 6500 CT images and test it on various segmentation tasks, including organ and tumor segmentation, in three setups: linear probing, non-linear probing, and fine-tuning. Our model outperformed existing methods in all regimes. Moreover, vox2vec establishes a new state-of-the-art result in the linear and non-linear probing scenarios. Still, this work has a few limitations to consider. We plan to investigate further how the performance of vox2vec scales with the increasing size of the pre-training dataset and the pre-trained architecture size. Another interesting research direction is exploring the effectiveness of vox2vec in the domain adaptation and few-shot learning scenarios. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline model & Sp & Kid & Ch.
& Eu & Li & St & Alex & VPC & Py & Pa & AG & Avg \\ \hline \multicolumn{10}{c}{**from scratch**} \\ \hline TransAW & UNet & 79.2 & 82.7 & 41.9 & 63.9 & 87.2 & 61.1 & 86.0 & 61.3 & 61.7 & 51.4 & 68.0 & \(\pm\) 2.1 \\ SwinNET & 90.8 & 87.8 & 60.4 & 60.8 & 94.7 & 72.8 & 88.0 & 81.8 & 67.7 & 60.6 & 61.5 & 77.0 & \(\pm\) 2.5 \\ UNet & 91.1 & 88.5 & 55.8 & 72.3 & 60.0 & **88.8** & 90.2 & 60.3 & 70.4 & 63.2 & 78.2 & 2.3 \\ FPN & **92.4** & 80.3 & **60.9** & 71.1 & **90.3** & 82.7 & 90.1 & **83.9** & 60.0 & 71.8 & 62.5 & 78.5 & \(\pm\) 2.2 \\ \multicolumn{10}{c}{**flower probing**} \\ \hline TransAW & 34.4 & 22.7 & 8.9 & 34.4 & 26.8 & 72.1 & 22.0 & 15.0 & 85.8 & 82.0 & 26.6 & \(\pm\) 1.1 \\ SwinNET & 44.4 & 35.3 & 7.6 & 28.7 & 72.4 & 71.8 & 36.6 & 20.9 & 14.4 & 36.1 & 18.7 & \(\pm\) 2.1 \\ random-run & 68.0 & 68.1 & 32.0 & 38.6 & 41.3 & 63.0 & 52.4 & 27.2 & 72.9 & 48.6 & \(\pm\) 3.0 \\ vvone-UN & 77.4 & 78.8 & 28.9 & 37.0 & 45.2 & 72.5 & 78.3 & 36.0 & 40.9 & 43.6 & 67.9 & \(\pm\) 2.0 \\ vvone-FPN & 87.4 & 64.0 & 75.0 & 60.1 & 62.5 & 56.7 & 77.5 & 66.6 & 88.3 & 53.3 & \(\pm\) 2.0 & \(\pm\) 1.2 \\ \multicolumn{10}{c}{**non-linear probing**} \\ \hline TransAW & 24.9 & 31.5 & 67.1 & 81.0 & 49.4 & 27.2 & 71.0 & 12.0 & 71.2 & 15.4 & 23.5 & \(\pm\) 2.7 \\ random-run & 77.6 & 67.0 & 34.1 & 67.1 & 78.2 & 72.5 & 72.3 & 30.2 & 28.6 & 31.5 & \(\pm\) 2.1 & \(\pm\) 4.9 \\ SwinNET & 77.0 & 74.4 & 68.1 & 82.1 & 67.0 & 71.3 & 73.5 & 58.1 & 67.2 & 33.3 & 30.9 & \(\pm\) 3.5 & \(\pm\) 2.6 \\ vvone-UNet & 80.3 & 81.4 & 34.1 & 42.1 & 61.0 & 79.6 & 71.6 & 64.7 & 43.3 & 37.6 & 69.6 & \(\pm\) 3.0 \\ vvone-FPN & 91.0 & 80.2 & 50.7 & 67.5 & 73.8 & 72.8 & 84.0 & 67.4 & 69.1 & 20.9 & 75.5 & \(\pm\) 1.7 \\ \multicolumn{10}{c}{**fine-tuning**} \\ \hline TransAW & 77.5 & 88.7 & 42.9 & 65.3 & 50.3 & 58.2 & 72.3 & 67.3 & 67.4 & 54.0 & 67.8 & \(\pm\) 1.9 \\ SwinNET & 84.2 & 84.7 & 55.4 & 70.4 & 94.5 & 70.6 & 87.7 & 82.1 & 67.0 & 63.5 & \(\pm\) 3.3 \\ vvone-UNet & 84.1 & 90.1 & 51.3 & 72.5 & 65.8 & 80.9 & 82.6 & 66.9 & 71.1 & 61.5 & 77.6 & \(\pm\) 1.0 \\ vvone-FPN & 91.4 & **90.7** & **93.5** & **72.7** & **96.3** & **51.2** & **91.3** & **83.9** & **90.2** & **73.9** & **65.3** & \(\pm\) 1.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Average cross validation Dice scores on BTCV multi-organ segmentation dataset. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline & & Liver & Lung & Pancreas & Hepatic vessel & Spleen & Colon \\ \hline model & organ tumor & tumor & organ tumor & organ tumor & organ & cancer \\ \multicolumn{10}{c}{**from scratch**} \\ FPN & 94.4 & 44.6 & 53.1 & **77.1** & 28.0 & 53.7 & 49.4 & 96.0 & **32.2** \\ \multicolumn{10}{c}{**non-linear probing**} \\ \hline vox2vec-FPN & 94.7 & 43.9 & 49.5 & 71.4 & 28.5 & 58.1 & 54.8 & 95.1 & 24.8 \\ \multicolumn{10}{c}{**fine-tuning**} \\ SwinUNETR & 95.0 & 49.3 & 55.2 & 75.2 & **35.9** & **60.9** & 57.5 & 95.5 & 29.2 \\ vox2vec-FPN & **95.6** & **51.0** & **56.6** & 77.0 & 31.8 & 59.5 & **62.4** & **96.1** & 30.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Cross validation Dice score on CT tasks of MSD challenge. Figure 2: Dice score on BTCV cross-validation averaged for all organs w.r.t. the number of trainable parameters of different models in different evaluation setups. AcknowledgementsThis work was supported by the Russian Science Foundation grant number 20-71-10134.
2310.11292
Two subspace methods for frequency sparse graph signals
We study signals that are sparse in the graph spectral domain and develop explicit algorithms to reconstruct the support set as well as partial components from samples on few vertices of the graph. The number of required samples is independent of the total size of the graph and takes only local properties of the graph into account. Our results rely on an operator based framework for subspace methods and become effective when the spectral eigenfunctions are zero-free or linearly independent on small sets of the vertices. The latter has recently been addressed using algebraic methods by the first author.
Tarek Emmrich, Martina Juhnke-Kubitzke, Stefan Kunis
2023-10-17T14:16:16Z
http://arxiv.org/abs/2310.11292v1
# Two subspace methods for frequency sparse graph signals ###### Abstract We study signals that are sparse in the graph spectral domain and develop explicit algorithms to reconstruct the support set as well as partial components from samples on few vertices of the graph. The number of required samples is independent of the total size of the graph and takes only local properties of the graph into account. Our results rely on an operator based framework for subspace methods and become effective when the spectral eigenfunctions are zero-free or linearly independent on small sets of the vertices. The latter has recently been addressed using algebraic methods by the first author. _Key words and phrases_: Signal processing on graphs, Sparse graph Fourier transform. _2020 AMS Mathematics Subject Classification_ : 41A30, 05C50. ## 1 Introduction The Fourier transform has been generalised for signals defined on the vertex set of a graph [17] and thus can serve also in the study of convolution operators in popular areas like geometric deep learning [1, 12]. Special cases are the well-known discrete Fourier transform (DFT), the discrete cosine transform, and the Walsh-Hadamard transform, which are the Fourier transforms on the circle graph, the path graph, and the hypercube, respectively. In all three cases fast algorithms are available for transforming any signal between the spatial/vertex domain and the frequency domain, and these rely on divide-and-conquer strategies available for these specific graphs. More recently, the a priori assumption that only a small number of Fourier coefficients is non-zero has gained some attention and led to so-called sparse FFTs [9, 6] which reduce the spatial sampling effort as well as the running time considerably using several quite specific properties of the DFT. In greater generality, compressive sensing has also gained some popularity over the last two decades and its success for the DFT relies on a so-called robust uncertainty principle, see e.g., [5, Thm. 12.32], using tailored probabilistic arguments for _random sampling_. On the other hand, subspace methods are known to realise parameters (such as the active frequencies in a sparse sum of exponentials) as eigenvalues of certain matrices obtained from _consecutive samples_, see e.g. [2, 18]. Their success relies on certain rank conditions which can be fulfilled by Chebotarev's uncertainty principle [20] for the DFT of prime length. The primary goal of this short note is the discussion of two subspace methods for graphs which rely on 1) a recent generalisation of Chebotarev's uncertainty principle to graphs by the first author [4] and 2) a meaningful interpretation of consecutive samples for graphs. Our first result is a variant of [18] adapted to the graph setting and uses samples in the neighbourhood of one vertex on which the eigenfunctions are assumed not to vanish. The special case of a circle graph has been considered e.g., in [11]. The second method samples the graph signal in smaller neighbourhoods of several vertices, see also [13] for a similar approach in so-called multi-snapshot spectral estimation. We refer the interested reader to [19] for a recent survey on sampling graph signals.
However, note that in most cases the frequency-support of the signal is assumed to be known and we are only aware of [11], where the frequency-support is effectively computed for signals on the circle graph, and [14, Proposition 2], which states that samples in the neighbourhood of one vertex suffice for the identification of the support but only \(\ell_{0}\)-minimization is suggested as a reconstruction algorithm. The article is structured as follows: Section 2 briefly introduces the considered graph signal model together with two well-understood examples. All main results are contained in Section 3 and we close with a generalisation to simplicial complexes in Section 4. ## 2 Preliminaries For a finite set of _vertices_ \(V=[n]\) and \(E\subseteq\binom{V}{2}\) we call \(G=(V,E)\) a _graph_. We call two vertices \(v,w\) _connected_, if \(\{v,w\}\in E\) and write \(v\sim w\). The _adjacency matrix_ \(A\in\{0,1\}^{n\times n}\) of \(G\) is defined by \[A_{v,w}=\begin{cases}1,\text{ if }\{v,w\}\in E,\\ 0,\text{ otherwise.}\end{cases}\] The _combinatorial Laplacian matrix_ \(L\in\mathbb{R}^{n\times n}\) is the matrix \(L=D-A\), with \(D_{v,v}=\deg(v)=|\{\{v,w\}\in E\}|\). The Laplacian matrix is symmetric and positive semidefinite with eigenvalues \(0=\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{n}\) and orthonormal eigenvectors \(n^{-1/2}(1,\ldots,1)^{\top}=u_{1},u_{2},\ldots,u_{n}\in\mathbb{R}^{n}\). We write \[L=U\Lambda U^{*}, \tag{2.1}\] with \(\Lambda=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{n})\) and \(U=(u_{1},\ldots,u_{n})\). Throughout this paper, we assume the graph to be _simple_ and _connected_, which is equivalent to \(A_{v,v}=0\), \(v\in V\), and \(\lambda_{2}>0\), respectively. We define the _distance_ \(d(v,w)\) of two vertices \(v,w\) by the smallest integer \(r\) such that there exist vertices \(v=v_{0},v_{1},\ldots,v_{r}=w\) with \(\{v_{k},v_{k+1}\}\in E\). For a vertex \(v\in V\) the _\(k\)-neighbourhood_ of \(v\) is the set \(N(v,k)=\{w\in V:d(w,v)\leq k\}\). We are interested in the analysis of signals defined on the vertices of the graph, i.e., vectors \(f\in\mathbb{R}^{n}\) are identified with functions \(f\colon V\to\mathbb{R}\). We are especially asking under which conditions an \(s\)-frequency-sparse signal \[f=\sum_{j\in S}\beta_{j}u_{j},\qquad\beta_{j}\neq 0,\;S\in\binom{[n]}{s},\;s\ll n, \tag{2.2}\] can be efficiently recovered from its samples \(f(v)\), \(v\in W\subset V\), \(|W|\ll n\), without knowing the support \(S\). For notational convenience, let \(f_{W}\) and \(U_{W,S}\) denote the restriction of \(f\) and \(U\) to the rows in \(W\) and columns in \(S\), respectively. **Example 2.1** (Circle graph).: Let us start with the well-studied circle graph on \(n\) vertices as illustrated in Figure 2.1. On this graph, the periodic forward shift has matrix representation \[\tilde{A}_{k,\ell}=\begin{cases}1,&\text{if }k=\ell-1\mod n,\\ 0,&\text{otherwise},\end{cases}\] the adjacency matrix is \(A=\tilde{A}+\tilde{A}^{\top}\) and the Laplacian matrix is \(L=2I-A\). All three matrices are diagonalised by the Fourier matrix \(U=\left(\mathrm{e}^{2\pi ikj/n}\right)_{k,j\in[n]}\) and sparse signals with respect to this basis are well understood. We have: * If \(n\) is prime, then a classical result by Chebotarev, see e.g., [20], asserts that no minor of \(U\) vanishes, which implies a strong uncertainty principle and uniqueness of any \(s\)-sparse signal when at least \(2s\) arbitrary samples are known, see also Theorem 3.1 for the precise statement.
* If \(n\) is arbitrary and \(m\geq Cs\log^{4}n\) vertices are _randomly_ selected, i.e., rows of the matrix \(U\) are chosen uniformly at random, then with high probability every subset \(S\subset[n]\) of the columns yields a well-conditioned matrix and every \(s\)-sparse signal can be stably recovered by means of basis pursuit or specific greedy algorithms, see e.g., [5, Thm. 12.32]. * If \(n\) is arbitrary and \(m\geq 2s\) _consecutive_ vertices are selected, then every \(s\)-sparse signal can be recovered by subspace methods like Prony's method, the matrix pencil method, ESPRIT, or MUSIC, see e.g., [2, 18]. Stability is guaranteed if and only if the support \(S\) is well-separated, i.e., if \(\min_{j,\ell\in S,j\neq\ell}\min_{r\in\mathbb{Z}}|j-\ell+rn|>n/m\), see e.g., [7]. **Example 2.2** (One-sparse signal).: We now consider an arbitrary graph and the one-sparse signal \(f=\beta_{j}u_{j}\) for some \(j\in[n]\). In this case, we have \[\beta_{j}\lambda_{j}u_{j}(v)=(Lf)(v)=\sum_{w\sim v}\left(f(v)-f(w)\right)\quad\text{and hence}\quad\lambda_{j}=\frac{1}{f(v)}\sum_{w\sim v}\left(f(v)-f(w)\right)\] if \(u_{j}(v)\neq 0\). That is, the single active eigenvalue can be recovered from sampling the neighbouring vertices of a vertex on which the signal does not vanish. We generalise this idea to larger sparsity by means of so-called subspace methods. Moreover, note that \(u_{j}(v)=0\) for any nontrivial eigenvalue \(\lambda_{j}\neq 0\) implies \[0=(\lambda_{j}u_{j})(v)=(Lu_{j})(v)=-\sum_{w\sim v}u_{j}(w)\] which can be read as a local symmetry property of the eigenfunction and graph. ## 3 Sampling and reconstruction methods In what follows, we discuss two subspace methods (a.k.a. Prony's method) to first solve for the occurring eigenvalues and in a second step for the eigenfunctions in the reconstruction problem (2.2). For the first method, we adapt the operator based approach [18] to the graph Laplacian, resulting in sampling the \((2s-1)\)-neighbourhood of one vertex for recovering an \(s\)-sparse signal. The second method uses smaller neighbourhoods of several vertices and a similar idea can be found in [13] under the name multi-snapshot spectral estimation. We will analyse the number of required samples and the runtime of these approaches. The computation of the occurring eigenvalues can be done without prior knowledge of the eigendecomposition of the Laplacian \(L\). Up to a scalar factor we can also compute the eigenfunctions locally without knowledge of the whole eigendecomposition. ### Minimal sampling sets determining sparse signals Before considering the two specific methods, we will discuss a well-known result that \(2s\) samples are necessary and, under an additional assumption, also sufficient for the reconstruction problem (2.2). We say that a matrix \(U\in\mathbb{R}^{n\times n}\) is _Chebotarev_, if none of its minors vanishes. Two explicit examples are the Fourier matrix in Example 2.1 if \(n\) is prime, see [20], and any Vandermonde-matrix with positive nodes, see [5, Thm. A.25]. As each minor is a polynomial in the entries of the matrix, any sufficiently random matrix is Chebotarev almost surely and we expect this property for the eigenvector matrix to hold except for specific obstructions by the graph itself. A detailed analysis of this property by the first author can be found in [4] for adjacency and Laplacian eigenvector matrices of random graphs. The following result is well-known and we include a short proof for the reader's convenience. **Theorem 3.1** (e.g., [5, Thm.
2.13]).: _For any \(s\in\mathbb{N}\), \(s\leq\frac{n}{2}\), \(W\subset V\), \(|W|\leq 2s-1\), and \(S_{f}\in\binom{[n]}{s}\), there exist a set \(S_{g}\in\binom{[n]}{s}\) and coefficients \(\hat{f},\hat{g}\in\mathbb{R}^{n}\) with \(\operatorname{supp}\hat{f}\subset S_{f}\), \(\operatorname{supp}\hat{g}\subset S_{g}\),_ \[\sum_{k\in S_{f}}\hat{f}_{k}u_{k}=f\neq g=\sum_{k\in S_{g}}\hat{g}_{k}u_{k},\quad\text{but}\quad f_{W}=g_{W}.\] _On the other hand, if the eigenvector matrix \(U\in\mathbb{C}^{n\times n}\) is Chebotarev, then for any \(W\subset V\), \(|W|\geq 2s\), the samples at \(W\) uniquely determine any \(s\)-sparse signal, i.e.,_ \[f_{W}=g_{W}\quad\text{implies}\quad f=g.\] Proof.: Without loss of generality, let \(S_{f}\cap S_{g}=\emptyset\) and consider the restricted eigenvector matrix \[U_{W,S_{f}\cup S_{g}}\in\mathbb{R}^{|W|\times 2s}\] which has a nontrivial kernel in the first case and has full column rank by assumption in the second case. An algorithm using such a minimal sampling set \(W\) would compute the eigenvector matrix \(U_{W}\in\mathbb{R}^{|W|\times n}\) first and then use a combinatorial search for the smallest set \(S\subset[n]\) which allows one to interpolate all \(|W|\) samples. For each candidate set \(S\) out of the \(\binom{n}{s}\) possibilities, this asks for solving the overdetermined linear system of equations \((u_{k}(v))_{v\in W,k\in S}\cdot(\hat{f}_{k})_{k\in S}=(f(v))_{v\in W}\). ### Sampling one neighbourhood The first method builds on an operator-based formulation of Prony's method as considered in [18]. We rely on the two facts that the powers of the Laplace operator applied to the signal and evaluated at a fixed vertex * can be computed recursively from neighbouring samples via \[(L^{k}f)(v)=\sum_{w\sim v}\left((L^{k-1}f)(v)-(L^{k-1}f)(w)\right),\qquad L^{0}f=f,\] * and that they constitute an exponential sum (with respect to the eigenvalues) \[(L^{k}f)(v)=\sum_{j\in S}\alpha_{j}\lambda_{j}^{k},\qquad\alpha_{j}\coloneqq\beta_{j}u_{j}(v).\] **Theorem 3.2**.: _Let \(G\) be a graph, \(U\in\mathbb{R}^{n\times n}\) be the matrix of eigenvectors of \(L\), and \(v\in V\) be some fixed vertex. Then every \(s\)-sparse signal \(f=\sum_{j\in S}\beta_{j}u_{j}\) with \(u_{j}(v)\neq 0\), \(j\in S\), can be recovered from the samples \(f(w)\), \(w\in W:=N(v,2s-1)\). In addition to the support set \(S\), also the restricted eigenfunctions \(\beta_{j}u_{j}(w)\), \(w\in N(v,s)\), can be reconstructed._ Proof.: The samples \(f(w)\), \(w\in W\), allow us to calculate \(g(k):=(L^{k}f)(v)\) for \(k=0,\ldots,2s-1\) and the remaining task is to solve the equations \[\sum_{j\in S}\alpha_{j}\lambda_{j}^{k}=g(k)\] for the unknowns \(\lambda_{j}\), e.g., via Prony's method [18], or one of its variants. Beyond this first step, the recovery of the weighted eigenfunctions relies on \[\left(\prod_{k\in S\setminus\{j\}}\frac{L-\lambda_{k}I}{\lambda_{j}-\lambda_{k}}\right)f=\sum_{i\in S}\beta_{i}\prod_{k\in S\setminus\{j\}}\frac{(L-\lambda_{k}I)\,u_{i}}{\lambda_{j}-\lambda_{k}}=\sum_{i\in S}\beta_{i}\prod_{k\in S\setminus\{j\}}\frac{\lambda_{i}-\lambda_{k}}{\lambda_{j}-\lambda_{k}}u_{i}=\beta_{j}u_{j} \tag{3.1}\] which can be evaluated for all vertices \(w\in V\) with \(d(w,v)\leq s\) since the product of the shifted Laplace operators in the leftmost expression uses the \((s-1)\)-neighbourhood of this vertex \(w\), which stays within the \((2s-1)\)-neighbourhood of the fixed vertex \(v\).
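A minimal NumPy sketch of this one-neighbourhood procedure is given below, using the path-graph setting of Example 3.4 below as a test case. The dense eigendecomposition and full matrix powers are used only to build the synthetic signal and its moments; in practice the recursion above touches only the \((2s-1)\)-neighbourhood of \(v\).

```python
import numpy as np

# Path graph Laplacian L = D - A on n vertices (cf. Example 3.4).
n, s, v = 20, 2, 0                      # v = vertex 1 in the paper's 1-based indexing
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

lam, U = np.linalg.eigh(L)              # only used to build a synthetic test signal
f = 1.0 * U[:, 2] + 0.2 * U[:, 14]      # s-sparse signal with S = {3, 15} (1-based)

# Step 1: moments g(k) = (L^k f)(v) for k = 0, ..., 2s-1;
# locally these follow from the recursion above using only N(v, 2s-1).
g = [np.linalg.matrix_power(L, k).dot(f)[v] for k in range(2 * s)]

# Step 2 (Prony step): a kernel vector of the s x (s+1) Hankel matrix gives the
# coefficients of the polynomial whose roots are the active eigenvalues.
H = np.array([[g[i + j] for j in range(s + 1)] for i in range(s)])
p = np.linalg.svd(H)[2][-1]             # right singular vector spanning ker(H)
active = np.sort(np.roots(p[::-1]).real)
print(np.allclose(active, np.sort(lam[[2, 14]])))   # expected: True

# Step 3: recover the weighted eigenfunctions via (3.1) (here s = 2).
for lam_j, lam_k in [(active[0], active[1]), (active[1], active[0])]:
    beta_u = (L - lam_k * np.eye(n)).dot(f) / (lam_j - lam_k)
    # beta_u equals beta_j * u_j; from local samples only its values on N(v, s)
    # would be available, as stated in Theorem 3.2.
```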
**Remark 3.3** (Sampling effort and computational complexity).: Let \(m\) denote the total number of edges in the subgraph induced by the vertices in the neighbourhood \(N(v,2s-1)\). Then the Laplace matrix restricted to these vertices has \(O(m)\) nonzero entries and thus the first step needs at most \(O(sm)\) operations. We note in passing that the factor \(s\) can be removed if the neighbourhood of \(v\) grows exponentially fast. The second and third step take \(O(s^{3})\) floating point operations. Similar to the first step, the last step in (3.1) takes at most \(O(s^{2}m)\) floating point operations. **Example 3.4**.: We illustrate Algorithm 1 on the path graph shown in Figure 3.1. Figure 3.1: The path graph. The path graph on \(n\) vertices has edge set \(E=\{\{v,v+1\}:v=1,\ldots,n-1\}\). The Laplace matrix is given by \[L=\begin{pmatrix}1&-1&0&\ldots&0\\ -1&2&\ddots&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&2&-1\\ 0&\ldots&0&-1&1\end{pmatrix}.\] Its simple eigenvalues and eigenvectors are \[\lambda_{j}=2-2\cos\left(\frac{\pi(j-1)}{n}\right)\in[0,4),\;u_{j}(v)=\frac{\sqrt{2-\delta_{1,j}}}{\sqrt{n}}\cdot\cos\left(\frac{\pi(j-1)(2v-1)}{2n}\right),\;v,j=1,\ldots,n.\] To give an explicit example, let \(n=20\), \(s=2\), \(S=\{3,15\}\), \(f=1u_{3}+\frac{1}{5}u_{15}\), and \(v=1\). Figure 3.2 shows the sparse coefficient vector (on the active eigenvalues \(\lambda_{3}\) and \(\lambda_{15}\)) and the graph signal \(f\) on the vertices \(1,\ldots,20\), respectively. Figure 3.2: From left to right: Sparse coefficient vector on active eigenvalues, graph signal on the vertices \(1,\ldots,20\) together with its used samples on the vertices \(W=\{1,2,3,4\}\) (blue circles), and the decomposition via (3.1) into \(1u_{3}\) and \(\frac{1}{5}u_{15}\) on the vertices \(W=\{1,2,3\}\) (blue dots). All continuous curves are only meant to support visual inspection. Algorithm 1 uses the samples \((f(1),\ldots,f(4))\approx(0.34,0.22,0.27,0.15)\), see Figure 3.2 (right, blue circles), computes \(g(0),\ldots,g(3)\approx(0.34,0.12,0.29,0.92)\), sets up the Hankel matrix and from its kernel vector the companion matrix, i.e., \[H\approx\begin{pmatrix}0.34&0.12&0.29\\ 0.12&0.29&0.92\end{pmatrix},\qquad P\approx\begin{pmatrix}0.00&-0.31\\ 1.00&3.27\end{pmatrix}.\] Here, the absolute error between the numerically computed eigenvalues of the companion matrix and the true active eigenvalues \(\lambda_{3}\), \(\lambda_{15}\), as well as the maximal error between the numerically computed samples via (3.1) and the true components \(1u_{3}\) and \(\frac{1}{5}u_{15}\), is smaller than \(10^{-14}\). **Remark 3.5**.: The condition \(u_{j}(v)\neq 0\) is much weaker than being Chebotarev, since only the size-one minors are concerned. Moreover, note that this condition is true for all eigenfunctions and all vertices if the characteristic polynomial of the Laplace matrix is irreducible over the rationals up to the trivial factor for the eigenvalue zero, see [4, Thm. 6] for details and a discussion for random graphs. **Remark 3.6** (Multiple eigenvalues).: The leftmost bracketed term in (3.1) is the projection operator onto the eigenspace of the semisimple eigenvalue \(\lambda_{j}\). If this subspace has dimension larger than one, we only recover this projection and no individual decomposition in a basis of this subspace.
An example is given by the circle graph in Example 2.1, where the \(j\)-th and the \((n+2-j)\)-th column of the Fourier matrix belong to the same real eigenvalue of the Laplacian matrix. This can be seen as the discrete analogue of the two eigenfunctions \(t\mapsto\exp(\pm\lambda t)\) for the Laplace operator on the real line with eigenvalue \(\lambda\). Even larger multiplicities occur e.g., for the Laplace-Beltrami operator on the sphere and seem to be likely for specific discretisations of the sphere and small eigenvalues. ### Sampling several neighbourhoods Since, depending on the graph, the size of the set \(W=N(v,2s-1)\) can grow relatively fast with \(s\), we want to improve the above approach so that fewer samples are used. The decrease in the size of the neighbourhood will be compensated by considering several neighbourhoods, see also [13] for a similar idea in so-called multi-snapshot spectral estimation. **Theorem 3.7**.: _Let \(G\) be a graph, \(U\in\mathbb{R}^{n\times n}\) be the matrix of eigenvectors of \(L\), fix some vertices \(v_{1},\ldots,v_{t}\in V\) and radii \(r_{1},\ldots,r_{t}\in\mathbb{N}\), \(r=r_{1}+\ldots+r_{t}\). Then every \(s\)-sparse signal \(f=\sum_{j\in S}\beta_{j}u_{j}\) can be recovered from the samples \(f(w)\), \(w\in W:=\cup_{i=1}^{t}N(v_{i},s-1+r_{i})\) if the matrix_ \[B=\begin{pmatrix}\alpha_{1,j_{1}}&\alpha_{1,j_{2}}&\ldots&\alpha_{1,j_{s}}\\ \lambda_{j_{1}}\alpha_{1,j_{1}}&\lambda_{j_{2}}\alpha_{1,j_{2}}&\ldots&\lambda_{j_{s}}\alpha_{1,j_{s}}\\ \vdots&\vdots&\vdots&\vdots\\ \lambda_{j_{1}}^{r_{1}-1}\alpha_{1,j_{1}}&\lambda_{j_{2}}^{r_{1}-1}\alpha_{1,j_{2}}&\ldots&\lambda_{j_{s}}^{r_{1}-1}\alpha_{1,j_{s}}\\ \alpha_{2,j_{1}}&\alpha_{2,j_{2}}&\ldots&\alpha_{2,j_{s}}\\ \lambda_{j_{1}}\alpha_{2,j_{1}}&\lambda_{j_{2}}\alpha_{2,j_{2}}&\ldots&\lambda_{j_{s}}\alpha_{2,j_{s}}\\ \vdots&\vdots&&\vdots\\ \lambda_{j_{1}}^{r_{t}-1}\alpha_{t,j_{1}}&\lambda_{j_{2}}^{r_{t}-1}\alpha_{t,j_{2}}&\ldots&\lambda_{j_{s}}^{r_{t}-1}\alpha_{t,j_{s}}\end{pmatrix}\in\mathbb{R}^{r\times s},\qquad\alpha_{i,j}=\beta_{j}u_{j}(v_{i}),\] _has full column rank \(s\)._ Our result in Theorem 3.2 is the case \(t=1\), \(r_{1}=s\), where \(B\) has full rank by the factorization \(B=(\lambda_{j}^{k})_{k=0,\ldots,r_{1}-1;\,j\in S}\cdot\operatorname{diag}((\alpha_{1,j})_{j\in S})\) into the product of a Vandermonde matrix with respect to the eigenvalues and a diagonal matrix with nonzero diagonal entries if \(u_{j}(v_{1})\neq 0\) and \(\beta_{j}\neq 0\). Proof.: The main idea is to use the equations \[g_{i}(k):=(L^{k}f)(v_{i})=\sum_{j\in S}\alpha_{i,j}\lambda_{j}^{k}\] not only for one vertex \(v\) but for all vertices \(v_{1},\ldots,v_{t}\). We set up a stacked Hankel matrix of these samples which allows for the following Vandermonde factorization \[\tilde{H}=\begin{pmatrix}g_{1}(0)&g_{1}(1)&\ldots&g_{1}(s-1)\\ g_{1}(1)&g_{1}(2)&\ldots&g_{1}(s)\\ \vdots&\vdots&&\vdots\\ g_{1}(r_{1}-1)&g_{1}(r_{1})&\ldots&g_{1}(s-2+r_{1})\\ g_{2}(0)&g_{2}(1)&\ldots&g_{2}(s-1)\\ g_{2}(1)&g_{2}(2)&\ldots&g_{2}(s)\\ \vdots&\vdots&&\vdots\\ g_{t}(r_{t}-1)&g_{t}(r_{t})&\ldots&g_{t}(s-2+r_{t})\end{pmatrix}=B\cdot C,\qquad C=\left(\lambda_{j}^{k}\right)_{j\in S,k=0,\ldots,s}.\] First note that the kernel of the Vandermonde matrix \(C\) is one dimensional and \(Cp=0\) can be rephrased as the polynomial \(p=\sum_{k=0}^{s}p_{k}\lambda^{k}\) having exactly the roots \(\lambda_{j}\), \(j\in S\). If the matrix \(B\) has full column rank \(s\), then \(\ker\tilde{H}=\ker C\).
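The eigenvalue-recovery step of this proof admits a short numerical sketch: one block of moment rows per sampled vertex is stacked, and a kernel vector of the resulting matrix yields the polynomial whose roots are the active eigenvalues. The dense matrix powers and the particular test vertices below are illustrative assumptions; in practice each block only requires samples from \(N(v_{i},s-1+r_{i})\).

```python
import numpy as np

def active_eigenvalues(L, f, vertices, radii, s):
    """Stacked-Hankel (multi-snapshot) recovery of the active eigenvalues of an
    s-sparse signal f, following Theorem 3.7; assumes the matrix B of the
    theorem has full column rank."""
    rows = []
    for v, r in zip(vertices, radii):
        g = [np.linalg.matrix_power(L, k).dot(f)[v] for k in range(s + r)]
        rows += [[g[k + j] for j in range(s + 1)] for k in range(r)]
    H = np.array(rows)                        # stacked Hankel matrix
    p = np.linalg.svd(H)[2][-1]               # kernel vector of H
    return np.sort(np.roots(p[::-1]).real)

# Demo on the path-graph signal of Example 3.4, using one moment row per vertex.
n, s = 20, 2
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1); L[0, 0] = L[-1, -1] = 1
lam, U = np.linalg.eigh(L)
f = U[:, 2] + 0.2 * U[:, 14]
print(active_eigenvalues(L, f, vertices=[0, 5], radii=[1, 1], s=s))  # ~ lam[2], lam[14]
```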
The second extremal case is \(r_{1}=\ldots=r_{t}=1\) and thus \(r=t\) and for simplicity also \(t=s\), where the matrix \(B\) has full column rank if and only if the eigenvectors restricted to \(W\) are linearly independent. A sufficient condition for this to happen is that the full eigenvector matrix \(U\) is Chebotarev. **Corollary 3.8**.: _Let \(G\) be a graph, \(U\in\mathbb{R}^{n\times n}\) be the matrix of eigenvectors of \(L\), and fix some vertices \(v_{1},\ldots,v_{s}\in V\). Then every \(s\)-sparse signal \(f=\sum_{j\in S}\beta_{j}u_{j}\) can be recovered from the samples \(f(w)\), \(w\in W:=\cup_{i=1}^{s}N(v_{i},s)\) if the matrix_ \[U_{W,S}=(u_{j}(v_{i}))_{i=1,\ldots,s;j\in S}\in\mathbb{R}^{s\times s}\] _is regular._ **Remark 3.9** (Rank of \(B\)).: If \(r_{1}=r_{2}=\ldots=r_{t}\), then the matrix \(B\) in Theorem 3.7 can be written as the column-wise Kronecker product \[B=(B_{j_{1}}\otimes\Lambda_{j_{1}},\ldots,B_{j_{s}}\otimes\Lambda_{j_{s}})\,,\qquad B_{j}=\begin{pmatrix}\alpha_{1,j}\\ \vdots\\ \alpha_{t,j}\end{pmatrix},\quad\Lambda_{j}=\begin{pmatrix}\lambda_{j}^{0}\\ \vdots\\ \lambda_{j}^{r_{1}-1}\end{pmatrix}.\] Together with [10, Eq.(2.2)], \(\tilde{B}=(B_{j_{1}},\ldots,B_{j_{s}})\), and \(\tilde{\Lambda}=(\Lambda_{j_{1}},\ldots,\Lambda_{j_{s}})\), we have the identity \(B^{\top}B=\tilde{B}^{\top}\tilde{B}\circ\tilde{\Lambda}^{\top}\tilde{\Lambda}\), where \(\circ\) denotes the Hadamard/entrywise product. The rank of \(B^{\top}B\) (and thus also of \(B\)) is bounded from above by the product of the ranks of the factors \(\tilde{B}\) and \(\tilde{\Lambda}\) - and we suspect that this is attained generically, see also [3]. Other recent results [8] show that \(B\) has full rank as soon as certain rank and Kruskal rank conditions on the factors are met - which however would become effective in our situation only if \(\min\{r_{1},t\}\geq s\) instead of \(t\cdot r_{1}\geq s\). **Remark 3.10** (Sampling effort).: Algorithm 1 can be adapted by replacing the Hankel matrix by the stacked Hankel matrix and skipping the last step. In the case of Corollary 3.8, we expect that far fewer samples of the graph signal are needed - \(s\) times an \(s\)-neighbourhood in contrast to one \((2s-1)\)-neighbourhood. **Example 3.11**.: For the path graph one might be tempted to use the method of Theorem 3.7 in an efficient way that needs fewer samples. For example, any \(6\)-sparse signal \(f\) and the sequence \(r_{1}=3\), \(r_{2}=2\), \(r_{3}=1\) leads to the sampling set \(N(1,8)\cup N(2,7)\cup N(3,6)=\{1,\ldots,9\}\), which however is too small for recovery by Theorem 3.1. Moreover, there are cases where the neighbourhood contains enough vertices but the matrix \(B\) is still singular. Let \(n\geq 6\) and \(G\) be the graph as shown in Figure 3.3. Setting \(s=r=3\), \(r_{n}=2\), and \(r_{n-1}=1\) samples each graph signal at all vertices, but the matrix \[B=\begin{pmatrix}\alpha_{n,j_{1}}&\alpha_{n,j_{2}}&\alpha_{n,j_{3}}\\ \lambda_{j_{1}}\alpha_{n,j_{1}}&\lambda_{j_{2}}\alpha_{n,j_{2}}&\lambda_{j_{3}}\alpha_{n,j_{3}}\\ \alpha_{n-1,j_{1}}&\alpha_{n-1,j_{2}}&\alpha_{n-1,j_{3}}\end{pmatrix}\] is singular since \(\lambda_{j}\alpha_{n,j}=\alpha_{n,j}-\alpha_{n-1,j}\). Figure 3.3: The umbrella graph. ## 4 Generalization to simplicial complexes In this section we will generalize the results for sparse signals on graphs to simplicial complexes. We will see that the special structure of the Laplacian and its eigenspaces allows us to even improve the results for higher-dimensional simplicial complexes.
Similar to graphs, there are Laplacian matrices or Laplacian operators for simplicial complexes [15]. Let \(\Delta\) be a simplicial complex on \([n]\), i.e., \(\Delta\) is a collection of subsets of \([n]\) that is closed under inclusion. We denote by \(\partial\) the boundary operator of the chain complex \(C_{\bullet}(\Delta)\) with \(C_{k}(\Delta)\) being the free \(\mathbb{R}\)-vector space of \(k\)-chains whose basis is given by the \(k\)-faces of \(\Delta\), i.e., by elements in \(\Delta_{k}=\binom{[n]}{k+1}\cap\Delta\). We use \(\partial_{k}\) to denote the \(k\)-th operator of \(\partial\), i.e., \[\partial_{k}(\langle v_{0},\ldots,v_{k}\rangle)=\sum_{i=0}^{k}\left(-1\right)^{i}\cdot\langle v_{0},\ldots,\bar{v_{i}},\ldots,v_{k}\rangle,\] where \(\bar{v_{i}}\) means that the element \(v_{i}\) is omitted. The \(k\)_-th Laplacian operator_ \(L_{k}(\Delta)\), or just \(L_{k}\), if the complex is clear from the context, is defined by \[L_{k}=\underbrace{\partial_{k}^{*}\circ\partial_{k}}_{L_{k}^{\rm DN}}+\underbrace{\partial_{k+1}\circ\partial_{k+1}^{*}}_{L_{k}^{\rm UP}},\] where we also use \(L_{k}\), \(L_{k}^{\rm UP}\) and \(L_{k}^{\rm DN}\) for the matrix representation of the \(k\)-th Laplacian (up or down)-operator. Note that \(L_{0}=L_{0}^{\rm UP}\) is the usual graph Laplacian of the \(1\)-skeleton of \(\Delta\) and that \(\partial_{1}\) is the vertex-edge incidence matrix. It is well-known that \[\ker(L_{k})\cong H_{k}(\Delta,\mathbb{R}),\] where \(H_{k}(\Delta)=\ker(\partial_{k})/\operatorname{im}(\partial_{k+1})\) denotes the \(k\)-th homology group of \(\Delta\) with coefficients in \(\mathbb{R}\). We are interested in the eigenvalues and eigenvectors of \(L_{k}\). The former are again real because of the symmetry, and they are nonnegative. Furthermore, we know that \[\operatorname{Spec}_{\neq 0}(L_{k})=\operatorname{Spec}_{\neq 0}(L_{k}^{\rm UP})\cup\operatorname{Spec}_{\neq 0}(L_{k}^{\rm DN})\] since the operators are self-adjoint and mutually annihilating, i.e., \(\partial_{k}\circ\partial_{k+1}=0\). Thus we can decompose \[C_{k}(\Delta)=\operatorname{Eig}_{\neq 0}(L_{k}^{\rm UP})\oplus\operatorname{Eig}_{\neq 0}(L_{k}^{\rm DN})\oplus H_{k}(\Delta,\mathbb{R}).\] We write again \(L_{k}=U\Lambda U^{*}\) with columns \(u_{i}\) of \(U\). Given a simplicial complex \(\Delta\), we say that \(k\)-faces \(\sigma,\tau\in\Delta\) have distance \(d\), denoted by \(d(\sigma,\tau)=d\), if \(d\in\mathbb{N}\) is minimal such that there exists a sequence \(\sigma=\tau_{0},\ldots,\tau_{d}=\tau\) of \(k\)-faces of \(\Delta\) with \(|(\tau_{i}\cap\tau_{i+1})|=k\) for all \(0\leq i\leq d-1\). We define the \(d\)-neighbourhood of \(\sigma\) as \[N(\sigma,d)\coloneqq\{\tau\in\binom{[n]}{k+1}\ :\ d(\sigma,\tau)\leq d\}.\] Essentially the same theorems as in the graph case hold. We want to analyze how well we can recover an \(s\)-sparse sum of eigenvectors \[f=\sum_{i\in S}\beta_{i}u_{i}\] by sampling \(f\) at relatively few \(k\)-faces. As in the graph case, we will need to sample at a neighbourhood of a \(k\)-face, but we can improve on the size of the neighbourhood. We start with the reconstruction of \(L_{k}^{\mathrm{UP}}\)- or \(L_{k}^{\mathrm{DN}}\)-signals, analogously to Theorem 3.2. **Theorem 4.1**.: _Let \(\Delta\) be a simplicial complex, \(L_{k}\) its \(k\)-th Laplacian matrix, \(L_{k}=U\Lambda U^{*}\) and \(T\in\{L_{k}^{\mathrm{UP}},L_{k}^{\mathrm{DN}}\}\).
Every \(s\)-sparse signal_ \[f=\sum_{j\in S}\beta_{j}u_{j},\text{ for }u_{j}\in\mathrm{Eig}_{\neq 0}(T)\text{ for all }j\in S,\] _with only one eigenvector per eigenvalue can be recovered at the face \(\sigma\in\Delta_{k}\) by sampling the values \(f(\tau)\) for all \(\tau\in N(\sigma,2s-1)\), if \(u_{j}(\sigma)\neq 0\) for all \(j\in S\)._ We decompose the set \(S\) into \(S=S^{\mathrm{UP}}\cup S^{\mathrm{DN}}\cup S^{0}\), according to \(i\in S^{\mathrm{UP}}\) iff \(u_{i}\in\mathrm{Eig}(L_{k}^{\mathrm{UP}})\), \(i\in S^{\mathrm{DN}}\) iff \(u_{i}\in\mathrm{Eig}(L_{k}^{\mathrm{DN}})\) and \(i\in S^{0}\) iff \(u_{i}\in H_{k}(\Delta,\mathbb{R})\). **Theorem 4.2**.: _Let \(\Delta\) be a simplicial complex, \(L_{k}\) its \(k\)-th Laplacian matrix, \(L_{k}=U\Lambda U^{*}\). Let_ \[f=\sum_{i\in S}\beta_{i}u_{i}\] _be a signal with \(u_{i}\notin\ker(L_{k})\). We can recover the representation of \(f\) by recovering the representations of \(L_{k}^{\mathrm{UP}}(f)\) and \(L_{k}^{\mathrm{DN}}(f)\). This requires sampling in the \(2\cdot\max\{|S^{\mathrm{DN}}|,|S^{\mathrm{UP}}|\}\)-neighbourhood._ Proof.: By applying \(L_{k}^{\mathrm{UP}}\) and \(L_{k}^{\mathrm{DN}}\) to \(f\) we obtain two sums with a reduced number of summands, namely \[f^{\mathrm{UP}}=\sum_{i\in S^{\mathrm{UP}}}\beta_{i}\lambda_{i}u_{i}\text{ and }f^{\mathrm{DN}}=\sum_{i\in S^{\mathrm{DN}}}\beta_{i}\lambda_{i}u_{i}.\] This gives us two sums that can be recovered individually. A priori we do not know the sizes of the sets \(S^{\mathrm{UP}}\) and \(S^{\mathrm{DN}}\). We know for sure that \(|S^{\mathrm{UP}}|+|S^{\mathrm{DN}}|\leq s\) and thus we know that one of the set sizes is bounded by \(\frac{s}{2}\). In a first step, we therefore assume that they are \(\frac{s}{2}\)-sparse. One of these sums will be recovered and will also reveal the real number of summands in the sum. We hence know an upper bound for the number of summands in the other sum and can recover this sum in a second step. By applying \(L_{k}^{\mathrm{UP}}\) and \(L_{k}^{\mathrm{DN}}\) we lose the contribution of the eigenvectors corresponding to the eigenvalue \(0\). The sampling of \(f^{\mathrm{UP}}\) and \(f^{\mathrm{DN}}\) in an \(r\)-neighbourhood requires the knowledge of \(f\) in an \((r+1)\)-neighbourhood. Hence we need the \(2r\)- and not the \((2r-1)\)-neighbourhood from the prior theorem. We continue with the most general statement. The matrix \(B\) is the same matrix as in Theorem 3.7. **Theorem 4.3**.: _Let \(\Delta\) be a simplicial complex, \(L_{k}\) its \(k\)-th Laplacian matrix, \(L_{k}=U\Lambda U^{*}\) and \(T\in\{L_{k}^{\mathrm{UP}},L_{k}^{\mathrm{DN}}\}\). Every \(s\)-sparse signal_ \[f=\sum_{j\in S}\beta_{j}u_{j},\text{ for }u_{j}\in\mathrm{Eig}_{\neq 0}(T)\text{ for all }j\in S,\] _with only one eigenvector per eigenvalue can be recovered by sampling the values \(f(\tau)\) for \(\tau\in W\) if there exists a set \(R\subseteq\Delta_{k}\) and integers \(r_{\sigma}\geq 1\) for \(\sigma\in R\), with \(\sum_{\sigma\in R}r_{\sigma}=s\) and \(N(\sigma,s-1+r_{\sigma})\subseteq W\), and if the corresponding matrix \(B\) is regular._ Reddy and Chepuri [16] claim that any sparse signal can be uniquely recovered by sampling in some neighbourhood of a simplex. However, they do not give explicit algorithms and do not use the smaller sampling trick using \(f^{\mathrm{UP}}\) and \(f^{\mathrm{DN}}\). Also, the full recovery of the kernel in any case seems daring.
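For readers who prefer explicit matrices, a small NumPy sketch of the objects used in this section, for a single filled triangle, is given below; the face ordering and orientation conventions are chosen for illustration only.

```python
import numpy as np

# Filled triangle on vertices {0, 1, 2}: edges (0,1), (0,2), (1,2) and one 2-face (0,1,2).
# Columns of d1 are boundaries of edges; the column of d2 is the boundary of the triangle.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
d2 = np.array([[ 1],
               [-1],
               [ 1]])

L1_dn = d1.T @ d1          # L_1^DN = d1* d1
L1_up = d2 @ d2.T          # L_1^UP = d2 d2*
L1 = L1_dn + L1_up         # first Hodge Laplacian, acting on edge signals

# d1 @ d2 = 0 (mutually annihilating operators), and ker(L1) is isomorphic to H_1,
# which is trivial for the filled triangle; here the spectrum of L1 is {3, 3, 3}.
print(np.allclose(d1 @ d2, 0), np.linalg.eigvalsh(L1))
```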
2302.07839
A "morphogenetic action" principle for 3D shape formation by the growth of thin sheets
How does growth encode form in developing organisms? Many different spatiotemporal growth profiles may sculpt tissues into the same target 3D shapes, but only specific growth patterns are observed in animal and plant development. In particular, growth profiles may differ in their degree of spatial variation and growth anisotropy, however, the criteria that distinguish observed patterns of growth from other possible alternatives are not understood. Here we exploit the mathematical formalism of quasiconformal transformations to formulate the problem of "growth pattern selection" quantitatively in the context of 3D shape formation by growing 2D epithelial sheets. We propose that nature settles on growth patterns that are the 'simplest' in a certain way. Specifically, we demonstrate that growth pattern selection can be formulated as an optimization problem and solved for the trajectories that minimize spatiotemporal variation in areal growth rates and deformation anisotropy. The result is a complete prediction for the growth of the surface, including not only a set of intermediate shapes, but also a prediction for cell displacement along those surfaces in the process of growth. Optimization of growth trajectories for both idealized surfaces and those observed in nature show that relative growth rates can be uniformized at the cost of introducing anisotropy. Minimizing the variation of programmed growth rates can therefore be viewed as a generic mechanism for growth pattern selection and may help to understand the prevalence of anisotropy in developmental programs.
Dillon J. Cislo, Anastasios Pavlopoulos, Boris I. Shraiman
2023-02-15T18:22:43Z
http://arxiv.org/abs/2302.07839v1
# A "morphogenetic action" principle for 3D shape formation by the growth of thin sheets ###### Abstract How does growth encode form in developing organisms? Many different spatiotemporal growth profiles may sculpt tissues into the same target 3D shapes, but only specific growth patterns are observed in animal and plant development. In particular, growth profiles may differ in their degree of spatial variation and growth anisotropy, however, the criteria that distinguish observed patterns of growth from other possible alternatives are not understood. Here we exploit the mathematical formalism of quasiconformal transformations to formulate the problem of "growth pattern selection" quantitatively in the context of 3D shape formation by growing 2D epithelial sheets. We propose that nature settles on growth patterns that are the'simplest' in a certain way. Specifically, we demonstrate that growth pattern selection can be formulated as an optimization problem and solved for the trajectories that minimize spatiotemporal variation in areal growth rates and deformation anisotropy. The result is a complete prediction for the growth of the surface, including not only a set of intermediate shapes, but also a prediction for cell displacement along those surfaces in the process of growth. Optimization of growth trajectories for both idealized surfaces and those observed in nature show that relative growth rates can be uniformized at the cost of introducing anisotropy. Minimizing the variation of programmed growth rates can therefore be viewed as a generic mechanism for growth pattern selection and may help to understand the prevalence of anisotropy in developmental programs. ## I Introduction Morphogenesis, the process through which genes generate form, transforms simple initial configurations of cells into complex and specific shapes [1; 2]. In order to accomplish this task of self-organization, morphogenetic programs employ both short and long-range molecular signaling, as well as intercellular mechanical interactions, to spatiotemporally coordinate gene expression with cell growth and proliferation [3; 4]. These interdependent processes all feedback and couple to each other in nontrivial ways. Controlled cell proliferation and rearrangement forces tissues to deform mechanically, generating specific target architectures selected by evolution from within the enormous'morphospace' of possible shapes and configurations that living systems can assume (Fig. 1(a)). To help elucidate how developmental processes define the shape of a tissue, we examine the common case of 3D shapes being formed by the growth of 2D tissues, which occurs, for instance, in arthropod limb morphogenesis [5; 6] and in plant shoot apical meristem [7]. From a purely geometric perspective, the same target 3D shape can be generated by infinitely many different growth patterns [8; 9]. In particular, the "morphogenetic trajectories" (i.e. the sequence of growth and local shape changes) of clonal regions defining each growth pattern may be markedly different despite converging to equivalent tissue scale shapes as a whole. Nevertheless, actual developmental processes are stereotypic across length and time scales, from the behavior of small domains of dividing cells [10], to coarse-grained tissue scale flows [11], to entire organismal stages of development [12; 13; 14]. 
Here we formulate the question of "growth pattern selection" by defining possible criteria that distinguish between different spatiotemporal growth programs leading to the same final 3D shape. We address the problem of shape formation in the context of epithelial morphogenesis. Epithelia are the fundamental tissue scale building block of multicellular systems and regularly undergo dramatic shape changes during development [15; 16]. These tissue scale transformations are driven by a limited repertory of cell behaviors including, most notably, cell growth and division, as well as cell rearrangement and shape change (Fig. 1(b)) [17; 18; 19]. It is intuitively evident that cell division and growth within a 2D tissue layer will tend to increase the layer's area in order to avoid strong lateral compression of cells and concomitant thickening of the epithelial monolayer [20]. This increase in the tissue area can be accommodated by out-of-plane bending that relieves the resultant in-plane mechanical stress. 3D shape formation due to nonuniform in-plane expansion has been documented in biological morphogenesis (e.g. in leaf growth [21]) and has also been realized in engineered materials [22]. Here, we focus on the shape-forming capabilities of spatially modulated in-plane growth in thin tissues, deferring discussion of additional mechanisms of shape control available in thicker tissues or bilayers [23]. Crucially, cell division and tissue growth are often anisotropic. In addition to specifying a non-uniform profile of areal growth, morphogenetic programs can also prescribe a variable degree of growth anisotropy with preferred orientations that vary throughout the tissue. Such anisotropy is a ubiquitous hallmark in the growth and development of plants and animals. Numerous morphogenetic processes, including organogenesis [24; 25; 26], appendage formation [27; 5] and vertebrate gastrulation [28; 29], have been shown to feature anisotropic components. Developmental systems differ not only in their spatiotemporal patterns of growth, but also in the molecular-genetic and cell-biological mechanisms that define and reproducibly establish these patterns. Our analysis focuses on understanding the general relationship between spatiotemporal patterns of anisotropic 2D growth and 3D shape. Accordingly, we subsume the details of the underlying cellular processes into a coarse-grained, effective theory that integrates collective cell behaviors into smooth tissue scale deformations. In this framework, active internal processes within the tissue induce changes to the coarse-grained _intrinsic geometry_ (see Fig. 1(b)) [9; 30]. As the intrinsic geometry is updated, the physical geometry morphs via mechanical relaxation in an attempt to match the intrinsically specified target configuration. This intuitive description is captured mathematically by the formalism of metric elasticity [31; 32] extended to morphogenetic dynamics by relating the time derivative of the metric - the mathematical descriptor of _intrinsic geometry_ - to anisotropic growth. Different growth scenarios are distinct in their distributions of clonal regions, each clone being derived by cell division from a single ancestral cell at an early stage of the process. Fig. 1(b) illustrates the different final configurations produced by an isotropic and an anisotropic growth pattern with the same initial configuration and uniform areal growth rates.
If we allow growth rates to vary in both space and time, we can produce the same final coarse-grained shape through either isotropic or anisotropic growth patterns (Fig. 1(a)). Hence different growth trajectories leading to the same shape may require very different morphogenetic programs to be encoded and executed by the cells. We can quite generally link local growth patterns on the scale of cells with the global shape of the tissue by considering incremental maps of the surface, defined explicitly by tracking expanding clone territories over short times. On the coarse-grained level, such maps can be generated by (locally expanding) continuous flows. The Lagrangian morphogenetic trajectories of these flows characterize the behaviors of individual cells and their offspring, thereby distinguishing between the different growth scenarios that lead to identical final shapes. This correspondence enables trajectories of physical configurations in morphospace to be quantified in terms of the time dependence of the intrinsic geometry. Flows of cells become smooth time dependent maps. In particular, flows due to isotropic growth are angle-preserving conformal maps, whereas flows due to anisotropic growth are described by quasiconformal maps, i.e. smooth transformations of bounded anisotropic distortion [33; 34; 35]. Quasiconformal (QC) transformations provide a natural language for describing anisotropic growth and linking it to surface geometry. Using the mathematical formalism of quasiconformal maps, we demonstrate how complex, nonlinear deformations can be decomposed into sequences of simple infinitesimal updates and thereby linked to infinitesimal changes to the intrinsic geometry of a thin tissue expressed in terms of an anisotropic local growth rate. This will, in turn, allow us to formulate a simple action-like principle for growth pattern selection, given fixed initial and final shapes of the surface. We propose that developmental growth patterns are the 'simplest' growth strategies in the sense of requiring the least amount of _a priori_ information for their specification. Specifically, we consider minimization of the spatiotemporal variation in growth rates and anisotropy. Figure 1: (a) Dynamic growth patterns are trajectories in ‘morphospace’. An arbitrary growth pattern can be reconstructed from the time course of local area and anisotropy in material regions. Different growth patterns may generate the same final shape, but with dissimilar distributions of material patches. Even growth patterns that generate the same final configuration may differ drastically in terms of the realized intermediate configurations. _Inset:_ A schematic depiction of how the Beltrami coefficient, \(\mu\), encodes the local anisotropy induced by a quasiconformal map. (b) Coarse-grained effective theory integrates cellular behaviors into tissue scale deformations. Processes at the cell scale are combined to collectively induce tissue scale shape change. The anisotropy, or lack thereof, of tissue scale growth is determined by the average preferred direction of the underlying cell-scale behaviors. Coarse-graining the cellular motifs yields the _growth tensor_, \(\dot{\mathbf{g}}_{t}\), a tensorial representation of the rate and orientation of change of the tissue’s intrinsic geometry. In the schematic, circular patches represent material parcels of tissue that deform along Lagrangian pathlines. Dark colored patches highlight growth and deformation of a clonal region.
Optimizing the corresponding functional for given initial and final geometries produces a complete, testable prediction for the full time course of a dynamic growth pattern. Focusing on the case of constant growth, in which growth rates can vary in space, but are held constant in time, we use this "morphogenetic action" principle to deduce the optimal growth trajectories for a variety of synthetic shapes. Finally, we apply our approach to deduce the optimal growth trajectories for limb morphogenesis in the crustacean _Parhyale hawaiensis_, where developing appendages can be analyzed _in toto_ with single-cell resolution [36]. ## II Quasiconformal Parametrizations of Anisotropic Growth Patterns Let us represent a growing thin tissue with a continuous curved surface \(\mathcal{M}_{t}\subset\mathbb{R}^{3}\) and assign to each material parcel in the tissue a set of curvilinear Lagrangian coordinates \(\vec{\mathbf{x}}=(x^{1},x^{2})\in\mathcal{B}\) defined over a planar domain of parameterization \(\mathcal{B}\subset\mathbb{R}^{2}\). The embedding of the surface in 3D is a time dependent map \(\vec{\mathbf{R}}(\vec{\mathbf{x}},t):\mathcal{B}\rightarrow\mathbb{R}^{3}\). Each point \(\vec{\mathbf{R}}(\vec{\mathbf{x}},t)\equiv\vec{\mathbf{R}}_{t}(\vec{\mathbf{x }})\in\mathcal{M}_{t}\) is characterized by its tangent vectors, \(\vec{\mathbf{e}}_{1}=\partial\vec{\mathbf{R}}/\partial x^{1}\) and \(\vec{\mathbf{e}}_{2}=\partial\vec{\mathbf{R}}/\partial x^{2}\), and its unit normal vector \(\hat{\mathbf{n}}=\vec{\mathbf{e}}_{1}\times\vec{\mathbf{e}}_{2}/||\vec{ \mathbf{e}}_{1}\times\vec{\mathbf{e}}_{2}||\). The covariant geometry of the surface is captured by the induced metric tensor \[g_{\alpha\beta}(\vec{\mathbf{x}},t)=\frac{\partial\vec{\mathbf{R}}_{t}(\vec{ \mathbf{x}})}{\partial x^{\alpha}}\cdot\frac{\partial\vec{\mathbf{R}}_{t}( \vec{\mathbf{x}})}{\partial x^{\beta}}, \tag{1}\] which quantifies lengths and angles between nearby points on the surface. From this 'top-down' perspective, \(\mathbf{g}(\vec{\mathbf{x}},t)\equiv\mathbf{g}_{t}(\vec{\mathbf{x}})\) is a characterization of the surface embedding defined with respect to a given \(\vec{\mathbf{R}}_{t}(\vec{\mathbf{x}})\). We can alternatively assume a complementary 'bottom-up' perspective in which \(\mathbf{g}_{t}(\vec{\mathbf{x}})\) is instead viewed as a direct description of the underlying local geometry, i.e. the position, size, shape, and arrangement of material regions labelled by the time independent Lagrangian coordinates \(\vec{\mathbf{x}}\). Accordingly, we can calculate the 3D embedding \(\vec{\mathbf{R}}_{t}(\vec{\mathbf{x}})\) in terms of \(\mathbf{g}_{t}(\vec{\mathbf{x}})\), provided some physically plausible assumptions. Assuming the system behaves as a thin elastic sheet, \(\vec{\mathbf{R}}_{t}(\vec{\mathbf{x}})\) is determined by minimizing an in-plane stretching energy that tries to match the lengths and angles between material points on the 3D surface with those prescribed by \(\mathbf{g}_{t}(\vec{\mathbf{x}})\)[31; 32]. If the time scale of elastic relaxation in the tissue is short compared to the time scale over which \(\mathbf{g}_{t}(\vec{\mathbf{x}})\) changes, the system will effectively always be in mechanical equilibrium. For our purposes, we assume that the physical geometry is always an isometric embedding of the target geometry, i.e. the tissue adopts a stress-free equilibrium configuration specified by \(\mathbf{g}_{t}(\vec{\mathbf{x}})\). 
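For instance, the induced metric of Eq. (1) can be evaluated for any explicit parameterization by finite differences; the spherical-cap embedding in the short NumPy sketch below is a hypothetical example, not one of the surfaces studied here.

```python
import numpy as np

def induced_metric(R, x1, x2, h=1e-6):
    """Approximate g_{ab} = (dR/dx^a) . (dR/dx^b) at (x1, x2) by central differences."""
    e1 = (R(x1 + h, x2) - R(x1 - h, x2)) / (2 * h)   # tangent vector along x^1
    e2 = (R(x1, x2 + h) - R(x1, x2 - h)) / (2 * h)   # tangent vector along x^2
    return np.array([[e1 @ e1, e1 @ e2],
                     [e2 @ e1, e2 @ e2]])

# Example: a spherical cap over the unit disk, R(x, y) = (x, y, sqrt(2 - x^2 - y^2)).
cap = lambda x, y: np.array([x, y, np.sqrt(2.0 - x**2 - y**2)])
g = induced_metric(cap, 0.3, 0.4)
area_element = np.sqrt(np.linalg.det(g))   # sqrt(det g), the local area factor
```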
Notice that such growth patterns will always be incompressible: new cells introduced into the tissue by cell proliferation increase the local area of tissue patches so that cell density remains constant along Lagrangian pathlines. The metric \(\mathbf{g}_{t}(\vec{\mathbf{x}})\) can therefore be used as a representation of the time course of both shape change and material flow along the dynamic surface. Strictly speaking, even for thin sheets, where the elastic energy is dominated by stretching and compression, additional out-of-plane bending considerations can become relevant to distinguish isometric deformations that do not change lengths and angles in the surface. A more detailed discussion of the consequences of these effects can be found in Appendix B.4. We next relate the time dependence of \(\mathbf{g}_{t}(\vec{\mathbf{x}})\) to growth. We start by defining a parameterization for the initial surface \(\mathcal{M}_{0}\). Cumulative changes to the system geometry will be measured relative to this initial configuration. For simplicity, we restrict our consideration to surfaces with disk-like topology (although virtually identical arguments hold true for both topological cylinders and topological spheres [35; 37]). The Riemann mapping theorem assures us that it is always possible to find a conformal mapping from the unit disk \(\mathbb{D}=\{\vec{\mathbf{x}}:||\vec{\mathbf{x}}||<1\}\) to the disk-like surface in 3D, i.e. \(\vec{\mathbf{R}}_{0}(\vec{\mathbf{x}}):\mathbb{D}\rightarrow\mathcal{M}_{0}\) [38; 39]. This mapping is unique if we constrain the motion of one material point in the bulk of the tissue and another material point on the tissue boundary. Without loss of generality, we identify these material parcels with the points \(\vec{\mathbf{x}}=(0,0)\) and \(\vec{\mathbf{x}}=(1,0)\), respectively. Identifying our domain of parameterization \(\mathcal{B}\) with the unit disk, this mapping endows the surface with a set of so-called _isothermal_ coordinates, in terms of which the metric tensor takes the following diagonal form \[\mathbf{g}_{0}(\vec{\mathbf{x}})=e^{\Omega_{0}(\vec{\mathbf{x}})}\,\mathbf{1}, \tag{2}\] where \(\mathbf{1}\) denotes the rank-2 identity tensor. Because the mapping is conformal, it preserves angles. The images of coordinate curves that intersect orthogonally in \(\mathcal{B}\) will also intersect orthogonally on the embedded surface. Geometrically, we can interpret the 'conformal factor', \(e^{\Omega_{0}(\mathbf{\vec{x}})}\), as a non-uniform dilation, describing the spatially modulated isotropic swelling that transforms material patches in the domain of parameterization into their corresponding configurations on the physical 3D surface. Throughout development, internal biological processes, such as cell growth and proliferation, induce changes to the system's intrinsic geometry. These changes are captured by the time derivative of the metric \(\dot{\mathbf{g}}\), which we call the _growth tensor_. In general, the growth tensor at some arbitrary time \(t\) during the developmental process can be written as \(\dot{\mathbf{g}}_{t}(\mathbf{\vec{x}})=\Gamma(\mathbf{\vec{x}},t)\,\mathbf{g}_{t}(\mathbf{\vec{x}})+\boldsymbol{\gamma}(\mathbf{\vec{x}},t)\), where \(\operatorname{Tr}[\mathbf{g}_{t}^{-1}\boldsymbol{\gamma}]=0\).
The quantity \[\Gamma(\vec{\mathbf{x}},t)=\frac{1}{2}\operatorname{Tr}\left[\mathbf{g}_{t}^{-1}\dot{\mathbf{g}}_{t}(\vec{\mathbf{x}})\right]=\frac{d}{dt}\!\log\!\left[\sqrt{g_{t}}\right] \tag{3}\] is the relative rate of area change of a local material patch on the 3D surface. Here we have used the shorthand \(g_{t}=\det[\mathbf{g}_{t}]\). The traceless component \(\boldsymbol{\gamma}\) is related to anisotropic growth, in a manner that we will shortly make explicit. The growth tensor can be thought of as introducing infinitesimal updates to the system's instantaneous intrinsic geometry, i.e. \(\mathbf{g}_{t+\delta t}(\vec{\mathbf{x}})=\mathbf{g}_{t}(\vec{\mathbf{x}})+\delta t\,\dot{\mathbf{g}}_{t}(\vec{\mathbf{x}})+\mathcal{O}(\delta t^{2})\). Each infinitesimal change in the intrinsic geometry will induce a corresponding evolution of the 3D surface. Over time, many infinitesimal updates may combine to produce dramatic 3D shape changes. Consider the growing surface at some later time \(\mathcal{M}_{t}\). For arbitrary growth patterns, the Lagrangian coordinates \(\vec{\mathbf{x}}\) will no longer necessarily constitute a conformal parameterization at time \(t\), despite being initially conformal by construction. However, just as we did with the initial surface, we can construct an instantaneously conformal parameterization of this new surface \(\vec{\mathbf{R}}_{t}(\vec{\mathbf{u}}):\mathbb{D}\to\mathcal{M}_{t}\), in terms of a new set of _time dependent_ virtual isothermal coordinates \(\vec{\mathbf{u}}\). For consistency, we demand that the material points at \(\vec{\mathbf{u}}=(0,0)\) and \(\vec{\mathbf{u}}=(1,0)\) correspond to the same points in the Lagrangian parameterization. In practice, the motion of these points in 3D may be determined by tracking individual cells or other distinguishable features. In general, however, all other material points will be assigned new labels in the virtual parameterization. The key insight is that the virtual isothermal coordinates \(\vec{\mathbf{u}}(\vec{\mathbf{x}},t)\) can be thought of as a transformation of \(\mathbb{D}\) into itself, but mapping material points labelled by \(\vec{\mathbf{x}}\) into new locations. Explicitly, the complete material mapping, describing the true motion of material points in 3D, is given by the composition \(\vec{\mathbf{R}}_{t}\circ\vec{\mathbf{u}}\circ\vec{\mathbf{R}}_{0}^{-1}:\mathcal{M}_{0}\to\mathcal{M}_{t}\), where \(\vec{\mathbf{u}}:\mathbb{D}\to\mathbb{D}\) is some sense-preserving diffeomorphic parameterization of \(\mathbb{D}\) fixing the points \(\vec{\mathbf{x}}=(0,0)\) and \(\vec{\mathbf{x}}=(1,0)\). All admissible reparameterizations \(\vec{\mathbf{u}}(\vec{\mathbf{x}},t)\) are examples of quasiconformal maps (see Appendix A). Quasiconformal mappings are generalizations of conformal mappings that allow for shear deformations of bounded distortion [33]. Their study was effectively inaugurated by Gauss, who used them to investigate the existence of isothermal coordinates on surface patches embedded in 3D [39]. Let us define a pair of complex variables \(z=x^{1}+i\,x^{2}\) and \(w=u^{1}+i\,u^{2}\). The map \(w(z):\mathbb{D}\to\mathbb{D}\) is considered quasiconformal if it is a solution to the _complex Beltrami equation_ \[\frac{\partial w}{\partial\bar{z}}=\mu(z,\bar{z})\,\frac{\partial w}{\partial z} \tag{4}\] where \(\bar{z}=x^{1}-i\,x^{2}\) and the _Beltrami coefficient_, \(\mu\), is subject to \(|\mu|_{\infty}<1\).
This last condition is necessary and sufficient to ensure that the Jacobian determinant of the transformation \(J=|\partial_{z}w|^{2}-|\partial_{\bar{z}}w|^{2}>0\), such that \(w(z)\) is a sense-preserving diffeomorphism. The geometric properties of quasiconformal transformations can be elucidated by comparison with conformal mappings. Locally, around a point, a conformal transformation maps infinitesimal circles into similar circles. Therefore, a conformal map does not introduce any preferred local orientation and is necessarily isotropic. In contrast, a quasiconformal transformation maps infinitesimal circles into similar ellipses. The Beltrami coefficient encodes both the magnitude and directionality of this distortion and, thus, also the anisotropy of the transformation. This geometric intuition is illustrated in the inset of Fig. 1(a). Notice that when \(\mu=0\), the Beltrami equation reduces to the Cauchy-Riemann equation, \(\partial_{\bar{z}}w=0\), the necessary and sufficient condition for the conformality of a complex map. The Measurable Riemann Mapping Theorem assures us that there is a unique solution \(w(z)\) to the complex Beltrami equation (Eq. (4)), fixing the points \(z=0\) and \(z=1\), for any \(\mu\) with \(|\mu|_{\infty}<1\) [40]. More information about quasiconformal mappings can be found in Appendix A. Any arbitrary time dependent arrangement of material points on a growing 3D surface can therefore be captured by the composition \(\vec{\mathbf{R}}_{t}\circ w(z,t):\mathbb{D}\to\mathcal{M}_{t}\). All of the anisotropy in the parameterization is contained within the intermediate quasiconformal transformation \(w\) and is described by the associated Beltrami coefficient \(\mu=\partial_{\bar{z}}w/\partial_{z}w\). This construction is illustrated in Fig. 2(a). In terms of these quantities, the complete Lagrangian metric tensor is given by \[\mathbf{g}_{t}(z)=e^{\Omega\circ w}|\partial_{z}w|^{2}\begin{pmatrix}|1+\mu|^{2}&-i(\mu-\bar{\mu})\\ -i(\mu-\bar{\mu})&|1-\mu|^{2}\end{pmatrix}, \tag{5}\] expressed in the real basis of the Lagrangian coordinates \(\vec{\mathbf{x}}\). Notice that, given a 3D surface and the motion of a single point on the boundary and a single point in the bulk, each \(\mu\) corresponds to a unique parameterization of that surface thanks to the uniqueness properties of both \(\vec{\mathbf{R}}_{t}\) and \(w(t)\) (see Appendix A). The next crucial step is to explicitly determine how growth induces changes to this quasiconformal geometry. At an arbitrary time \(t\) during the developmental process, the metric in Eq. (5) expressed in the virtual isothermal coordinates has the simple diagonal form \(\mathbf{g}_{t}(w)=e^{\Omega(w,t)}\,\mathbf{1}\). The most general possible growth tensor in these coordinates has the following form \[\begin{split}&\dot{\mathbf{g}}_{t}(w)=\Gamma(w,t)\,\mathbf{g}_{t}(w)+\boldsymbol{\gamma}(w,t)\\ &=\Gamma(w,t)\,e^{\Omega(w,t)}\,\mathbf{1}\\ &+2\,e^{\Omega(w,t)}\,|\gamma(w,t)|\,\begin{pmatrix}\cos\vartheta(w,t)&\sin\vartheta(w,t)\\ \sin\vartheta(w,t)&-\cos\vartheta(w,t)\end{pmatrix},\end{split} \tag{6}\] expressed in the real basis of the intermediate coordinates \(\vec{\mathbf{u}}\). Here, \(\gamma(w,t)=|\gamma(w,t)|\,e^{i\vartheta(w,t)}\) is a complex field describing the magnitude and orientation of infinitesimal anisotropic updates to the intrinsic geometry. Clearly, for \(\gamma=0\), \(\Gamma\) simply modulates the conformal factor, whereas non-zero \(\gamma\) induces local shearing.
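For reference, Eq. (5) is straightforward to evaluate pointwise once \(\Omega\circ w\), \(\partial_{z}w\), and \(\mu\) are known. The sketch below is an illustrative fragment only, with made-up numerical inputs rather than quantities taken from the data analyzed later in the paper.

```python
import numpy as np

def lagrangian_metric(Omega_w, dzw, mu):
    """Assemble Eq. (5): the metric g_t(z) in the real Lagrangian basis.

    Omega_w : conformal-factor exponent evaluated at w(z), i.e. (Omega o w)(z).
    dzw     : complex derivative dw/dz at z.
    mu      : Beltrami coefficient (dw/dzbar)/(dw/dz) at z.
    """
    pref = np.exp(Omega_w) * abs(dzw) ** 2
    off = -1j * (mu - np.conj(mu))            # equals 2 Im(mu), purely real
    g = pref * np.array([[abs(1 + mu) ** 2, off],
                         [off, abs(1 - mu) ** 2]])
    return g.real                             # imaginary parts vanish identically

# Hypothetical inputs: mild anisotropy oriented along the x^1 axis.
g = lagrangian_metric(Omega_w=0.2, dzw=1.0 + 0.0j, mu=0.1 + 0.0j)
print(g)                       # diagonal here, since Im(mu) = 0
print(np.linalg.eigvalsh(g))   # both eigenvalues positive: a valid metric
```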
The infinitesimal update to the geometry shown in Eq. (6) will induce a corresponding evolution of the intermediate isothermal coordinates \[w(z,t+\delta t)=w(z,t)+h_{t}\circ w(z,t)\,\delta t+\mathcal{O}(\delta t^{2}). \tag{7}\] The infinitesimal mapping \(h_{t}\) can be thought of as the instantaneous 'velocity' of material points in the isothermal coordinates. Simultaneously, there will also be a change in the overall 3D shape of the surface captured by the conformal factor. The total relative rate of area change on the 3D surface resulting from these updates can be expressed in the intermediate coordinates as \[\begin{split}&\Gamma(w,t)=\frac{d\,\Omega(w,t)}{dt}+2\,\text{Re}\left[\partial_{w}h_{t}(w,t)\right]\\ &=\partial_{t}\Omega(w,t)+2\,\text{Re}\left[h_{t}(w,t)\,\partial_{w}\Omega(w,t)\right]+2\,\text{Re}\left[\partial_{w}h_{t}(w,t)\right].\end{split} \tag{8}\] We see that \(\Gamma\) is defined by contributions from both the change of the overall surface shape and the redistribution of points on the surface. The infinitesimal anisotropy of the mapping is simply \(\partial_{\bar{w}}w(t+\delta t)/\partial_{w}w(t+\delta t)\approx\partial_{\bar{w}}h_{t}=\gamma(w,t)\). In order to complete the description, we must relate the infinitesimal geometric updates in the intermediate frame to their counterparts in the Lagrangian coordinates. Application of the chain rule tells us that the time rate of change of the cumulative anisotropy is given by \[\dot{\mu}(z,t)=(1-|\mu(z,t)|^{2})(\gamma\circ w(z,t))\,e^{i\psi}, \tag{9}\] where \(e^{i\psi}=\overline{\partial_{z}w}/\partial_{z}w\). According to Eq. (9), the same \(\gamma\) will produce diminished transformations of the cumulative geometry in regions that are already highly anisotropic. When expressed directly in Lagrangian coordinates, the infinitesimal mapping \(h_{t}[w,\dot{\mu}](z,t)\) becomes a functional of the intermediate parameterization \(w\) and \(\dot{\mu}\). The explicit construction for \(h_{t}[w,\dot{\mu}]\) is called the _Beltrami holomorphic flow_ (BHF) [35] and is discussed in detail in Appendix A.2. Finally, direct calculation tells us that \[\begin{split}&(\Gamma\circ w)(z,t)=\frac{d}{dt}(\Omega\circ w)\\ &\qquad+2\,\text{Re}\left[\frac{(\partial_{z}h_{t}[w,\dot{\mu}])\overline{\partial_{z}w}}{|\partial_{z}w|^{2}}\right]+2\,\text{Re}\left[\frac{\dot{\mu}\,\bar{\mu}}{1-|\mu|^{2}}\right],\end{split} \tag{10}\] in which form it is explicitly clear that local area change on the 3D surface in Lagrangian material patches depends on cumulative anisotropy. We are now equipped with all of the necessary machinery to quantify dynamic growth patterns in thin tissues. At time \(t=0\), we can construct a conformal Lagrangian parameterization of the initial surface. A short time later, internal growth processes update the intrinsic anisotropy of the system according to \(\dot{\mu}\). This change induces a corresponding flow of the quasiconformal mapping captured by the quantity \(h_{t}[w,\dot{\mu}]\). Simultaneously, these processes also induce a change in the 3D area of material patches, which is captured by the quantity \(\Gamma\). We can calculate a corresponding update to \(\mathbf{g}_{t}\) in terms of \(\dot{\mu}\) and \(\Gamma\). This geometry can then be embedded into 3D to find the new configuration of the system. We can string these infinitesimal transformations together sequentially to construct any arbitrary growth trajectory. This construction is illustrated in Fig. 2.
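The sequential construction described above amounts to a simple forward-Euler bookkeeping loop. The sketch below is purely schematic: the callables `bhf_velocity` and `update_conformal_factor` are assumed to be supplied by the user (implementing the BHF of Appendix A.2 and Eq. (8), respectively), the phase factor and the composition with \(w\) in Eq. (9) are omitted for brevity, and no claim is made that this mirrors the authors' actual implementation.

```python
import numpy as np

def integrate_growth(mu0, Omega0, gamma, Gamma, bhf_velocity,
                     update_conformal_factor, dt, n_steps):
    """Forward-Euler integration of a growth trajectory (schematic sketch).

    mu0, Omega0 : arrays of initial Beltrami coefficients / conformal factors
                  sampled at the Lagrangian points.
    gamma(t), Gamma(t) : callables returning the instantaneous anisotropic and
                  areal growth fields at those points.
    bhf_velocity(mu, mu_dot) : assumed user-supplied BHF evaluation returning
                  the infinitesimal map h_t at the sample points.
    update_conformal_factor(Omega, Gamma_t, h) : assumed user-supplied update
                  implementing Eq. (8) / Eq. (10).
    """
    mu = np.array(mu0, dtype=complex)
    Omega = np.array(Omega0, dtype=float)
    history = [(mu.copy(), Omega.copy())]
    for k in range(n_steps):
        t = k * dt
        # Eq. (9), with the phase factor and composition with w omitted for brevity.
        mu_dot = (1.0 - np.abs(mu) ** 2) * gamma(t)
        h = bhf_velocity(mu, mu_dot)                   # infinitesimal reparameterization
        Omega = update_conformal_factor(Omega, Gamma(t), h)
        mu = mu + dt * mu_dot                          # cumulative anisotropy update
        history.append((mu.copy(), Omega.copy()))
    return history
```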
Therefore, if we know the time course of cumulative anisotropy \(\mu\) and the 3D area \(A\) of each material patch, we can uniquely construct the associated metric tensor, which serves as a proxy for both shape and flow. This decomposition enables us to assign a simple, concrete structure to the abstract morphospace alluded to in the introduction. Morphogenetic trajectories of thin tissues are simply time dependent profiles of anisotropy and areal growth (Fig. 1(a)).

## III Growth pattern selection as an optimization problem

As we have already stated, there is an infinite degeneracy of possible growth trajectories linking an initial configuration and a final shape. How does nature settle on a specific developmental program? To explore the possibility that natural selection acts as constrained optimization, we can frame the growth pattern selection as a variational problem. Given an initial configuration \(\mathcal{M}_{0}\) (interpreted as a fixed initial shape _and_ a fixed initial parameterization) and a final shape \(\mathcal{M}_{T}\) (a fixed final shape, but _not_ a fixed final parameterization), we define the optimal growth trajectory \(\dot{\mathbf{g}}_{t}^{*}(\vec{\mathbf{x}})=\arg\min_{\dot{\mathbf{g}}_{t}(\vec{\mathbf{x}})}\mathcal{S}\) as the minimizer of \[\begin{split}&\mathcal{S}=\int_{0}^{T}dt\,\int_{\mathcal{B}}d^{2}\vec{\mathbf{x}}\,\sqrt{g_{t}}\,\mathcal{L}\,[\dot{\mathbf{g}}_{t}(\vec{\mathbf{x}});\mathcal{S}(t=0)]\\ &\qquad\qquad\qquad\qquad+\lambda\,D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}].\end{split} \tag{11}\] In the language of optimal control theory [41], the first term is the 'cost functional' of the growth pattern calculated in terms of a running cost density \(\mathcal{L}\). The second term is a Lagrange multiplier enforcing the constraint that the metric tensor at the final time \(\mathbf{g}_{T}(\vec{\mathbf{x}})\) must be a valid induced metric of the final surface \(\mathcal{M}_{T}\). There is also an additional implicit constraint that \(\mathbf{g}_{t}\) remain a valid metric for all \(t\), i.e. that \(\mathbf{g}_{t}\) is a symmetric, positive-definite type-(0,2) tensor. This shape constraint enforced by the Lagrange multiplier demands some additional explanation. We say that two metrics \(\mathbf{a}\) and \(\mathbf{g}\) are shape equivalent metrics (SEMs) if they are both induced metrics of the _same_ surface \(\mathcal{M}_{T}\), but corresponding to _different_ coordinate parameterizations. For notational convenience, we denote by \(\Sigma_{\mathcal{M}_{T}}\) the space of all SEMs for the surface \(\mathcal{M}_{T}\). Physically, given a shared Lagrangian parameterization \(\vec{\mathbf{x}}\) defined using \(\mathcal{M}_{0}\), we interpret these parameterizations as resulting from different material flows encapsulated by contrasting intermediate mappings \(\vec{\mathbf{u}}(\vec{\mathbf{x}},T)\). If we were to fix the complete configuration of the final surface, rather than just its shape, we would over-constrain the entire growth trajectory since intermediate configurations are linked to the final configuration via the machinery discussed in the previous section. For instance, we would not be able to objectively investigate the conditions under which growth pattern selection criteria give rise to anisotropy, since anisotropy (or lack thereof) would be baked into the final configuration by construction.
Therefore, in order to generate an unbiased answer to the question of optimal growth, it is critical that we allow \(\mathbf{g}_{T}\) to explore the entire space \(\Sigma_{\mathcal{M}_{T}}\). Anything less results in overly restrictive demands on the material flows during growth. Determining whether or not \(\mathbf{g}_{T}\in\Sigma_{\mathcal{M}_{T}}\) therefore requires searching the space of possible parameterizations of \(\mathcal{M}_{T}\). In general, the complicated nature of this functional space makes problems of this type quite challenging. Recall, however, that each diffeomorphic parameterization of a disk-like surface corresponds to a unique Beltrami coefficient, so long as we constrain the motion of two material points. We can therefore simply explore the space of Beltrami coefficients, rather than directly searching the space of surface diffeomorphisms. This amounts to a much simpler task, since there are no restrictions that Beltrami coefficients must be one-to-one, onto, or satisfy any constraints on their Jacobians [37]. Explicitly, we have \[D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]=\min_{\mu}\,\frac{1}{2}\,\int_{\mathbb{D}}||\mathbf{g}_{T}-\mathbf{a}[\mu]||^{2}\,\sqrt{g_{T}}\,d^{2}\vec{\mathbf{x}}, \tag{12}\] where \(||\cdot||\) denotes the Frobenius norm and \(\mathbf{a}[\mu]\in\Sigma_{\mathcal{M}_{T}}\) is the induced metric of \(\mathcal{M}_{T}\) generated by \(\mu\). The functional \(D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]\) will vanish if and only if \(\mathbf{g}_{T}\in\Sigma_{\mathcal{M}_{T}}\). Formulating the nontrivial shape constraint in this way transforms it into a variational sub-problem that can be solved by using the BHF to calculate the variation in \(D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]\) under the corresponding variation in \(\mu\). In practice, this task can be addressed using standard gradient-based optimization methods [42]. We now return to the specific form of the cost functional defining optimal growth trajectories. We propose that optimal growth patterns minimize spatiotemporal variation in growth rates and anisotropy, i.e. \[\begin{split}&\mathcal{S}=\lambda\,D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]\quad+\\ &\int_{0}^{T}dt\,\int_{\mathcal{B}}d^{2}\vec{\mathbf{x}}\,\sqrt{g_{t}}\,\left[c_{1}|\nabla\Gamma|^{2}+c_{2}|\nabla\gamma|^{2}+c_{3}\dot{\Gamma}^{2}+c_{4}|\dot{\gamma}|^{2}\right].\end{split} \tag{13}\]

Figure 2: Quasiconformal parameterization of dynamic growth patterns. (a) An arbitrary parameterization of a topological disk in 3D can be expressed as the composition of a quasiconformal automorphism of the unit disk \(\vec{\mathbf{u}}:\mathbb{D}\to\mathbb{D}\), fixing the points \(\vec{\mathbf{x}}=(0,0)\) and \(\vec{\mathbf{x}}=(1,0)\), followed by a conformal mapping into 3D, \(\vec{\mathbf{R}}:\mathbb{D}\to\mathbb{R}^{3}\). The parameterization is unique if both a Beltrami coefficient \(\mu\) and the motion of two material points are specified. (b) Complicated time-dependent growth patterns for 3D surfaces can be built by stringing together simple infinitesimal updates to the system’s intrinsic geometry. Beginning with a conformal parameterization of the initial configuration, new configurations are generated by calculating the contributions of changes in the system’s intrinsic anisotropy and target areas. A time course of \(\dot{\mu}\) and \(\Gamma\) is sufficient to fully reconstruct arbitrary growth patterns.
We use \(\nabla\) to denote the covariant derivative of a quantity with respect to the metric \(\mathbf{g}_{t}\) in terms of the fixed 2D Lagrangian coordinates \(\vec{\mathbf{x}}\). For simplicity and expediency, we focus on the special case of temporally _constant growth_, corresponding to the limit of high penalties \((c_{3},c_{4})\) for temporal variation of \(\Gamma\) and \(\gamma\). Thus \(\Gamma\) and \(\gamma\) may vary spatially, but are held constant for all time, i.e. \(\dot{\Gamma}=\dot{\gamma}=0\). Notice that constant growth implies that the area of each material patch grows exponentially with a time-independent rate constant. We can clearly see this by calculating the time-dependent area of a material element \(dA(\vec{\mathbf{x}},t)=\sqrt{g_{t}}\,d^{2}\vec{\mathbf{x}}\). From Eq. (3), we have \[\sqrt{g_{t}}=e^{\Gamma\,t}\,\sqrt{g_{0}}\implies dA(t)=e^{\Gamma\,t}dA(t=0), \tag{14}\] where we have exploited the fact that \(\Gamma\) is constant in time. The final simplification is to define the covariant derivatives and area elements with respect to the metric at the initial time \(\mathbf{g}_{0}\). This choice completely removes any time dependence from the cost functional. The modified constant growth functional is given by \[\begin{split}&\mathcal{S}=\lambda\,D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]\quad+\\ & T\int_{\mathcal{B}}d^{2}\vec{\mathbf{x}}\,\sqrt{g_{0}}\ \left[c_{1}|\nabla\Gamma|^{2}+c_{2}|\nabla\gamma|^{2}\right].\end{split} \tag{15}\] Essentially, we assume that the developmental procedure is programmed into the tissue at some initial time and optimization of our cost functional selects the plan that minimizes the complexity needed to specify this primordial growth field. Solving this constrained problem yields a time course of metric tensors in terms of \(\Gamma\) and \(\gamma\), which we can then dynamically embed in \(\mathbb{R}^{3}\). The final result is a complete prediction for both the coarse-grained shape and flow of a growing tissue over time.

## IV Results

We now apply our formalism to compute the optimal constant growth patterns for a variety of synthetic and natural examples. To that end, we generate numerical solutions using a combination of discrete differential geometry processing and nonlinear optimization methods. The target geometries produced by our numerical optimization machinery can be embedded sequentially in 3D to reconstruct a full time course of shape and flow in growing surfaces. Details of the optimization methods, the discretization of the BHF, and the methods used to embed optimal intrinsic geometries in 3D can be found in Appendix B. In all of the experiments that follow, we set the constants \(c_{1}=c_{2}=1\). We begin by calculating optimal growth patterns for a set of simple synthetic target shapes grown from an initially flat disk configuration (Fig. 3). In particular, we solve the optimal growth problem for a hemisphere (Fig. 3(a-c)) and an elliptic paraboloid (Fig. 3(d-f)) using a conformal parameterization as our initial guess. For all systems studied, growth rates in the optimal configuration are uniformized at the cost of introducing anisotropy. These results suggest that growth rate uniformization may be a generic mechanism underlying the presence of anisotropy in observed developmental growth patterns. Our formalism enables us to decode the contributions of areal expansion and anisotropic deformation towards determining 3D shape. Fig. 3(c) shows the cumulative anisotropy of the optimal constant growth pattern transforming a flat disk into a hemispherical shell.
Displaying the local orientation along which tissue parcels extend due to anisotropic deformation results in a nematic texture supporting topological defects (Fig. 3(g)). Such textures have recently been studied in the context of epithelial morphogenesis coupled to active nematic biomolecular components [43], where defects in nematic textures have been identified as organizing centers of curvature and mass accretion leading to shape change [44; 45; 46]. The optimal growth texture is characterized by a \(+1\) defect with phase \(\psi=0\) at the pole of the sphere. Fig. 3(h) displays a modified texture with the same total integrated anisotropy, i.e. \(\int_{\Omega}|\mu(\vec{\mathbf{x}})|^{2}d^{2}\vec{\mathbf{x}}\), but with an alternative texture constructed by placing two \(+1\) defects with phase \(\psi=\pi/2\) at opposite poles on the disk boundary. Simulating a growth process using this modified anisotropy texture, but keeping the areal growth rates exactly the same as in Fig. 3(b), produces an entirely new final shape (Fig. 3(i)). This proof-of-principle demonstrates how our methods can be used as a platform for growth pattern design in terms of nematic textures.

Figure 3: **Growth rates are uniformized by introducing anisotropy.** Optimized constant growth patterns for a set of synthetic surfaces. The initial configuration for all surfaces is the flat unit disk. The circular texture on the surfaces represents the deformation of material patches under the flow. (a) A conformal growth pattern linking the unit disk and the hemispherical surface. Growth occurs primarily at the apex of the dome. (b) The optimized constant growth rates for the same final shape. Growth rates are heavily uniformized relative to the conformal growth pattern. (c) The absolute value of the Beltrami coefficient for the optimized constant growth pattern. (d) A conformal growth pattern linking the unit disk and an elliptic paraboloid. Growth occurs primarily at the poles. (e) The optimized constant growth rates for the same final shape. Growth rates are uniformized, although not as completely as the case of the spherical surface. (f) The absolute value of the Beltrami coefficient for the optimized constant growth pattern. (g) The absolute value of the Beltrami coefficient for the optimized constant growth pattern transforming a flat disk into a hemispherical surface and the corresponding nematic field illustrating the orientation along which material patches extend. The texture contains a +1 topological defect with phase \(\psi=0\). (h) A synthetic Beltrami coefficient with the same total integrated anisotropy constructed by placing two +1 defects with phase \(\psi=\pi/2\) at opposite poles of the disk boundary. (i) A new surface generated using the exact same growth rates as the surface in (b), but with the anisotropy texture shown in (h).

Next, we applied our machinery directly to the experimentally observed pattern of limb growth in the crustacean _Parhyale hawaiensis_ (Fig. 4). _Parhyale_ are direct developers that grow externally visible appendages and deep surface folds during embryogenesis, when live imaging is readily possible [14, 36]. They are a uniquely suitable system for _in vivo_ investigation of the transformation of the 2D limb primordium into its 3D shape, in real time and across multiple length scales [47]. Recent work characterized appendage growth by tracking individual cells in 3D [5].
Analysis of individual cell behaviors is informative, but, on its own, produces limited insights about the dynamics of tissue-scale deformations. Using a combination of computer vision and discrete differential geometry, we extracted the continuous, quasiconformal growth pattern generating the limb _in vivo_. We were then able to apply our optimization approach to predict the growth pattern linking the observed initial configuration and the final shape of the limb. The optimal growth pattern successfully reproduces the qualitative large scale features of biological limb morphogenesis. In both the measured and calculated growth patterns, areal growth occurs primarily at the distal tip of the nascent appendage. The anisotropic deformations that extend the growing limb along its proximodistal axis are also similar. For both the measured and calculated growth patterns, the tissue stretches along the proximodistal axis away from the animal's body. The most obvious quantitative discrepancy is that the optimal growth pattern under-predicts the maximum areal growth rates at the distal tip of the limb. Notice that the _Parhyale_ limb is comprised of relatively few cells (\(\sim 15\) cell lengths long at 109.1h AEL). The observed discrepancy may indicate that the optimal growth principle should be modified to account for the discrete cellular structure of such tissues, rather than naively optimizing for smooth gradients of proliferation. Finally, we made quantitative comparisons of different types of dynamic growth patterns to the observed developmental trajectory (Fig. 4(j)). In particular, we compared the optimal constant growth pattern to a synthetic growth pattern that linearly interpolates between the initial and final optimal metric tensor, i.e. \(\mathbf{\tilde{g}}_{t}=\left(1-t/T\right)\mathbf{g}_{0}+\left(t/T\right)\mathbf{g}_{T}\) for \(t\in[0,T]\). We see that the constant growth pattern produces significantly more accurate results at intermediate times than the linearly interpolated geometry.

Figure 4: **Optimal growth predictions capture features of experimentally quantified limb morphogenesis.** (a)-(c) A growing appendage in a _Parhyale hawaiensis_ embryo with a fluorescent nuclear marker captured using light-sheet microscopy. Time is measured in hours after egg lay (AEL). Cells corresponding to the growing limb are highlighted in yellow. (d)-(f) The surface of the growing limb extracted using tissue cartography. (g) Initial condition for the coarse-grained growth pattern. Circular patches are schematic representations of clonal regions. (h) The measured areal growth rates during appendage outgrowth. (i) The areal growth rates of the optimized constant growth pattern generating the final appendage shape. (j) Instantaneous integrated metric error between two different growth patterns. The blue curve is the prediction error for the constant growth pattern linking the initial configuration and the optimized final configuration holding \(\dot{\mu}\) and \(\Gamma\) constant. The orange curve links the same initial and final configurations, but does so by linearly interpolating the initial and final target geometries, i.e. \(\mathbf{\tilde{g}}_{t}=\left(1-t/T\right)\mathbf{g}_{0}+\left(t/T\right)\mathbf{g}_{T}\) for \(t\in[0,T]\). (k) The measured anisotropy during appendage outgrowth.
(l) The anisotropy of the optimized constant growth pattern generating the final appendage shape.

## V Discussion

This work establishes a theoretical and computational approach to the study of 3D shape formation by growing 2D (epithelial) sheets. We have framed the problem of morphogenetic growth pattern selection and defined a general action-like variational principle that allows one to determine spatiotemporal growth trajectories that uniformize growth, subject to the constraint of achieving the desired final shape. Our approach enables quantitative comparisons between different growth patterns, opening the door to a predictive understanding of how morphogenetic programs decode genetic information into shape and form. Our numerical implementation of the variational principle has demonstrated, quite generally, that growth rates may be uniformized at the cost of introducing anisotropy. Furthermore, already the simplest form of the optimization cost functional was shown to reproduce important qualitative features of experimentally measured growth patterns in arthropod limb morphogenesis. These results suggest that growth rate uniformization may serve as a generic mechanism contributing to the prevalence of anisotropy in observed developmental programs. Crucially, our optimization principle is both modular and adaptable. In the simple form used in this work, the optimization cost functional suppresses temporal variation so that the (in general anisotropic) growth rate tensor of a small patch of cells is directly related to the growth rate tensor of its clonal progenitor patch. Optimization in this case is uniformizing the pattern of growth "prescribed" over the initial primordium, thus minimizing the information supplied at the initial time. A compelling generalization of this simplest optimization condition would be the inclusion of mechanical and geometric feedback [48], which would effectively allow additional factors, e.g. local curvature or stress, to control or modulate morphogenetic growth. In the plant shoot apical meristem, for instance, it is known that anisotropic stresses sustained within the outermost epidermal layer adaptively orient stress-bearing microtubules that sculpt the 3D shape of the developing organ by modulating the directions in which cells grow and divide [7]. In the developing _Drosophila_ wing, essentially uniform growth rates are maintained through a negative mechanical feedback loop where reduced tension along cell junctions in fast growing clones leads to down regulation of growth [49]. By relaxing the constraint that the physical geometry be an isometric embedding of the target geometry, we could directly explore how dynamical mechanical fields, such as in-plane stress, may modulate the feasibility of particular growth patterns with respect to optimality. Another possible generalization would be the explicit inclusion of morphogen fields. Our machinery is well suited to analyze a variety of geometrically distinct classes of morphogens, including scalar fields (e.g. molecular concentration), vector fields (e.g. concentration gradients), and nematic tensor fields (e.g. contractile actomyosin networks). Essentially arbitrary interactions between mechanics, geometry, and signaling could be incorporated into novel optimality criteria. These explorations would be useful in establishing a link between the pattern formation frequently observed in morphogen signaling and the 3D shapes of organs and appendages. It would also be interesting to investigate sheets of finite thickness, where additional mechanical control over 3D shape can be provided by direct determination of local bending.
Functionally, this could be executed in our methodology by expanding our simple description of intrinsic geometry to include a time dependent target tissue curvature. This target curvature may be determined biologically through differential expansion/contraction of the apical and basal surfaces of polarized epithelia, such as the apical constriction of cells that drives ventral furrow formation in _Drosophila_[50]. Similar mechanisms are available in the case of a tissue bilayer [23], i.e. the differential growth of the adaxial and abaxial sides of leaves [51]. Finally, the portability of our methodology invites applications beyond biological morphogenesis to the design of synthetic surface structures and bioinspired materials. For simplicity, in this work, we did not include an explicit description of individual cells, choosing instead to subsume these behaviors into our continuum scale description of tissues as a whole. The discrete nature of cells is, however, vital to morphogenesis, insofar as it allows for modes of inter-cell communication and interaction inaccessible to featureless continuous media. The size of cells also sets a reference scale below which it is impossible or irrelevant to define spatially varying fields of growth parameters. Many growing tissues are comprised of relatively few cells, but are still amenable to the type of analysis explored in this work (the _Parhyale_ limb we analyzed is only \(\sim 15\) cell lengths long at 109.1h AEL). Following recent work [52], the most straightforward way to explicitly include cell behaviors in our formalism would be to retain a continuum description, with material points corresponding to subcellular parcels of tissue, and impose cellular topology on top of this description as a separate, time dependent field. New optimality criteria could be generated that explicitly depend on cellular topology. This inclusion would enable the systematic exploration of multi-scale behaviors underlying thin tissue morphogenesis, while retaining the explanatory power of our geometric formalism.

###### Acknowledgements.

This work was supported by the NSF through award PHY 2210612. DJC acknowledges support from the NSF through PHY 2013131. We thank Sebastian Streichan for stimulating discussions and critical feedback.

## Appendix A A Primer on Quasiconformal Maps

### Geometric Properties of Quasiconformal Maps

This section contains a short mathematical introduction to planar quasiconformal mappings. Much of this exposition is drawn directly from Ahlfors' extremely lucid monograph [33] and a more comprehensive presentation can be found therein. The structure of a quasiconformal map can be deduced from the familiar notion of a conformal map by allowing the transformation to include shear deformations. Let \(w=f(z):\mathbb{C}\to\mathbb{C}\) be a \(C^{1}\) homeomorphism from one subregion of \(\mathbb{C}\) to another given in terms of the complex variables \(z=x^{1}+ix^{2}\) and \(w=u^{1}+iu^{2}\). The Jacobian determinant of this transformation is \[J=|\partial_{z}w|^{2}-|\partial_{\overline{z}}w|^{2}, \tag{30}\] where we denote the complex conjugate \(\overline{z}=x^{1}-ix^{2}\) and the complex derivative operators are defined via the chain rule as \[\frac{\partial}{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial x^{1}}-i\frac{\partial}{\partial x^{2}}\right),\qquad\frac{\partial}{\partial\overline{z}}=\frac{1}{2}\left(\frac{\partial}{\partial x^{1}}+i\frac{\partial}{\partial x^{2}}\right). \tag{31}\]
We limit ourselves to the case of sense-preserving diffeomorphisms, for which \(J\) is strictly positive, i.e. \(|\partial_{z}w|>|\partial_{\overline{z}}w|\). Locally, around a point \(z_{0}\), this diffeomorphism induces a linear mapping of the differentials \[du^{1}=\partial_{x^{1}}u^{1}\,dx^{1}+\partial_{x^{2}}u^{1}\,dx^{2} \tag{32}\] \[du^{2}=\partial_{x^{1}}u^{2}\,dx^{1}+\partial_{x^{2}}u^{2}\,dx^{2},\] or in terms of the complex notation \[dw=\partial_{z}w\,dz+\partial_{\overline{z}}w\,d\overline{z}. \tag{33}\] Geometrically, this is a local affine transformation bringing infinitesimal circles centered about \(z_{0}\) into similar ellipses. We define the linear distortion of the mapping, \(D_{w}\), as the ratio of the major axis of this image ellipse to its minor axis. From Equation (33), it is immediately apparent that \[\left(|\partial_{z}w|-|\partial_{\overline{z}}w|\right)|dz|\leq|dw|\leq\left(|\partial_{z}w|+|\partial_{\overline{z}}w|\right)|dz| \tag{34}\] where both limits can be achieved. The linear distortion is then \[D_{w}=\frac{|\partial_{z}w|+|\partial_{\overline{z}}w|}{|\partial_{z}w|-|\partial_{\overline{z}}w|}=\frac{1+|\mu|}{1-|\mu|} \tag{35}\] where we have defined the _Beltrami coefficient_ or _complex dilatation_ \(\mu=\partial_{\overline{z}}w/\partial_{z}w\). In terms of the Beltrami coefficient, the Jacobian determinant is \(J=|\partial_{z}w|^{2}\left(1-|\mu|^{2}\right)\). Therefore, the strict positivity of \(J\) required for a sense-preserving diffeomorphism implies that \(|\mu|<1\). As previously mentioned, a quasiconformal transformation is a mapping of bounded distortion. In particular, a map is called \(K\)-quasiconformal if \(D_{w}\leq K\). It follows that a conformal transformation can be considered as a 1-quasiconformal map. The local distortion of conformality induced by a quasiconformal map is entirely encoded in the parameter \(\mu\). If we consider an infinitesimal circle centered at a point \(z_{0}\), its image ellipse has a major axis of length \(1+|\mu|\) and a minor axis of length \(1-|\mu|\). Furthermore, the angle of maximal distortion in the \(z\)-plane is \(\alpha=\arg(\mu)/2\) and the angle of minimal distortion is \(\alpha+\pi/2\). In other words, \(\mu\) measures where, and by how much, the map fails to preserve local geometry. This geometric intuition, displayed graphically in the inset of Fig. 1(a), motivates the use of \(\mu\) as a measure of the anisotropy associated with the deformation of an elastic body. We complement this geometric derivation by stating the analytic definition of a quasiconformal map. A map \(f\) is considered quasiconformal if it is a solution to the _complex Beltrami equation_ \[\frac{\partial f}{\partial\overline{z}}=\mu(z,\overline{z})\frac{\partial f}{\partial z} \tag{36}\] given a Lebesgue-measurable function \(\mu\) with \(|\mu|_{\infty}<1\). This latter constraint is sufficient to ensure that \(J>0\) everywhere and hence, by the inverse function theorem, that \(f\) is a sense-preserving diffeomorphism. Notice that when \(\mu=0\), the Beltrami equation reduces to the Cauchy-Riemann equation, \(\partial_{\overline{z}}f=0\), the necessary and sufficient condition for the analyticity of a complex map [38]. Confining our attention to topological disks, the Measurable Riemann Mapping Theorem assures us that, for each admissible Beltrami coefficient \(\mu\) defined over \(\mathbb{D}\), there is a unique solution \(f:\mathbb{D}\to\mathbb{D}\) to Eq. (36) that fixes the points \(f(0)=0\) and \(f(1)=1\) [40].
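The analytic definition above suggests a direct numerical check: given samples of a planar map \(w(z)\), the Wirtinger derivatives, and hence \(\mu\), can be approximated by finite differences. The sketch below is illustrative only; the quadratic test map is a hypothetical example rather than one drawn from the paper.

```python
import numpy as np

def beltrami_coefficient(w, z, h=1e-5):
    """Estimate mu = (dw/dzbar)/(dw/dz) for a map w: C -> C at the point z."""
    dwdx = (w(z + h) - w(z - h)) / (2 * h)            # d w / d x^1
    dwdy = (w(z + 1j * h) - w(z - 1j * h)) / (2 * h)  # d w / d x^2
    dz_w = 0.5 * (dwdx - 1j * dwdy)    # Wirtinger derivative dw/dz
    dzb_w = 0.5 * (dwdx + 1j * dwdy)   # Wirtinger derivative dw/dzbar
    return dzb_w / dz_w

# Hypothetical test map: w(z) = z + 0.2 * conj(z)^2, quasiconformal near the origin.
w = lambda z: z + 0.2 * np.conj(z) ** 2
mu = beltrami_coefficient(w, 0.3 + 0.2j)
D = (1 + abs(mu)) / (1 - abs(mu))      # linear distortion of Eq. (35)
print(mu, D)                           # |mu| < 1, so the map is sense-preserving here
```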
To explore the consequences of this theorem, consider two disk-like surfaces \(\mathcal{M}_{1},\mathcal{M}_{2}\subset\mathbb{R}^{3}\). We wish to construct an arbitrary sense-preserving diffeomorphism between them, \(\mathbf{\tilde{\Phi}}:\mathcal{M}_{1}\to\mathcal{M}_{2}\), that specifies the transformation of the points \(\mathbf{\tilde{X}}_{0}\in\mathcal{M}_{1}\) and \(\mathbf{\tilde{X}}_{1}\in\partial\mathcal{M}_{1}\), i.e. \(\mathbf{\tilde{X}}_{0}^{\prime}=\mathbf{\tilde{\Phi}}(\mathbf{\tilde{X}}_{0})\in\mathcal{M}_{2}\) and \(\mathbf{\tilde{X}}_{1}^{\prime}=\mathbf{\tilde{\Phi}}(\mathbf{\tilde{X}}_{1})\in\partial\mathcal{M}_{2}\). The celebrated Riemann Mapping Theorem assures us that it is always possible to conformally map all such regions to the unit disk and that such a mapping is unique up to a Möbius transformation \(\varphi:\mathbb{D}\to\mathbb{D}\) of the form [38] \[\varphi(z)=e^{i\theta}\frac{z-z_{0}}{1-\overline{z_{0}}z},\quad|z_{0}|<1,\quad\theta\in[0,2\pi). \tag{37}\] Given a pair of such conformal maps, \(\mathbf{\tilde{R}}_{1}:\mathbb{D}\to\mathcal{M}_{1}\) and \(\mathbf{\tilde{R}}_{2}:\mathbb{D}\to\mathcal{M}_{2}\), we can fix this conformal gauge freedom by demanding that \(\mathbf{\tilde{R}}_{1}^{-1}(\mathbf{\tilde{X}}_{0})=\mathbf{\tilde{R}}_{2}^{-1}(\mathbf{\tilde{X}}_{0}^{\prime})=0\) and \(\mathbf{\tilde{R}}_{1}^{-1}(\mathbf{\tilde{X}}_{1})=\mathbf{\tilde{R}}_{2}^{-1}(\mathbf{\tilde{X}}_{1}^{\prime})=1\). In this case, both \(\mathbf{\tilde{R}}_{1}\) and \(\mathbf{\tilde{R}}_{2}\) are guaranteed to be unique. The most general admissible mapping \(\mathbf{\tilde{\Phi}}:\mathcal{M}_{1}\to\mathcal{M}_{2}\) can now be decomposed into \(\mathbf{\tilde{\Phi}}=\mathbf{\tilde{R}}_{2}\circ f\circ\mathbf{\tilde{R}}_{1}^{-1}\), where \(f:\mathbb{D}\to\mathbb{D}\) is the unique quasiconformal mapping fixing \(f(0)=0\) and \(f(1)=1\) with associated Beltrami coefficient \(\mu\). We therefore arrive at the following relation regarding the space of all diffeomorphisms of topological disks and the space of all Beltrami coefficients \[\frac{\text{Space of Diffeomorphisms}}{\text{M\"{o}bius Transformations}}\cong\text{Space of Beltrami Coefficients}, \tag{10}\] where the congruence symbol denotes an isomorphism. We pause to emphasize the importance of this observation. Any arbitrary sense-preserving diffeomorphism connecting two topological disks is a quasiconformal transformation. Furthermore, the Beltrami coefficient characterizing this transformation uniquely describes any and all local anisotropy associated with the mapping. Finally, given a Beltrami coefficient, \(\mu\), and the correspondence between two points, this transformation is also unique.

### Beltrami Holomorphic Flow

The Beltrami Holomorphic Flow (BHF) gives a precise form for the variation of a quasiconformal mapping due to the variation of its associated Beltrami coefficient under suitable normalization. The BHF on the unit disk, \(\mathbb{D}\), was deduced by Lui _et al._ in [37], where it was proposed as a tool for the optimization of registrations between static pairs of surfaces. A more formal treatment of the subject matter can be found therein. For our purposes it is sufficient to consider a time dependent Beltrami coefficient \(\mu(z,t)\) defined over \(\mathbb{D}\). As \(\mu\) changes, it will induce a corresponding flow in the unique quasiconformal map \(w\), where \(w(0)=0\), \(w(1)=1\), and \(\mu=\partial_{\bar{z}}w/\partial_{z}w\).
This flow is given by \[w(z,t+\delta t)=w(z,t)+\delta t\,h_{t}[w,\dot{\mu}](z)+\mathcal{O}(\delta t^{2}) \tag{11}\] where \[h_{t}[w,\dot{\mu}](z)=\int_{\mathbb{D}}K(z,\zeta)\,d\eta^{1}\,d\eta^{2}, \tag{12}\] and \[\begin{split}& K(z,\zeta)=-\frac{w(z)\left(w(z)-1\right)}{\pi}\quad\times\\ &\quad\bigg{(}\frac{\dot{\mu}(\zeta)\left(\partial_{\zeta}w(\zeta)\right)^{2}}{w(\zeta)\left(w(\zeta)-1\right)\left(w(\zeta)-w(z)\right)}\\ &\quad+\frac{\overline{\dot{\mu}(\zeta)\left(\partial_{\zeta}w(\zeta)\right)^{2}}}{w(\zeta)\left(1-\overline{w(\zeta)}\right)\left(1-\overline{w(\zeta)}\,w(z)\right)}\bigg{)},\end{split} \tag{13}\] for \(\zeta=\eta^{1}+i\eta^{2}\). The following form of \(h_{t}[w,\dot{\mu}](z)\) will be useful when we discuss the discretization of the BHF: \[h_{t}[w,\dot{\mu}](z)=\int_{\mathbb{D}}\begin{pmatrix}G_{1}\,\dot{\mu}_{1}+G_{2}\,\dot{\mu}_{2}\\ G_{3}\,\dot{\mu}_{1}+G_{4}\,\dot{\mu}_{2}\end{pmatrix}\,d\eta^{1}\,d\eta^{2} \tag{14}\] where \(\dot{\mu}=\dot{\mu}_{1}+i\,\dot{\mu}_{2}\) and \(G_{1},G_{2},G_{3},G_{4}\) are real valued functions defined on \(\mathbb{D}\) whose form can be deduced from \(K(z,\zeta)\). Here, we identify \(A+iB\) as the column vector \(\begin{pmatrix}A\\ B\end{pmatrix}\). The BHF can be exploited to help optimize arbitrary functionals of surface diffeomorphisms on topological disks. In particular, it provides us with the appropriate descent directions needed for gradient based optimization methods. Consider some functional \[\mathcal{S}[\mu]=\int_{\mathbb{D}}\mathcal{L}[\mu,w]\,dx^{1}\,dx^{2}, \tag{15}\] which may depend on \(\mu\) both explicitly and implicitly through the quasiconformal mapping \(w\). Direct calculation yields \[\begin{split}\frac{\delta\mathcal{S}[\mu]}{\delta\mu}(\zeta)&=(\nabla_{\mu}\mathcal{L}[\mu,w])(z)\\ &+\int_{\mathbb{D}}\begin{pmatrix}A(z)\,G_{1}(z,\zeta)+B(z)\,G_{3}(z,\zeta)\\ A(z)\,G_{2}(z,\zeta)+B(z)\,G_{4}(z,\zeta)\end{pmatrix}\,dx^{1}\,dx^{2},\end{split} \tag{16}\] where \((\nabla_{\mu}\mathcal{L}[\mu,w])(z)=(\partial\mathcal{L}/\partial\mu_{1},\partial\mathcal{L}/\partial\mu_{2})^{T}\), \(\begin{pmatrix}A\\ B\end{pmatrix}=(\nabla_{w}\mathcal{L}[\mu,w])(z)\) denotes the gradient of \(\mathcal{L}\) with respect to the map \(w\), and \(\begin{pmatrix}A\\ B\end{pmatrix}\cdot\begin{pmatrix}C\\ D\end{pmatrix}\) should be interpreted as the standard dot product of two vectors. The quantity \(\delta\mathcal{S}[\mu]/\delta\mu\) can be used to define a descent direction for a flow on the space of \(\mu\) that optimizes \(\mathcal{S}[\mu]\).

## Appendix B Numerical Methods for Optimal Growth Pattern Selection

Numerical solutions to the problem of optimal growth were generated using a geometric finite difference method [53]. Consider a smooth surface \(\mathcal{M}_{t}\subset\mathbb{R}^{3}\) parameterized in the usual way by a set of coordinates \(\vec{\mathbf{x}}\in\mathbb{R}^{2}\). In our methods, this smooth parameterized 3D surface was approximated by a mesh triangulation \(\mathcal{M}=(\mathcal{F},\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) denotes the set of vertices comprising the mesh, \(\mathcal{F}\) denotes the connectivity list defining mesh faces, and \(\mathcal{E}\) denotes the set of mesh edges, defined parsimoniously through \(\mathcal{F}\).
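As a small illustration of how \(\mathcal{E}\) can be derived parsimoniously from \(\mathcal{F}\), the sketch below (a generic mesh utility, not code from the paper) extracts the unique undirected edges of a triangulation stored as an integer array of face indices.

```python
import numpy as np

def edges_from_faces(F):
    """Return the unique undirected edges of a triangle mesh.

    F : (n_faces, 3) integer array of vertex indices, one row per face,
        vertices ordered counter-clockwise within each face.
    """
    # Each face [i, j, k] contributes the directed edges (i,j), (j,k), (k,i).
    E = np.vstack([F[:, [0, 1]], F[:, [1, 2]], F[:, [2, 0]]])
    E = np.sort(E, axis=1)           # make edges undirected
    return np.unique(E, axis=0)      # discard duplicates shared by two faces

# Hypothetical two-triangle mesh covering a square.
F = np.array([[0, 1, 2], [0, 2, 3]])
print(edges_from_faces(F))           # 5 edges: the 4 sides plus the diagonal
```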
Each vertex \(v_{i}\in\mathcal{V}\) carries a set of 3D coordinates, \(\vec{\mathbf{R}}_{i}\), corresponding to that vertex's position in physical space, and also a set of 2D coordinates, \(\vec{\mathbf{x}}_{i}\), corresponding to that vertex's position in the domain of parameterization. In order to maintain a fixed mesh topology, we choose to order the vertices describing each face in a counter-clockwise fashion (Fig. 7(a)). The edges of the face are labelled by the vertex index opposite to that edge. A discrete metric on a triangular mesh is a set of edge lengths, \(L:\mathcal{E}\rightarrow\mathbb{R}^{+}\), which satisfies the triangle inequality on each face \(F=[v_{i},v_{j},v_{k}]\in\mathcal{F}\): \[L_{i}+L_{j}>L_{k},\,L_{j}+L_{k}>L_{i},\,L_{k}+L_{i}>L_{j}. \tag{17}\] Due to the fact that convex combinations of discrete metrics are still valid discrete metrics, it is possible to show that the space of all discrete Riemannian metrics on a triangular mesh of fixed topology is convex [54].

### Non-Parametric Representations of Discrete Surfaces by Smooth Interpolants

Computing the optimal growth pattern for a given input requires searching the space of parameterizations of the final target shape. We must therefore be able to evaluate dynamical fields on surface configurations corresponding to arbitrary parameterizations. In order to do so, we store the final surface as a smooth interpolant over the unit disk \(\mathbb{D}\) rather than as an explicit triangulation with fixed topology and 3D vertex positions. In particular, we utilize natural neighbor interpolation for scattered 2D data points [55, 56]. We will now explain this method in its general form and then show how it is applied for the specific case of evaluating surface configurations. Let \(\mathcal{P}=\{\mathbf{p}_{1},\ldots,\mathbf{p}_{n}\}\) be a set of \(n\) points in \(\mathbb{R}^{2}\) and let \(\mathbf{\vec{\Phi}}\) be a vector-valued function defined on the convex hull of \(\mathcal{P}\). We assume that the function values are known at the points of \(\mathcal{P}\). In the context of surface evaluation, these are simply the 3D locations of the data points, i.e. \(\mathbf{\vec{R}}_{i}=\mathbf{\vec{\Phi}}(\mathbf{p}_{i})\). The point set \(\mathcal{P}\) defines a Voronoi tessellation of \(\mathbb{R}^{2}\). Equivalently, there is a unique Delaunay triangulation associated to this point set. We also require knowledge of the gradient at each point \(\mathbf{G}_{i}=\nabla\mathbf{\vec{\Phi}}(\mathbf{p}_{i})\). If the user has analytic knowledge of the function \(\mathbf{\vec{\Phi}}\), these gradients can be supplied as inputs. Otherwise, they are estimated by fitting high order Taylor polynomials to local neighborhoods of data points and extracting the first-order coefficients. Experiments show that fitting a 3rd order Taylor polynomial produces high quality results without being too computationally expensive. The interpolation is carried out for an arbitrary query point \(\mathbf{q}\) on the convex hull of \(\mathcal{P}\). When simulating the insertion of the query point \(\mathbf{q}\) into the Voronoi diagram of \(\mathcal{P}\), the virtual Voronoi cell of \(\mathbf{q}\) "steals" some area from the existing cells. This construction is illustrated in Figure 5. Let \(A(\mathbf{q})\) denote the area of the virtual Voronoi cell of \(\mathbf{q}\) and let \(A_{i}(\mathbf{q})\) denote the area of the sub-cell that would be stolen from the cell of \(\mathbf{p}_{i}\) by the cell of \(\mathbf{q}\).
The natural neighbor coordinates of \(\mathbf{q}\) with respect to the data point \(\mathbf{p}_{i}\in\mathcal{P}\) are defined to be \[\lambda_{i}(\mathbf{q})=\frac{A_{i}(\mathbf{q})}{A(\mathbf{q})}. \tag{10}\] These coordinates have the following properties:

* \(\mathbf{q}=\sum_{i=1}^{n}\lambda_{i}(\mathbf{q})\,\mathbf{p}_{i}\) (barycentric coordinate property)
* For any \(i,j\leq n\), \(\lambda_{i}(\mathbf{p}_{j})=\delta_{ij}\)
* \(\sum_{i=1}^{n}\lambda_{i}(\mathbf{q})=1\) (partition of unity property)

Furthermore, the natural neighbor coordinates depend continuously on the planar coordinates of \(\mathbf{q}=(q^{x},q^{y})\). In fact, one can calculate the gradient of the "stolen" sub-cell area \(A_{i}(\mathbf{q})\) with respect to the components of \(\mathbf{q}\). Let the \(m\) natural neighbors of \(\mathbf{q}\) be denoted \(\{\mathbf{p}_{1},\ldots,\mathbf{p}_{m}\}\) and be arranged in counter-clockwise order around \(\mathbf{q}\) (this numbering is for convenience in the calculation of local quantities with respect to \(\mathbf{q}\) and does not have to match the global numbering scheme used in \(\mathcal{P}\)). Additionally, let the set \(\{\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\}\) refer to the counter-clockwise ordered vertices of the virtual Voronoi cell of \(\mathbf{q}\). It can be shown that \[\nabla A_{k}(\mathbf{q})=\frac{f_{k}}{d_{k}}\,\left(\frac{\mathbf{a}_{k}+\mathbf{a}_{k+1}}{2}-\mathbf{q}\right)=\frac{f_{k}}{d_{k}}\,\left(\frac{\mathbf{\vec{v}}_{k}+\mathbf{\vec{v}}_{k+1}}{2}\right) \tag{11}\] where \(f_{k}=||\mathbf{a}_{k+1}-\mathbf{a}_{k}||\), \(d_{k}=||\mathbf{p}_{k}-\mathbf{q}||\), and \(\mathbf{\vec{v}}_{k}=\mathbf{a}_{k}-\mathbf{q}\). The gradient of \(\lambda_{i}(\mathbf{q})\) follows trivially from application of the chain rule and the fact that \(A(\mathbf{q})=\sum_{k}A_{k}(\mathbf{q})\).

Figure 5: Natural neighbor coordinate construction. The set \(\{\mathbf{p}_{i}\}\) denote the data points whose functional values are being interpolated and the set \(\{\mathbf{a}_{i}\}\) denote vertices of the virtual Voronoi polygon associated with the query point \(\mathbf{q}\). Colors indicate area weights of data point contributions to the interpolated function value at \(\mathbf{q}\).

Having calculated the natural neighbor coordinates of \(\mathbf{q}\), we can infer the function value \(\mathbf{\vec{\Phi}}(\mathbf{q})\) via interpolation with respect to these coordinates. In particular, we choose to use Sibson's \(C^{1}\) interpolant [55]. Let \[\mathbf{\vec{Z}}^{0}(\mathbf{q})=\sum_{i}\lambda_{i}(\mathbf{q})\,\mathbf{\vec{\Phi}}(\mathbf{p}_{i}) \tag{12}\] denote the linear combination of the neighbor's function values weighted by the natural neighbor coordinates. Furthermore, we define the functions \[\mathbf{\vec{\xi}}_{i}(\mathbf{q})=\mathbf{\vec{\Phi}}(\mathbf{p}_{i})+\mathbf{G}_{i}^{T}(\mathbf{q}-\mathbf{p}_{i})\implies\mathbf{\vec{\xi}}(\mathbf{q})=\frac{\sum_{i}\frac{\lambda_{i}(\mathbf{q})}{||\mathbf{q}-\mathbf{p}_{i}||}\,\mathbf{\vec{\xi}}_{i}(\mathbf{q})}{\sum_{i}\frac{\lambda_{i}(\mathbf{q})}{||\mathbf{q}-\mathbf{p}_{i}||}}, \tag{13}\] \[\alpha(\mathbf{q})=\frac{\sum_{i}\lambda_{i}(\mathbf{q})\,||\mathbf{q}-\mathbf{p}_{i}||}{\sum_{i}\frac{\lambda_{i}(\mathbf{q})}{||\mathbf{q}-\mathbf{p}_{i}||}}, \tag{10}\] and \[\beta(\mathbf{q})=\sum_{i}\lambda_{i}(\mathbf{q})\,||\mathbf{q}-\mathbf{p}_{i}||^{2}. \tag{11}\]
In terms of these quantities, the final interpolant is defined to be \[\vec{\mathbf{Z}}^{1}(\mathbf{q})=\frac{\alpha(\mathbf{q})\,\vec{\mathbf{Z}}^{0}(\mathbf{q})+\beta(\mathbf{q})\,\vec{\mathbf{\xi}}(\mathbf{q})}{\alpha(\mathbf{q})+\beta(\mathbf{q})}. \tag{12}\] Sibson noticed that this interpolant is \(C^{1}\) continuous with respect to the coordinates of the query point \(\mathbf{q}\). In other words, we can calculate analytic gradients of \(\vec{\mathbf{Z}}^{1}(\mathbf{q})\) with respect to \(\mathbf{q}\) (making use of the gradient in Eq. (10) in a series of chain rule calculations). This property makes \(\vec{\mathbf{Z}}^{1}\) suitable for use in gradient based optimization procedures. The application of natural neighbor interpolation to storing surfaces is straightforward. We provide as input a set of points \(\mathcal{V}\) defining a surface, along with a set of 3D coordinates \(\{\vec{\mathbf{R}}_{i}\}\) and 2D coordinates \(\{\vec{\mathbf{x}}_{i}\}\) for each point \(v_{i}\in\mathcal{V}\). Note that we do not need to supply a face connectivity list \(\mathcal{F}\) - any association between the points will be defined through the Delaunay triangulation of \(\mathcal{V}\) in \(\mathbb{R}^{2}\). We can now evaluate updated surface triangulations for _any_ updated parameterization \(\{\vec{\mathbf{x}}^{\prime}_{i}\}\). This is shown in Figure 6.

### Discretization of the Beltrami Holomorphic Flow

In this section, we explain our numerical computation of the variation in a quasiconformal map \(w:\mathbb{D}\rightarrow\mathbb{D}\) under the variation of its associated Beltrami coefficient \(\mu\). This implementation is based on the one formulated by Lui _et al._ in [37]. Consider a triangulation of the unit disk defined by a face connectivity list \(\mathcal{F}\) and a set of vertices \(\mathcal{V}\), where \(\vec{\mathbf{x}}_{i}\in\mathbb{D}\) denotes the 2D coordinates of the \(i\)th vertex \(v_{i}\in\mathcal{V}\). Let \(\vec{\mathbf{u}}_{i}=(u^{1},u^{2})\in\mathbb{D}\) denote the updated coordinates of the vertex \(v_{i}\) as the result of a quasiconformal mapping. In the continuous setting, the map \(w:\mathbb{D}\rightarrow\mathbb{D}\) has an associated Beltrami coefficient \[\mu=\frac{\partial_{\bar{z}}w}{\partial_{z}w}, \tag{13}\] where \(z=x^{1}+ix^{2}\) and \(w=u^{1}+iu^{2}\). In the discrete setting, the map \(w:\mathbb{D}\rightarrow\mathbb{D}\), i.e. updated vertex coordinates, defines a set of piece-wise constant affine transformations for each triangle in the mesh. We can define a corresponding piece-wise constant Beltrami coefficient on each face \(F\in\mathcal{F}\) by discretizing Eq. (13) using the finite element method (FEM) gradient 1. For a face \(F=[\vec{\mathbf{x}}_{i},\vec{\mathbf{x}}_{j},\vec{\mathbf{x}}_{k}]\), the gradient of a quantity \(f\) defined on each vertex is given by: \[\nabla f=\left(\partial_{x^{1}}f,\,\partial_{x^{2}}f\right)^{T}=\frac{1}{2A_{F}}\,\left((f_{j}-f_{i})(\vec{\mathbf{x}}_{i}-\vec{\mathbf{x}}_{k})^{\perp}+(f_{k}-f_{i})(\vec{\mathbf{x}}_{j}-\vec{\mathbf{x}}_{i})^{\perp}\right), \tag{14}\] where \(A_{F}\) is the area of face \(F\) and the symbol \(\perp\) denotes a counter-clockwise rotation by \(90^{\circ}\) in the plane of the face. Footnote 1: The use of the term \(\mu\) is not necessary for the computation of the variation in the \(\mathcal{F}\). Geometrically, this operation converts a scalar quantity defined on mesh vertices to a tangent vector defined on each mesh face.
The Beltrami coefficient on each face is therefore given by \[\mu_{F}=\frac{\left(\partial_{x^{1}}u^{1}-\partial_{x^{2}}u^{2}\right)+i(\partial_{x^{1}}u^{2}+\partial_{x^{2}}u^{1})}{\left(\partial_{x^{1}}u^{1}+\partial_{x^{2}}u^{2}\right)+i(\partial_{x^{1}}u^{2}-\partial_{x^{2}}u^{1})}, \tag{15}\] where the discrete partial derivative operators are defined through Eq. (14). For applications where we need to define the Beltrami coefficient on vertices, we can simply average the face-based quantity in Eq. (15) \[\mu_{v}=\sum_{F\in\mathcal{N}_{v}}\alpha_{F}\,\mu_{F}, \tag{16}\] using a set of normalized weights \(\{\alpha_{F}\}\), where \(\mathcal{N}_{v}\) denotes the set of faces \(F\) attached to vertex \(v\). Experiments show that weighting this average by the normalized internal angle adjacent to the vertex within each face produces better results than simple averaging or area-weights. For notational convenience, we define a modified partial derivative operator \(D\) that includes this angle weighted averaging step, i.e. \(D_{x}f\) maps scalar vertex quantities to vectors in the tangent space of each vertex. We can define the vertex-based Beltrami coefficient directly in terms of these modified operators \[\mu_{v}=\frac{\left(D_{x^{1}}u^{1}-D_{x^{2}}u^{2}\right)+i\left(D_{x^{1}}u^{2}+D_{x^{2}}u^{1}\right)}{\left(D_{x^{1}}u^{1}+D_{x^{2}}u^{2}\right)+i\left(D_{x^{1}}u^{2}-D_{x^{2}}u^{1}\right)}. \tag{17}\]

Figure 6: Evaluation of surfaces via natural neighbor interpolation for different 2D parameterizations.

For both quasiconformal map reconstruction and surface function optimization, the key step is the computation of the variation \(h_{\epsilon}[w,\delta\mu]\) of the map \(w\) under the variation \(\mu\mapsto\mu+\epsilon\,\delta\mu\). We use \(\epsilon\) rather than \(t\) and \(\delta\mu\) rather than \(\dot{\mu}\) to emphasize that the flow optimizing our surface function does not correspond to a physical anisotropy field changing in time. We have \[h_{\epsilon}[w,\delta\mu](z)=\int_{\mathbb{D}}K(z,\zeta)\,d\eta^{1}\,d\eta^{2}, \tag{100}\] where the kernel \(K(z,\zeta)\) is defined explicitly in Eq. (13), replacing \(\dot{\mu}\) with \(\delta\mu\). The quantities \(w\) and \(\delta\mu\) are defined on each vertex \(v\). The derivative \(\partial_{z}w\) can be approximated as \[(\partial_{z}w)_{v}\approx\frac{(D_{x^{1}}u^{1}+D_{x^{2}}u^{2})+i\,(D_{x^{1}}u^{2}-D_{x^{2}}u^{1})}{2}. \tag{101}\] For each pair of vertices \((v_{j},v_{k})\), the kernel \(K(v_{j},v_{k})\) can be assembled on a per-vertex basis in terms of the sets \(\{w_{v}\}\), \(\{(\partial_{z}w)_{v}\}\), and \(\{\delta\mu_{v}\}\). When \(K(v_{j},v_{k})\) is singular, we set \(K(v_{j},v_{k})=0\). Let \(A_{v}\) denote the barycentric area of each vertex, i.e. \[A_{v}=\frac{1}{3}\,\sum_{F\in\mathcal{N}_{v}}A_{F}. \tag{102}\] Then, \(h_{\epsilon}[w,\delta\mu]\) can be approximated by \[h_{\epsilon}[w,\delta\mu](v_{k})=\sum_{v_{j}}K(v_{k},v_{j})\,A_{v_{j}}. \tag{103}\] Just as in the continuous setting, we can write out this sum explicitly in terms of its real and imaginary components \[\begin{split}& h_{\epsilon}[w,\delta\mu](v_{k})\\ &=\sum_{v_{j}}\,\begin{pmatrix}G_{1}(v_{k},v_{j})\,\delta\mu_{1}(v_{j})+G_{2}(v_{k},v_{j})\,\delta\mu_{2}(v_{j})\\ G_{3}(v_{k},v_{j})\,\delta\mu_{1}(v_{j})+G_{4}(v_{k},v_{j})\,\delta\mu_{2}(v_{j})\end{pmatrix}\,A_{v_{j}},\end{split} \tag{104}\] where \(\delta\mu=\delta\mu_{1}+i\,\delta\mu_{2}\).
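Before moving on to the energy minimization, note that the per-face FEM gradient of Eq. (14) and the face-based Beltrami coefficient of Eq. (15) translate almost directly into code. The sketch below is a simplified, single-triangle illustration under the same definitions; it is not the authors' implementation, and the angle-weighted vertex averaging of Eq. (16) is omitted.

```python
import numpy as np

def rot90(v):
    """Counter-clockwise rotation by 90 degrees in the parameterization plane."""
    return np.array([-v[1], v[0]])

def face_gradient(x_i, x_j, x_k, f_i, f_j, f_k):
    """FEM gradient of a scalar vertex field over one triangle, cf. Eq. (14)."""
    e1, e2 = x_j - x_i, x_k - x_i
    A = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])    # triangle area in 2D
    return ((f_j - f_i) * rot90(x_i - x_k) +
            (f_k - f_i) * rot90(x_j - x_i)) / (2 * A)

def face_beltrami(x, u, face):
    """Piecewise-constant Beltrami coefficient of the map x -> u on one face, cf. Eq. (15)."""
    i, j, k = face
    du1 = face_gradient(x[i], x[j], x[k], u[i, 0], u[j, 0], u[k, 0])   # grad of u^1
    du2 = face_gradient(x[i], x[j], x[k], u[i, 1], u[j, 1], u[k, 1])   # grad of u^2
    num = (du1[0] - du2[1]) + 1j * (du2[0] + du1[1])
    den = (du1[0] + du2[1]) + 1j * (du2[0] - du1[1])
    return num / den

# Hypothetical one-triangle example: an anisotropic stretch of the parameterization.
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # original 2D coordinates
u = np.array([[0.0, 0.0], [1.2, 0.0], [0.0, 0.8]])   # updated 2D coordinates
print(face_beltrami(x, u, (0, 1, 2)))                # mu = 0.2: an anisotropic, non-conformal map
```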
(15), the optimal growth energy for constant growth patterns with \(\dot{\Gamma}=\dot{\gamma}=0\) is defined to be \[\begin{split}&\mathcal{S}=\lambda\,D_{SEM}[\mathbf{g}_{T};\,\mathcal{M}_{T}]\quad+\\ & T\int_{\mathcal{B}}d^{2}\mathbf{\vec{x}}\,\sqrt{g_{0}}\,\,\left[c_{1}|\nabla\Gamma|^{2}+c_{2}|\nabla\gamma|^{2}\right].\end{split} \tag{105}\] In practice, however, implementing such constrained optimization methods is difficult, and it is attractive to reformulate Eq. (105) as an unconstrained problem. Such a reformulation is possible by making the approximation \(\gamma\approx\dot{\mu}\). From Eq. (9), we see that this approximation holds for \(|\mu|^{2}\ll 1\) and \(e^{i\psi}=(\overline{\partial_{z}w})/(\partial_{z}w)\approx 1\). Writing \(z=r\,e^{i\theta}\), this latter condition is exactly true, for instance, when \(w(z,t)=\rho(r,t)\,e^{i\theta}\), which applies to a broad variety of relevant growth patterns, including the growth pattern generating the hemispherical cap shown in Fig. 3(a-c). In this new approximation scheme, since \(\ddot{\mu}\approx\dot{\gamma}=0\), the Beltrami coefficient as a function of time is simply \(\mu(\mathbf{\vec{x}},t)\approx t\,\dot{\mu}(\mathbf{\vec{x}})\approx t\,\gamma(\mathbf{\vec{x}})\), where we have exploited the fact that we are always free to choose a conformal parameterization for our initial time point. The modified constant growth functional is given by \[\bar{\mathcal{S}}=\int_{\mathcal{B}}d^{2}\mathbf{\vec{x}}\,\sqrt{g_{0}}\,\,\left[T\,c_{1}|\nabla\Gamma|^{2}+T^{-1}\,c_{2}|\nabla\mu(T)|^{2}\right]. \tag{106}\] Given an initial configuration, both \(\Gamma\) and \(\mu(T)\) can be found directly from a candidate final configuration without having to calculate any intermediate geometries. In other words, given an initial configuration and final target shape, we can solve an _unconstrained_ optimization problem over the space of parameterizations of the final shape using the BHF to find the fields \(\Gamma\) and \(\mu(T)\) that minimize the reduced cost function in Eq. (106). Since we are directly optimizing over the space of parameterizations of the final shape, the shape constraint is satisfied by construction, which obviates the need for an explicit Lagrange multiplier. We can then find all of the intermediate metric tensors using the previously discussed properties of constant growth patterns. With these intermediate metrics in hand, we can embed the time-dependent target geometries in \(\mathbb{R}^{3}\) to generate a complete prediction for both the coarse-grained shape and flow of a growing tissue over time. Our numerical optimization method requires as input a mesh triangulation with a set of 3D vertex coordinates \(\mathbf{\vec{R}}_{0}\) defining the initial 3D configuration, a set of 2D vertex coordinates defining the Lagrangian parameterization \(\mathbf{\vec{x}}_{0}\), and a set of 3D vertex coordinates \(\mathbf{\vec{X}}_{T}\), which will be converted into a non-parametric representation of the final shape using natural neighbor interpolation. At each step of the minimization process, the energy is evaluated in terms of an updated set of 2D coordinates \(\mathbf{\vec{w}}_{T}\) and a corresponding set of final 3D vertex coordinates \(\mathbf{\vec{R}}_{T}\), for which \(\mathbf{\vec{x}}_{0}\) and \(\mathbf{\vec{X}}_{T}\) serve as an initial guess.
Let \(A_{F}^{(0)}\) denote the 3D area of face \(F\) in the initial configuration at time \(t=0\) and let \(A_{F}\) denote the 3D area of face \(F\) in the final configuration at time \(t=T\). The constant growth rate \(\Gamma\) can be approximated on each face through \[A_{F}=A_{F}^{(0)}\,e^{\Gamma_{F}\,T}\implies\Gamma_{F}=\frac{1}{T}\log\left[\frac{A_{F}}{A_{F}^{(0)}}\right], \tag{107}\] and then mapped to values on vertices via the same angle-weighted averaging procedure defined in Eq. (16) for face-based Beltrami coefficients. The optimal growth energy can therefore be approximated as \[\begin{split}\tilde{\mathcal{S}}&=\sum_{F\in\mathcal{F}}A_{F}^{(0)}\,\left(\tilde{c}_{1}\left|\left|\frac{A_{F}^{(0)}}{A_{F}}\left(\nabla\frac{A}{A^{(0)}}\right)_{F}\right|\right|^{2}+\tilde{c}_{2}|(\nabla\mu)_{F}|^{2}\right)\\ &=\underbrace{\sum_{F\in\mathcal{F}}A_{F}^{(0)}\,\mathcal{H}(\mathbf{\vec{R}}(w[\mu]))_{F}}_{\equiv\tilde{\mathcal{S}}_{1}}+\underbrace{\tilde{c}_{2}\sum_{F\in\mathcal{F}}A_{F}^{(0)}\,|(\nabla\mu)_{F}|^{2}}_{\equiv\tilde{\mathcal{S}}_{2}},\end{split} \tag{100}\] where we have absorbed factors of \(1/T\) into the modified constants \(\tilde{c}_{1}\) and \(\tilde{c}_{2}\) for notational simplicity. In the second equality, we introduced the functional \(\mathcal{H}(\mathbf{\vec{R}}(w[\mu]))\) to explicitly show that \(\tilde{\mathcal{S}}_{1}\) only depends on \(\mu\) through the 3D embedding of the updated planar coordinates \(\mathbf{\vec{R}}(w[\mu])\). This energy can now be minimized using standard gradient descent methods over the space of vertex-based Beltrami coefficients \(\mu_{v}\). For our purposes, the gradients of Eq. (100) were calculated by hand. The gradients of \(\tilde{\mathcal{S}}_{2}\), while tedious to tabulate, are conceptually trivial. Calculating the gradients of \(\tilde{\mathcal{S}}_{1}\) requires synthesizing all of the previously described methods comprising our numerical machinery. \[\begin{split}&\nabla_{\mu}\tilde{\mathcal{S}}_{1}=\sum_{F\in\mathcal{F}}A_{F}^{(0)}\,\nabla_{\mu}\mathcal{H}(\mathbf{\vec{R}}(w[\mu]))_{F}\\ &=\sum_{F\in\mathcal{F}}A_{F}^{(0)}\,\left(\nabla_{\mathbf{\vec{R}}}\mathcal{H}(\mathbf{\vec{R}})_{F}\right)\cdot\left(\nabla_{w}\mathbf{\vec{R}}\right)\cdot\left(\nabla_{\mu}w\right)\end{split} \tag{101}\] The gradients \(\nabla_{\mathbf{\vec{R}}}\mathcal{H}(\mathbf{\vec{R}})\) with respect to the 3D vertex coordinates \(\mathbf{\vec{R}}\) are calculated in the standard way. The gradients \(\nabla_{w}\mathbf{\vec{R}}\) with respect to the updated 2D vertex coordinates are determined analytically by our natural neighbor interpolation scheme, as detailed in Appendix B.1. Next, the gradients \(\nabla_{\mu}w\) with respect to the vertex-based Beltrami coefficients are calculated according to the BHF scheme detailed in Appendix A.2. Finally, the full gradients of \(\tilde{\mathcal{S}}_{1}\) with respect to the \(\mu_{v}\) are computed via the chain rule in terms of these quantities (Eq. (101)). For future applications involving some novel optimality criteria \(\mathcal{S}^{\prime}(\mathbf{\vec{R}}(w[\mu]))\), the gradients \(\nabla_{w}\mathcal{S}^{\prime}(\mathbf{\vec{R}}(w))\) may be computed via automatic differentiation methods optimized for discrete geometry processing, e.g. [58], and strung together with the application-independent gradients \(\nabla_{\mu}w\) determined by the BHF.
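As a small illustration of the area-based discretization above, the sketch below computes the per-face constant growth rate \(\Gamma_{F}\) from initial and final 3D vertex positions in the spirit of Eq. (107). The function names and array layout are our own assumptions, not the authors' implementation.

```python
import numpy as np

def triangle_area(R_i, R_j, R_k):
    """Area of a triangle with 3D vertex positions R_i, R_j, R_k."""
    return 0.5 * np.linalg.norm(np.cross(R_j - R_i, R_k - R_i))

def face_growth_rates(R0, RT, faces, T):
    """Per-face constant growth rate Gamma_F = (1/T) * log(A_F / A_F^(0))."""
    gammas = np.empty(len(faces))
    for f, (i, j, k) in enumerate(faces):
        A0 = triangle_area(R0[i], R0[j], R0[k])
        AT = triangle_area(RT[i], RT[j], RT[k])
        gammas[f] = np.log(AT / A0) / T
    return gammas

# one triangle whose area doubles over T = 1 has Gamma_F = log(2)
R0 = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
RT = np.array([[0.0, 0, 0], [2, 0, 0], [0, 1, 0]])
print(face_growth_rates(R0, RT, [(0, 1, 2)], T=1.0))   # -> [0.693...]
```

Averaging these face values onto vertices with the angle weights of Eq. (16) then yields the vertex field entering the smoothness penalty.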
In this way, the useful analytic properties of the BHF can be fully exploited while avoiding the laborious task of manual differentiation.

### Embedding Intrinsic Geometries in 3D

Once our numerical machinery has produced a solution for the optimal time course of intrinsic geometry, this geometry must be embedded in \(\mathbb{R}^{3}\) in order to produce a complete prediction for the shape and flow of the growing tissue. As before, we consider a smooth surface \(\mathcal{M}_{t}\subset\mathbb{R}^{3}\) parameterized in the usual way by a set of coordinates \(\mathbf{\vec{x}}\in\mathbb{R}^{2}\). At each point \(\mathbf{\vec{R}}\in\mathcal{M}_{t}\), the geometry of the surface is captured by the first fundamental form, \(g_{\alpha\beta}=\partial_{\alpha}\mathbf{\vec{R}}\cdot\partial_{\beta}\mathbf{\vec{R}}\), and the second fundamental form, \(b_{\alpha\beta}=\partial_{\alpha}\partial_{\beta}\mathbf{\vec{R}}\cdot\mathbf{\hat{n}}\), where \(\mathbf{\hat{n}}\) is the unit normal to the surface at \(\mathbf{\vec{R}}\). It is now necessary to explicitly distinguish the physical geometry from the intrinsic geometry. We characterize the time-dependent intrinsic geometry of the surface in terms of the target metric, \(\mathbf{\overline{g}}(t)\), and the target curvature tensor, \(\mathbf{\overline{b}}(t)\). We denote the components of the inverse target metric tensor by \(\bar{g}^{\alpha\beta}\), defined by \(\bar{g}^{\alpha\sigma}\bar{g}_{\sigma\beta}=\delta_{\beta}^{\alpha}\), so that indices of tensorial quantities are raised and lowered with respect to the target metric. We embed the target geometry in \(\mathbb{R}^{3}\) utilizing a method motivated by continuum mechanics. We assume that the tissue is an incompatible elastic shell with energy \[\begin{split}& E=E_{S}+E_{B}=\frac{Y}{2(1-\nu^{2})}\int d^{2}\mathbf{\vec{x}}\,\sqrt{\bar{g}}\,\bigg{\{}\\ &\frac{h}{4}\left[\,\nu\,\mathrm{Tr}[\mathbf{\bar{g}}^{-1}(\mathbf{g}-\mathbf{\bar{g}})]^{2}+(1-\nu)\,\mathrm{Tr}[(\mathbf{\bar{g}}^{-1}(\mathbf{g}-\mathbf{\bar{g}}))^{2}]\,\right]+\\ &\frac{h^{3}}{12}\,\left[\,\nu\,\mathrm{Tr}[\mathbf{\bar{g}}^{-1}(\mathbf{b}-\mathbf{\bar{b}})]^{2}+(1-\nu)\,\mathrm{Tr}[(\mathbf{\bar{g}}^{-1}(\mathbf{b}-\mathbf{\bar{b}}))^{2}]\,\right]\bigg{\}},\end{split} \tag{102}\] where \(h\) is the thickness of the tissue, \(Y\) is Young's modulus, and \(\nu\) is the Poisson ratio. This mechanical energy has been studied both in the context of morphogenesis and the design and manipulation of synthetic surface structures for both monolayer and bilayer configurations [23, 31, 59]. The stretching energy \(E_{S}\propto h\) describes the mismatch between the target rest lengths and angles of the surface and their physical counterparts. Analogously, the bending energy \(E_{B}\propto h^{3}\) describes the mismatch between the target and physical curvatures. Essentially, the system will adopt an equilibrium configuration that matches the target geometry as much as possible while balancing the competition between stretching and bending. Importantly, it is not assumed that \(\bar{g}\) and \(\bar{b}\) satisfy the Gauss-Codazzi-Mainardi-Peterson compatibility conditions [60]. If they do not, then there is no attainable 3D configuration for which the energy vanishes identically; in this situation, the equilibrium configuration will harbor residual stresses. Similarly to the growth pattern optimization machinery, equilibrium configurations of this energy are computed using a geometric finite difference method [53].
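To make the stretching term concrete, the short sketch below evaluates the in-plane (stretching) integrand of the energy above at a single point, given the actual and target metrics as \(2\times 2\) matrices. It is a minimal illustration with assumed inputs, not the paper's finite difference implementation, and it omits the bending term and the overall material prefactors.

```python
import numpy as np

def stretching_integrand(g, g_bar, nu=0.3):
    """Bracketed stretching content of the elastic energy:
    nu * Tr[gbar^{-1}(g - gbar)]^2 + (1 - nu) * Tr[(gbar^{-1}(g - gbar))^2]."""
    strain = np.linalg.solve(g_bar, g - g_bar)     # gbar^{-1} (g - gbar)
    return nu * np.trace(strain) ** 2 + (1.0 - nu) * np.trace(strain @ strain)

g_bar = np.eye(2)                      # target metric (rest state)
g = np.diag([1.2, 0.9])                # actual metric: stretched in x, compressed in y
print(stretching_integrand(g, g_bar))  # penalizes the metric mismatch
```

The integrand vanishes exactly when \(\mathbf{g}=\mathbf{\bar{g}}\); an analogous expression with \(\mathbf{b}\) and \(\mathbf{\bar{b}}\) gives the bending contribution.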
The 3D surface is approximated by a mesh triangulation. Each face \(F\in\mathcal{F}\) is ordered in a counter-clockwise fashion, i.e. \(F=[\mathbf{\vec{R}}_{i},\mathbf{\vec{R}}_{j},\mathbf{\vec{R}}_{k}]\), where each vertex \(\mathbf{\vec{R}}_{i}\in\mathbb{R}^{3}\). The edges of the face are labelled by the vertex index opposite to that edge. The normal vector of the face is simply given by \(\mathbf{\vec{n}}=\mathbf{\vec{e}}_{i}\times\mathbf{\vec{e}}_{j}=\mathbf{\vec{e}}_{j}\times\mathbf{\vec{e}}_{k}=\mathbf{\vec{e}}_{k}\times\mathbf{\vec{e}}_{i}\). The unit normal vector is therefore \(\mathbf{\hat{n}}=\mathbf{\vec{n}}/(2A)\), where \(A\) is the area of the face. For simplicity, we define the three in-plane mid-edge normals, \(\mathbf{\vec{t}}_{i}\), \(\mathbf{\vec{t}}_{j}\), and \(\mathbf{\vec{t}}_{k}\), which are simply the corresponding edge vectors rotated clockwise by \(90^{\circ}\) in the plane of the face, i.e. \(\mathbf{\vec{t}}_{i}=\mathbf{\vec{e}}_{i}\times\mathbf{\hat{n}}\). This construction is illustrated in Fig. 7(a). Since the total elastic energy of the surface is calculated by evaluating an integral, we must choose a stencil of finite area to serve as the discrete analog of the integral measure. For this purpose we choose a 'face-with-flaps' stencil. The structure and nomenclature associated with this stencil are shown in Fig. 7(b). The extra flaps are included in order to evaluate the bending energy. Here, the bending energy is evaluated at the shared edges of the triangulation, i.e. along the triangulation hinges. The structure of a single hinge is shown in Fig. 7(c) and Fig. 7(d). Notice that \(\theta\) denotes the _bending_ or _hinge_ angle between the unit normal vectors of adjacent faces. The dihedral angle of the edge is therefore \(\pi-\theta\). In general, \(\theta\) is a _signed_ quantity which depends on edge orientation. The sign of \(\theta\) for a given edge orientation is chosen arbitrarily, but consistently, to be the same as the sign of \((\mathbf{\hat{n}}_{1}\times\mathbf{\hat{n}}_{2})\cdot\mathbf{\vec{e}}_{0}\), where \(\mathbf{\hat{n}}_{1}\) is the unit normal of the current face, \(\mathbf{\hat{n}}_{2}\) is the unit normal of the adjacent face flap, and \(\mathbf{\vec{e}}_{0}\) is the associated edge with orientation defined counter-clockwise relative to \(\mathbf{\hat{n}}_{1}\). In this way, \(\theta\) is positive when the normals point away from each other and negative when they point towards each other. Our goal is to build a discrete formulation of the energy in Eq. (102). The target geometry of the system can be represented by a set of target edge lengths and bending angles. The target edge lengths must satisfy the triangle inequality on each face in order to constitute a valid geometry. In the discrete setting, the tensors \(\mathbf{g}\) and \(\mathbf{b}\) are approximated by piece-wise constant symmetric \(2\times 2\) matrices defined on mesh faces. The components of any such matrix defined over a subset of \(\mathbb{R}^{2}\) can be uniquely determined by its action on three pairwise non-parallel vectors in the plane. Analogously to its function in the continuous setting, the discrete metric tensor on a face is the unique matrix that returns \(\mathbf{\tilde{e}}_{i}^{T}\,\mathbf{g}\,\mathbf{\tilde{e}}_{i}=\ell_{i}^{2}\) for each edge \(\mathbf{\tilde{e}}_{i}\) in the face with length \(\ell_{i}\). Similarly, the discrete curvature tensor should encode the bending angles on each edge.
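The face-based quantities described above translate directly into code. The sketch below is our own minimal implementation (with the same edge-opposite-vertex labelling assumed): it computes a face's unit normal, area, and mid-edge normals, and the signed hinge angle between two adjacent faces.

```python
import numpy as np

def face_frame(Ri, Rj, Rk):
    """Unit normal, area, and in-plane mid-edge normals (t_i, t_j, t_k) of the
    counter-clockwise face [Ri, Rj, Rk]; edges are labelled by the opposite vertex."""
    e_i, e_j, e_k = Rk - Rj, Ri - Rk, Rj - Ri
    n = np.cross(e_i, e_j)                      # = e_j x e_k = e_k x e_i
    A = 0.5 * np.linalg.norm(n)
    n_hat = n / (2.0 * A)
    t = [np.cross(e, n_hat) for e in (e_i, e_j, e_k)]   # clockwise 90-degree rotations
    return n_hat, A, t

def signed_hinge_angle(n1, n2, e0):
    """Signed bending angle between unit normals n1, n2 of faces sharing edge e0,
    with e0 oriented counter-clockwise with respect to n1."""
    sign = np.sign(np.dot(np.cross(n1, n2), e0))
    return sign * np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

# two consistently oriented faces sharing the edge from (0,0,0) to (1,0,0)
n1, _, _ = face_frame(np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0]))
n2, _, _ = face_frame(np.array([0.0, 0, 0]), np.array([0.5, -1, 0.5]), np.array([1.0, 0, 0]))
print(signed_hinge_angle(n1, n2, np.array([1.0, 0, 0])))   # negative: normals tilt toward each other
```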
In terms of these representations, we define the discrete strain tensor \[\mathbf{\varepsilon}=\mathbf{\bar{g}}^{-1}\left(\mathbf{g}-\mathbf{\bar{g}}\right)=-\frac{1}{8\bar{A}^{2}}\,\sum_{(ijk)}\left[\ell_{i}^{2}-\ell_{j}^{2}-\ell_{k}^{2}-\left(L_{i}^{2}-L_{j}^{2}-L_{k}^{2}\right)\right]\,\vec{\mathbf{\Xi}}_{i}\otimes\vec{\mathbf{\Xi}}_{i},\] where the sum runs over cyclic permutations of \((ijk)\), \(\ell_{i}\) and \(L_{i}\) denote the physical and target lengths of edge \(i\), \(\bar{A}\) is the target area of the face, and \(\vec{\mathbf{\Xi}}_{i}\) is the mid-edge normal of edge \(i\) in the target configuration, so that \(|\vec{\mathbf{\Xi}}_{i}|=L_{i}\). Writing \(E_{i}\equiv\ell_{i}^{2}-\ell_{j}^{2}-\ell_{k}^{2}-\left(L_{i}^{2}-L_{j}^{2}-L_{k}^{2}\right)\) for the bracketed term, and letting \(\Phi(\theta_{i},\Theta_{i})\) denote the hinge term comparing the physical bending angle \(\theta_{i}\) on edge \(i\) to its target value \(\Theta_{i}\), the discrete elastic energy per unit thickness is \[\begin{split}\tilde{E}=E/h&=\frac{1}{2}\sum_{T}\bar{A}_{T}\left\{\frac{1}{4}\,\left[\nu\left(\sum_{i}\frac{-E_{i}}{8\bar{A}_{T}^{2}}L_{i}^{2}\right)^{2}+(1-\nu)\left(\sum_{i}\sum_{j}\frac{E_{i}\,E_{j}}{64\bar{A}_{T}^{4}}(\vec{\mathbf{\Xi}}_{i}\cdot\vec{\mathbf{\Xi}}_{j})^{2}\right)\right]\\ &\qquad+\frac{h^{2}}{12}\left[\nu\left(\sum_{i}\frac{\Phi(\theta_{i},\Theta_{i})L_{i}}{2\bar{A}_{T}}\right)^{2}+(1-\nu)\left(\sum_{i}\sum_{j}\frac{\Phi(\theta_{i},\Theta_{i})\,\Phi(\theta_{j},\Theta_{j})}{4\bar{A}_{T}^{2}L_{i}L_{j}}(\vec{\mathbf{\Xi}}_{i}\cdot\vec{\mathbf{\Xi}}_{j})^{2}\right)\right]\,\right\}\end{split} \tag{100}\] where the sum is over the faces \(T\) of the triangulation and we have set \(Y=1-\nu^{2}\) (\(Y\) is irrelevant to the computation of the equilibrium configuration since it simply rescales the energy). The terms \((\vec{\mathbf{\Xi}}_{i}\cdot\vec{\mathbf{\Xi}}_{j})\) can be determined entirely in terms of the target edge lengths. We minimize this energy using a custom-built quasi-Newton method with an L-BFGS Hessian approximation [42]. Strictly speaking, the Bonnet theorem tells us that it is necessary to specify both \(\mathbf{\bar{g}}(t)\) and \(\mathbf{\bar{b}}(t)\) to uniquely specify the surface up to a rigid motion [60]. For small \(h\), however, the bending energy is small relative to the stretching energy, i.e. \(E_{B}\ll E_{S}\). In this regime, the system's tendency to match \(\mathbf{g}(t)\) and \(\mathbf{\bar{g}}(t)\) will overwhelm its tendency to minimize the bending energy. The optimization of Eq. (100) essentially becomes a machinery for producing isometric embeddings of the target metric \(\mathbf{\bar{g}}(t)\) with \(E_{B}\) playing the role of a regularizer. This machinery for computing the embeddings of an instantaneous target geometry can also be used to generate embeddings of entire time courses of growth patterns. When the timescale of growth in the system (i.e. rate of cell division, etc.) is long compared to the timescale of mechanical relaxation, the tissue will always effectively remain in mechanical equilibrium. In this quasistatic regime, the physical configuration will always be a minimizer of the energy Eq. (100) given an instantaneous target geometry. Over a short time \(\Delta t\), the target geometry will change, i.e. \(\mathbf{\bar{g}}(t)\rightarrow\mathbf{\bar{g}}(t+\Delta t)\) and \(\mathbf{\bar{b}}(t)\rightarrow\mathbf{\bar{b}}(t+\Delta t)\). At each new time step, we minimize the new elastic energy using the previous time point's configuration as an initial guess. The result is a full time course of growth embedding the various target geometries. For our purposes, we chose to set \(\mathbf{\bar{b}}(t)\) by interpolating the angles between the supplied initial configuration and the optimal final configuration. However, numerical experiments setting \(\mathbf{\bar{b}}(t)=0\), i.e.
a plate-like target geometry, showed that time-dependent growth pattern results were not strongly sensitive to the choice of \(\mathbf{\bar{b}}(t)\) for sufficiently small \(\Delta t\). Larger steps may be possible, for instance, through the implementation of a scheme that calculates updates \(\mathbf{\bar{b}}(t)\rightarrow\mathbf{\bar{b}}(t+\Delta t)\) that are always compatible with the optimal target metric [61].

## Appendix C Analysis of growing appendage in _Parhyale hawaiensis_

The recording of the transgenic _Parhyale_ embryo with a construct for heat-inducible expression of a nuclear marker (H2B-mRFPruby) was generated using multi-view lightsheet fluorescence microscopy (LSFM) beginning 3 days after egg lay (AEL). More details regarding data acquisition and pre-processing can be found in [5]. Our analysis focused on a period of dramatic outgrowth in the T2 appendage from \(95-109\)h AEL and utilized tissue cartography methods to generate coarse-grained flow patterns of cells on the growing limb [62, 63]. Segmented nuclei locations from [5] were used as seed points to reconstruct the limb surfaces. Sparse nuclei locations were used to generate smoothed, upsampled point clouds corresponding to the tissue surfaces [64]. These point clouds were then triangulated using an advancing front surface reconstruction method [65]. Time-dependent surfaces were then mapped conformally into the unit disk using a custom implementation of the discrete Ricci flow [54]. The conformal degrees of freedom in the time-dependent parameterization were pinned by finding an optimal Möbius transformation that matched the neighborhood structure of nuclei locations at subsequent times without explicit reference to nuclei identity [66]. Once the data were pulled back into the plane, an updated tracking of the nuclei was performed in 2D using a custom-built MATLAB GUI, enabling the reconstruction of nuclear lineages and cell tracks. Subsequent time points were then registered using an algorithm that produces smooth quasi-conformal mappings subject to nontrivial landmark constraints [67]. The Beltrami coefficients from these instantaneous mappings were then smoothed spectrally by decomposing them onto a basis of the eigenvectors of the discrete Laplace-Beltrami operator defined on the 2D triangulations and keeping only the lowest five modes [57]. The full time course of material flow was then reconstructed by composing the infinitesimal mappings produced from the smooth Beltrami coefficients using our implementation of the BHF for mapping reconstruction [37].
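The spectral smoothing step described above has a compact generic implementation. The sketch below is our own illustration, not the authors' code: it substitutes a uniform graph Laplacian built from the mesh edges for the cotangent Laplace-Beltrami operator used in the paper, and it keeps only the lowest few eigenmodes of a per-vertex field such as a Beltrami coefficient.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def spectral_smooth(values, edges, n_vertices, n_modes=5):
    """Project a per-vertex field onto the n_modes lowest-frequency eigenvectors
    of a uniform graph Laplacian (a stand-in for the Laplace-Beltrami operator).

    values : (n_vertices,) real or complex array (e.g. Beltrami coefficients).
    edges  : (n_edges, 2) integer array, each undirected mesh edge listed once.
    """
    i, j = edges[:, 0], edges[:, 1]
    W = coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices))
    W = W + W.T                                      # symmetric adjacency matrix
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W
    _, U = eigsh(L.tocsc(), k=n_modes, which="SM")   # smallest eigenvalues = smooth modes
    return U @ (U.T @ values)                        # low-pass filtered field
```

In the pipeline above, the smoothed coefficients are then fed back into the BHF to rebuild a smooth time course of mappings.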
2303.08823
Wireless Sensor Networks anomaly detection using Machine Learning: A Survey
Wireless Sensor Networks (WSNs) have become increasingly valuable in various civil/military applications like industrial process control, civil engineering applications such as buildings structural strength monitoring, environmental monitoring, border intrusion, IoT (Internet of Things), and healthcare. However, the sensed data generated by WSNs is often noisy and unreliable, making it a challenge to detect and diagnose anomalies. Machine learning (ML) techniques have been widely used to address this problem by detecting and identifying unusual patterns in the sensed data. This survey paper provides an overview of the state of the art applications of ML techniques for data anomaly detection in WSN domains. We first introduce the characteristics of WSNs and the challenges of anomaly detection in WSNs. Then, we review various ML techniques such as supervised, unsupervised, and semi-supervised learning that have been applied to WSN data anomaly detection. We also compare different ML-based approaches and their performance evaluation metrics. Finally, we discuss open research challenges and future directions for applying ML techniques in WSNs sensed data anomaly detection.
Ahsnaul Haque, Md Naseef-Ur-Rahman Chowdhury, Hamdy Soliman, Mohammad Sahinur Hossen, Tanjim Fatima, Imtiaz Ahmed
2023-03-15T15:02:11Z
http://arxiv.org/abs/2303.08823v1
# Wireless Sensor Networks anomaly detection using Machine Learning: A Survey ###### Abstract Wireless Sensor Networks (WSNs) have become increasingly valuable in various civil/military applications like industrial process control, civil engineering applications such as buildings' structural strength monitoring, environmental monitoring, border intrusion, IoT (Internet of Things), and healthcare. However, the sensed data generated by WSNs is often noisy and unreliable, making it a challenge to detect and diagnose anomalies. Machine learning (ML) techniques have been widely used to address this problem by detecting and identifying unusual patterns in the sensed data. This survey paper provides an overview of the state-of-the-art applications of ML techniques for data anomaly detection in WSN domains. We first introduce the characteristics of WSNs and the challenges of anomaly detection in WSNs. Then, we review various ML techniques such as supervised, unsupervised, and semi-supervised learning that have been applied to WSN data anomaly detection. We also compare different ML-based approaches and their performance evaluation metrics. Finally, we discuss open research challenges and future directions for applying ML techniques in WSNs sensed data anomaly detection. Keywords: Wireless Sensor Network, Anomaly Detection, Machine Learning [16], Survey, Energy Efficiency, Hybrid Networks, Techniques, Algorithms, Data, Performance Metrics

## 1 Introduction

Wireless Sensor Networks (WSNs) are widely used in various types of applications, including environmental monitoring, Internet-of-Things (IoT), healthcare, security[18], and industrial control. Some WSNs[20] consist of numerous tiny sensors that are distributed in an area to collect data and send it to a base station. However, due to the resource-constrained nature of sensor nodes, WSNs[21] face several challenges, such as limited energy, computational power, and memory, making it difficult to make decisions and analyze data locally[1, 2, 31, 33]. Anomaly detection is a critical task in WSNs since it helps detect unusual events and abnormal behavior in the sensed data. Anomalies may indicate a malfunction in the system's sensors, equipment failure, or potential security threats, which need to be addressed immediately and appropriately. However, traditional rule-based anomaly detection techniques may not be suitable for WSNs since they require predefined rules that are challenging to design for complex WSN systems with high-dimensional data[3, 5]. ML has emerged as a promising technique for anomaly detection in WSNs. ML algorithms can learn from the sensor data and discover patterns that are indicative of normal and abnormal behavior, without the need for manual rule definition. Moreover, ML techniques can adapt to changes in the system and the environment and provide accurate and timely detection of anomalies, thus improving the efficiency and reliability of the WSNs[4]. A common ML-based technique is to use the Ensemble Learning method[11, 12], where multiple ML models are trained and tested, and the best-performing models are selected. This approach is not only effective in WSNs but is also widely used in many other anomaly/malware detection settings[11, 12].
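The ensemble idea mentioned above can be illustrated with a few off-the-shelf detectors. The following sketch is ours (the synthetic temperature readings, the particular detectors, and the majority vote are illustrative assumptions, not a method from the surveyed papers): several unsupervised detectors are trained on normal readings, and a test reading is flagged only when most of them agree.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = rng.normal(25.0, 1.0, size=(500, 1))            # nominal temperature readings
faulty = rng.normal(40.0, 5.0, size=(10, 1))             # injected anomalous readings
X_train, X_test = normal[:400], np.vstack([normal[400:], faulty])

detectors = [
    IsolationForest(random_state=0).fit(X_train),
    OneClassSVM(nu=0.05, gamma="scale").fit(X_train),
    LocalOutlierFactor(novelty=True).fit(X_train),
]
votes = np.stack([d.predict(X_test) for d in detectors])  # +1 = normal, -1 = anomaly
is_anomaly = (votes == -1).sum(axis=0) >= 2               # simple majority vote
print(f"{is_anomaly.sum()} of {len(X_test)} test readings flagged as anomalous")
```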
Given the growing interest in using ML for anomaly detection in WSNs, it is essential, for life-saving critical applications, to survey the state-of-the-art techniques and identify the challenges and opportunities in this field. This paper aims to provide a comprehensive survey of ML-based anomaly detection methods in WSNs. The survey covers various ML algorithms, including supervised[3], unsupervised[13], semi-supervised[5], and deep learning[7], and discusses their advantages in modeling sensed data anomalies and limitations in WSNs. The paper also discusses the challenges of deploying ML-based anomaly detection systems in WSNs, such as limited resources, collected data heterogeneity, and privacy/security concerns, and suggests potential solutions to address these challenges. Overall, this survey can serve as a useful reference for researchers and practitioners working on anomaly detection in WSNs using ML techniques[6-9, 20-24, 27, 28, 30-32, 34-38, 40, 41]. The rest of this paper is organized as follows. In the second section, we will discuss the most recent criteria for classifying anomalies. The following section will discuss the anomaly detection techniques used to date. In Section 4, we will compare the different anomaly detection approaches for WSNs. In the final section, we conclude by shedding some light on the most useful ML models to be deployed in the field of anomaly detection of collected WSN sensed data.

## 2 Classification Criterion of Anomalies

In recent years, WSNs have emerged as a captivating research field due to their ability to monitor vast regions, reach remote and hazardous locations, respond in real time, and their relative simplicity of use[34, 36]. This technology has opened up a whole new world of possibilities for scientists. In addition to the aforementioned important civil applications, WSNs have already been utilized in many other various military activities such as surveillance, target recognition, and environmental monitoring. These sensor networks typically consist of numerous small, inexpensive nodes distributed across a wide area, as shown in Figure 1.

Figure 1: An example architecture of WSNs[2]

WSNs not only have the ability to coordinate their activities but also to communicate their results to end users, making them revolutionary in data collection across various domains[23]. However, the unique research and engineering challenges that arise during the deployment and design of these networks must be considered, as well as the limitations of their software development[37, 42]. These limitations include their large intended deployment area, communication obstacles, random and hazardous deployment, high component failure rates, and limited computational and energy/battery power. To ensure better critical decision-making, it is crucial to maintain the quality of collected sensor data. Although cryptographic and key management techniques are used to protect the security of sensor nodes from intruder attacks, they are not sufficient to ensure the reliability and integrity of their sensed data[35, 41]. Thus, outlier detection techniques have been developed to identify any abnormal behavior in sensor data streams. WSNs are particularly susceptible to outliers due to several factors. Such factors include their use of weak and vulnerable sensors to collect data in real-world applications and their battery-powered nature.
Other factors include the potential accumulation of errors when numerous sensors are used over wireless media, and the vulnerability of unguarded sensors in critical security and military applications to manipulation by intruders. Therefore, outlier detection is an integral part of any perilous data processing task that utilizes WSNs.

Fig. 2: Identification of Anomalies in WSN [22]

The following subsections outline the fundamental concepts, sources, and requirements of outlier detection in WSNs[38]. In WSNs, anomalies refer to any unusual, abnormal, or unexpected behavior or events in the collected sensor data stream that deviate from the expected or normal patterns. Anomalies can be caused by various factors, such as faulty sensors, environmental changes, malicious attacks, or random fluctuations of the sensed data, due to external random/asynchronous events in the terrain. Anomalies can be categorized into three major types, namely: noise, event, and attack anomaly sources. We depict the classification of anomalies in Figure 3. Noise or error anomalies in WSNs refer to measurement inaccuracies or data sensed from sources like faulty or malfunctioning sensors[28]. Outliers resulting from errors can occur frequently and are typically represented by a data point that differs significantly from the rest of the collected dataset. They can arise due to various environmental factors, including bad deployment due to difficulties and harsh conditions [29]. To ensure data quality, detected noisy and erroneous data should be eliminated or corrected, if possible[39]. Event anomaly sources in WSNs are defined as sudden changes in the real-world state, such as fires [31], earthquakes, weather changes, and air pollution [30].

Fig. 3: An example of anomalies in a 2-dimensional dataset[5]

Outliers caused by such anomaly sources tend to have a significantly lower probability of occurrence compared to those caused by errors. Such outliers typically last for a relatively long period of time and can alter the historical pattern of sensor data[24]. Removing event outliers from the dataset can lead to the loss of crucial information about the events [32]. Outliers that are similar in size to random errors can only be identified through the application of outlier tests. In WSNs, malicious attack anomaly sources are associated with network security, and researchers in [33] have addressed this issue. Due to the unattended nature of the deployment of the sensors, intruders can gain access to and control, damage, and/or hijack specific nodes to launch attacks, which can deplete the network's limited resources or inject false and corrupted data. Malicious attacks can be broadly classified into two categories: passive and active attacks. Passive attacks involve obtaining data exchanged in the network without interrupting communication, while active attacks aim to disrupt the normal functioning of the network[40].

## 3 Anomaly Detection Techniques in WSN

### Classification of the Approaches

There are three main categories of ML[14] approaches for anomaly detection in WSNs, namely supervised, unsupervised, and semi-supervised learning. The appropriate category should be chosen based on the WSN anomaly detection task's specific requirements, as each category has its own advantages and disadvantages. Based on the characteristics of the dataset and the user's needs, different ML approaches can be used.
Figure 4 illustrates the ML approaches used to detect anomalies in WSNs.

Figure 4: ML Approaches to detect anomalies in WSN[22]

SVM, KNN, Random Forest, Decision Tree, ANN, K-means Clustering, Density-Based Clustering, Auto-Encoders, Reinforcement Learning, Self-Training, Co-Training, and Label Propagation are popular ML algorithms used for anomaly detection in WSNs[1, 4, 5, 7]. Each algorithm has its own strengths and weaknesses depending on the specific use case. Next, we will shed some light on each of these ML algorithms to justify their utilization in particular applications. In the next three sub-sections, we will survey the detailed literature in each of the aforementioned training/learning categories: supervised, unsupervised, and semi-supervised.

### Supervised Learning Approaches

Supervised learning[3, 5, 7, 8, 9] algorithms use labeled training data, together with teacher supervision, to learn the normal network behavior and later identify anomalies based on deviations from the learned normal behavior. When labeled data for training are available and the network's normal behavior class is well-defined, this strategy is appropriate. Next, we will briefly introduce some of the prominent supervised ML models utilized in WSN data classification for anomaly detection and explore some related literature.

#### 3.2.1 A brief Introduction for Supervised ML models

SVM[15] is a statistical-based[17] approach that is effective in handling high-dimensional data and can separate classes with large margins, but it requires tuning of hyperparameters and can be computationally expensive[3, 8]. KNN[22] is a clustering-based method that is useful for detecting local anomalies and is simple to implement, but it is sensitive to the choice of distance metric and can be computationally expensive. Random Forests[9, 10] is an ML-based approach that is useful for handling noisy data, can identify feature importance, and is parallelizable, but it has limited interpretability and can overfit the data. ANN[9] is another ML-based approach that is good at handling non-linear relationships, can handle complex data, and is parallelizable, but requires careful selection of architecture and hyperparameters and can be computationally expensive. Decision Tree[22] is an ML-based approach that is useful for handling noisy data, is easy to interpret, and can handle missing values, but it is prone to overfitting and sensitive to the choice of split criteria. Ultimately, the choice of algorithm depends on the specific characteristics of the data and the requirements of the application.

#### 3.2.2 Literature work in Supervised Learning

In [1], the authors reviewed the challenges faced by WSNs and proposed the use of ML techniques to improve their energy efficiency and detect anomalies in their collected data. The authors provide a comprehensive literature review of ML algorithms that have been applied to WSNs, including K-Means, ANN, Decision Trees, SVM[19], and Bayesian Networks[22, 2, 5, 13]. The authors present two case studies that demonstrate the effectiveness of ML techniques for energy efficiency and anomaly detection in WSNs. Overall, the paper contributes to the growing interest in research on the use of ML for WSNs and highlights the potential of these techniques for improving the performance and reliability of WSNs.
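For readers unfamiliar with the supervised setting described in Sec. 3.2.1, the short sketch below shows the basic workflow on synthetic labelled readings. The feature layout, the labels, and the choice of a Random Forest are our own illustrative assumptions rather than the setup of any surveyed paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
# hypothetical labelled readings: [temperature, humidity]; label 1 marks anomalies
normal = np.column_stack([rng.normal(25, 1, 900), rng.normal(50, 5, 900)])
anomalous = np.column_stack([rng.normal(45, 8, 100), rng.normal(20, 10, 100)])
X = np.vstack([normal, anomalous])
y = np.concatenate([np.zeros(900), np.ones(100)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```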
The authors in [4] proposed a method for detecting spatial anomalies in sensor networks using neighborhood information. The authors provide a literature review of related work in anomaly detection in WSNs and highlight the challenges of detecting spatial anomalies due to the complexity of spatial data. The paper presents the proposed method, which uses neighborhood information to identify anomalies in spatial data (consisting of latitude and longitude). The authors demonstrate the effectiveness of their method through experiments conducted on simulated and real-world data sets. Overall, the paper contributes to the growing interest in using sensor networks for anomaly detection and highlights the potential of using neighborhood information to improve such systems' accuracy and reliability. The authors in [5] proposed a new approach to anomaly detection in WSNs using support vector data description (SVDD). The authors provide a literature review of related work in the field of anomaly detection in WSNs and highlight the limitations of existing approaches, such as high false alarm rates and low detection accuracy. The paper presents the proposed method, which uses SVDD to construct a boundary around normal data and detect anomalies outside this boundary. The authors demonstrate the effectiveness of their method through experiments conducted on real-world data sets, showing that their method outperforms existing methods in terms of detection accuracy and false alarm rate. Overall, the paper contributes to the field of smart anomaly detection in WSNs and highlights the potential of using SVDD to improve such systems' accuracy and reliability. The authors in [7] proposed a data-driven approach for hyperparameter optimization of one-class SVMs for anomaly detection in WSNs. The authors provide a literature review of related work on anomaly detection in WSNs and highlight the importance of hyperparameter optimization for improving the accuracy and efficiency of SVM-based methods. The paper presents the proposed method, which uses a genetic algorithm to optimize the hyperparameters of the one-class SVM model. The authors demonstrate the effectiveness of their method through experiments conducted on real-world data sets, showing that their method outperforms existing methods in terms of detection accuracy and computational efficiency. Overall, the paper contributes to smart anomaly detection in WSNs and highlights the potential of using data-driven approaches for hyperparameter optimization of SVM-based methods. The authors in [3] proposed a new algorithm for anomaly detection in WSNs using a combination of density-based spatial clustering of applications with noise (DBSCAN) and SVM. The authors provide a literature review of related work on anomaly detection in WSNs and highlight the challenges faced in implementing such systems, including limited computational resources, communication bandwidth, and energy constraints. The paper presents the proposed DBSCAN algorithm and demonstrates its effectiveness in detecting anomalies in WSNs using simulations. Overall, the paper contributes to the smart use of ML techniques for anomaly detection in WSNs and highlights the potential of combining DBSCAN and SVM for improving the accuracy and reliability of such systems.
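A one-class SVM provides a convenient stand-in for the SVDD-style boundary described above, and a small parameter sweep echoes the hyperparameter-tuning theme of [7]. The sketch below is purely illustrative: the synthetic data, the candidate values of \(\nu\) and \(\gamma\), and the use of a labelled test set for scoring (a proper study would tune on a separate validation split) are all our assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
train_normal = rng.normal(0.0, 1.0, size=(500, 2))            # normal readings only
test = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),        # 100 normal readings
                  rng.uniform(4.0, 8.0, size=(20, 2))])       # 20 anomalous readings
labels = np.array([1] * 100 + [-1] * 20)                      # +1 = normal, -1 = anomaly

best = None
for nu in (0.01, 0.05, 0.1):
    for gamma in (0.1, 0.5, 1.0):
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train_normal)
        acc = (model.predict(test) == labels).mean()
        if best is None or acc > best[0]:
            best = (acc, nu, gamma)
print("best accuracy %.2f with nu=%.2f, gamma=%.1f" % best)
```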
The authors in [8] presented a comprehensive literature review of existing intrusion detection systems (IDSs) for WSNs and investigated the applicability of computational intelligence techniques for enhancing the performance of IDSs. The authors discussed the challenges and requirements of intrusion detection in WSNs, including limited resources, wireless communication, and distributed deployment. Then, they reviewed the use of various soft computing and computational intelligence techniques for IDSs, such as ANNs, decision trees, fuzzy logic, and genetic algorithms. The paper also provides an overview of existing benchmark datasets and evaluation metrics for IDSs in WSNs. Overall, the paper highlights the potential of using soft computing techniques for improving the accuracy of IDSs in WSNs. The authors in [9] proposed a distributed anomaly detection scheme based on autoencoder neural networks for WSNs. The authors aim to address the limitations of existing centralized anomaly detection methods that may not be suitable for large-scale WSNs due to bandwidth constraints, power consumption, and privacy concerns. The proposed scheme involves training autoencoder neural networks at each sensor node to learn the normal behavior of the local environment and detect anomalies based on reconstruction errors. The paper also discusses the implementation of the proposed scheme on a testbed and evaluates its performance using various metrics. The results show that the proposed scheme achieves high accuracy in detecting anomalies while maintaining low communication overhead and energy consumption. Hence, such an approach will be suitable for large-scale WSNs in IoT applications. Overall, the paper provides a valuable contribution to the field of distributed anomaly detection in WSNs and demonstrates the potential of using autoencoder neural networks for improving the efficiency and effectiveness of anomaly detection in IoT applications. The authors in [10] proposed a deep learning-based approach for developing middleware for WSNs. The authors aim to address the limitations of existing middleware approaches that may not be able to handle the complexity and heterogeneity of WSNs, leading to low accuracy and reliability. The proposed approach involves training a deep neural network (DNN) to predict the behavior of the WSN based on historical data and using the predictions to adjust the middleware parameters in real time. The paper also discusses the implementation of the proposed approach on a testbed and evaluates its performance using various metrics. The results show that the proposed approach achieves high accuracy and reliability in adapting the middleware to the dynamic behavior of the WSN, making it suitable for various WSN applications. Overall, the paper provides a valuable contribution to the field of WSN middleware development and demonstrates the potential of using deep learning for improving the accuracy and reliability of WSN middleware.

### 3.3 Unsupervised Learning Approaches

The assumption that anomalies are data points that are significantly distinct from most of the data points serves as the foundation for unsupervised learning algorithms[6, 25, 26, 27]. Hence, with no need for a supervising teacher, the ML model will cluster anomalous data separately from normal data, even with unlabeled data, based on the data quality. When labeled data is unavailable or the network's normal behavior is unclear, this method is appropriate.
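The unsupervised assumption stated above, that anomalies sit far from the bulk of the data, can be demonstrated with a simple clustering-based detector in the spirit of the k-means approaches surveyed later. The data, the number of clusters, and the three-sigma cut-off below are our own illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
readings = np.vstack([rng.normal(25, 1, size=(480, 2)),    # two normal operating modes
                      rng.normal(30, 1, size=(480, 2)),
                      rng.normal(45, 2, size=(10, 2))])    # a few faulty readings

km = KMeans(n_clusters=2, n_init=10, random_state=3).fit(readings)
dist = np.linalg.norm(readings - km.cluster_centers_[km.labels_], axis=1)
flagged = np.where(dist > dist.mean() + 3 * dist.std())[0]   # far from every centroid
print(f"flagged {len(flagged)} of {len(readings)} readings as outliers")
```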
Next, we will briefly introduce some of the prominent unsupervised ML models utilized in WSN data classification for anomaly detection and explore some related literature.

#### 3.3.1 Brief Introduction to unsupervised ML models

K-means clustering[6] is a popular unsupervised clustering method that is used for identifying clusters of data points with similar characteristics[6, 26, 27]. It is useful for detecting global anomalies and can handle large datasets, but it requires the selection of the appropriate number of clusters and can be sensitive to outliers. Density-based clustering, on the other hand, is useful for detecting local anomalies and does not require specifying the number of clusters beforehand[2, 25]. However, it can be computationally expensive and sensitive to the choice of parameters. Auto-encoders[26] are a type of neural network that is used for unsupervised feature learning and anomaly detection. They are useful for detecting non-linear relationships in the data and can handle complex data, but they require a careful selection of architecture and hyper-parameters and can be computationally expensive. Ultimately, the choice of technique depends on the specific characteristics of the data and the requirements of the application[25, 26]. Density-based clustering[26] is a data clustering technique that identifies clusters of points based on the density of their distribution. It is particularly useful for identifying clusters of arbitrary shape and can handle noise and outliers effectively.

#### 3.3.2 Literature work in Unsupervised Learning

In [2], the authors presented a method for detecting anomalies in a WSN used in a smart home system. The authors propose using statistical analysis and ML techniques to detect abnormal behavior in the network, which can be used to identify security breaches, faults, and other anomalies. The paper provides a literature review of related work in the field of anomaly detection in WSNs and highlights the challenges of implementing such systems in smart home environments. The authors present the results of experiments conducted on a prototype smart home system, which demonstrate the effectiveness of their proposed method for detecting anomalies in WSNs. Overall, the paper contributes to the growing interest in the use of WSNs in smart home environments and highlights the potential of ML techniques for improving the security and reliability of such systems. The authors in [6] proposed an attention-based multi-filter long short-term memory (AMF-LSTM) deep learning strategy for network anomaly detection. The authors present a review of related research on anomaly detection in networks and draw attention to the drawbacks of existing approaches, such as the need for domain knowledge and the inability to identify anomalies that were not previously known, hence the use of unsupervised learning. The proposed approach, which makes use of AMF-LSTM to capture the temporal and spatial correlations that exist in network traffic, is presented in the paper, along with a focus on the essential characteristics for anomaly detection. Experiments conducted by the authors on real-world data sets to demonstrate the efficacy of their method show that, in terms of detection accuracy and false alarm rate, it outperforms its peers.
In general, the paper highlights the potential of using AMF-LSTM to improve the accuracy and dependability of such systems and adds to the growing interest in research on the use of deep learning techniques for anomaly detection in networks. The authors in [25] proposed a new approach for detecting outliers in WSNs using k-means clustering and lightweight methods. They argue that outlier detection is essential for improving network performance and reliability, and that existing methods have limitations in terms of computational complexity and memory requirements. The authors introduce a two-phase approach that divides sensor nodes into clusters based on similarity and then applies lightweight outlier detection methods to each cluster. To evaluate the effectiveness of the proposed approach, the authors conducted experiments using real-world data from a WSN. The results showed that the proposed approach outperformed existing outlier detection methods in terms of both detection accuracy and computational efficiency. The authors in [26] presented two novel approaches for detecting outliers in WSNs using density-based methods. The authors argue that density-based methods have advantages over other outlier detection methods in terms of their ability to handle high-dimensional data and non-linear relationships. The proposed approaches use dif

### 3.4 Semi-Supervised Learning Approaches

In semi-supervised learning[23, 24, 30], a mix of labeled and unlabeled data is used to learn normal network behavior. When labeled data is limited or inaccurate and the network behavior is complex and unclear, this strategy is appropriate. Deep Belief Networks (DBNs) and Generative Adversarial Networks (GANs) are two examples of semi-supervised learning algorithms utilized in WSN anomaly detection[2]. Reinforcement learning, label propagation, self-training, and co-training are popular techniques used for anomaly detection in WSNs with semi-supervised learning.

#### 3.4.1 Brief Introduction to Semi-Supervised ML models

Reinforcement learning[23] is a type of ML that is useful for sequential decision-making problems where an agent interacts with an environment[23, 24]. It is useful for detecting anomalies in dynamic environments where the distribution of data changes over time, but it can be computationally expensive and requires careful selection of the appropriate reward function. Label propagation[29] is a semi-supervised learning approach that is useful for propagating labels in a network[23, 29, 30]. It is useful for detecting anomalies in large-scale networks and can handle missing data, but it requires a careful selection of parameters and can be sensitive to the choice of initialization. Self-training[24] is another semi-supervised learning approach that is useful for leveraging unlabeled data to improve the performance of a classifier. It can also improve the accuracy of a classifier, but it requires careful selection of the appropriate threshold for adding new labeled data. Co-training[28] is a semi-supervised learning approach that is useful for learning from multiple views of data[24, 28]. It is useful for detecting anomalies in multi-modal data and can improve the accuracy of a classifier, but it requires careful selection of the appropriate number of views and can be sensitive to the choice of features.
Ultimately, the choice of technique depends on the specific characteristics of the data and the requirements of the application[42].

#### 3.4.2 Literature work in Semi-Supervised Learning

The authors in [23] proposed a deep actor-critic reinforcement learning-based approach for anomaly detection in WSNs. The authors utilize an actor-critic model to learn the mapping between the current state and the optimal action to take for the anomaly detection task. They introduce a reward function that considers the detection accuracy, false positives, and false negatives to train the actor network. Additionally, the authors incorporate a replay buffer and target networks to stabilize the learning process. Their experimental results show that the proposed approach outperforms several state-of-the-art anomaly detection methods in terms of detection accuracy and false positives. The authors in [24] proposed a novel anomaly detection method called AESMOTE, which combines adversarial reinforcement learning (ARL) and the synthetic minority over-sampling technique (SMOTE). AESMOTE enables the system to learn the underlying distribution of normal and anomalous data, and it leverages SMOTE to generate synthetic samples that represent anomalies that may not have been observed in the original dataset. The proposed method is evaluated on multiple benchmark datasets, and the experimental results show that AESMOTE outperforms other state-of-the-art methods, achieving high detection rates with low false positive rates. The authors in [28] presented a novel approach for video anomaly detection using a self-trained prediction model and a novel anomaly score mechanism, which can also be useful for WSN anomaly detection. The authors argue that existing methods for anomaly detection have limitations in terms of their ability to handle complex scenes and learn the underlying patterns of anomalies. The proposed approach trains a deep neural network on normal videos to learn the patterns of normal behavior and then uses it to predict future frames. An anomaly score mechanism is introduced to measure the deviation between predicted and actual frames, enabling the identification of anomalous events. The experimental results demonstrate the effectiveness of the proposed approach on both synthetic and real-world datasets, showing its potential for anomaly detection in various applications. The authors in [29] proposed a novel approach for intrusion detection using semi-supervised co-training and active learning techniques. The authors argue that existing intrusion detection methods have limitations in terms of their ability to handle high-dimensional and diverse data, leading to low detection accuracy and high false alarm rates. The proposed approach utilizes multiple views of data to capture different aspects of intrusion behavior and uses semi-supervised co-training and active learning to improve the performance of the intrusion detection system. The experimental results demonstrate the effectiveness of the proposed approach in terms of detection accuracy and false alarm rate reduction, showing its potential for intrusion detection in various applications. The authors in [30] proposed a novel approach for group anomaly detection in large-scale networks using adaptive label propagation. The authors argue that existing methods for group anomaly detection have limitations in terms of their ability to handle large-scale and dynamic network data, leading to low detection accuracy
and scalability issues. The proposed approach utilizes label propagation to propagate labels in the network and identify groups of anomalous nodes, while adaptively adjusting the propagation process based on the local structure of the network. The experimental results demonstrate the effectiveness of the proposed approach in terms of detection accuracy and scalability, showing its potential for group anomaly detection in various applications, such as social networks and transportation systems.

## 4 Conclusion

A growing area of research in recent years has been the application of ML to the detection of anomalies in WSNs. We surveyed traditional and the most recent ML models utilized in WSN data anomaly detection. The choice of any such model is determined by the network's specific application and requirement factors. Hence, numerous papers have investigated the impact of such factors on the deployment of ML in this field. In general, this survey has demonstrated the significance and efficacy of utilizing ML for the purpose of detecting anomalies in WSNs. However, there is still a need for additional research in this area, particularly in creating algorithms that are more effective and efficient to be used on large and complex networks. In addition, more in-depth evaluations of the various methods employing real-world datasets are required to comprehend their performance and limitations.
2301.10815
Human-machine Hierarchical Networks for Decision Making under Byzantine Attacks
This paper proposes a belief-updating scheme in a human-machine collaborative decision-making network to combat Byzantine attacks. A hierarchical framework is used to realize the network where local decisions from physical sensors act as reference decisions to improve the quality of human sensor decisions. During the decision-making process, the belief that each physical sensor is malicious is updated. The case when humans have side information available is investigated, and its impact is analyzed. Simulation results substantiate that the proposed scheme can significantly improve the quality of human sensor decisions, even when most physical sensors are malicious. Moreover, the performance of the proposed method does not necessarily depend on the knowledge of the actual fraction of malicious physical sensors. Consequently, the proposed scheme can effectively defend against Byzantine attacks and improve the quality of human sensors' decisions so that the performance of the human-machine collaborative system is enhanced.
Chen Quan, Baocheng Geng, Yunghsiang S. Han, Pramod K. Varshney
2023-01-25T20:37:22Z
http://arxiv.org/abs/2301.10815v1
# Human-machine Hierarchical Networks for Decision Making under Byzantine Attacks ###### Abstract This paper proposes a belief-updating scheme in a human-machine collaborative decision-making network to combat Byzantine attacks. A hierarchical framework is used to realize the network where local decisions from physical sensors act as reference decisions to improve the quality of human sensor decisions. During the decision-making process, the belief that each physical sensor is malicious is updated. The case when humans have side information available is investigated, and its impact is analyzed. Simulation results substantiate that the proposed scheme can significantly improve the quality of human sensor decisions, even when most physical sensors are malicious. Moreover, the performance of the proposed method does not necessarily depend on the knowledge of the actual fraction of malicious physical sensors. Consequently, the proposed scheme can effectively defend against Byzantine attacks and improve the quality of human sensors' decisions so that the performance of the human-machine collaborative system is enhanced. ## I Introduction In high stake scenarios where human lives and assets are at risk, automatic physical sensor-only decision-making may not be sufficient [1, 2, 3]. Further, in some circumstances, such as remote sensing and emergency access systems, humans may possess additional side information in addition to the common observations available from both physical sensors and humans. Thus, it may be necessary to incorporate humans in decision-making, intelligence gathering, and decision control. The emerging human-machine inference networks aim to combine humans' cognitive strength and sensors' sensing capabilities to improve system performance and enhance situational awareness. Unlike physical sensors that can be programmed to operate with fixed parameters, human behavior and decisions are governed by psychological processes which are quite complex and uncertain. Hence, traditional signal processing and fusion schemes can not be adopted directly for integrating sensor measurements with human inputs. It is imperative to construct a framework to capture attributes associated with human-based sources of information so that they can be fused with data from physical sensors. There have been studies that employ statistical signal processing to address human-related factors in human-machine collaborative decision making [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. For instance, the authors of [4, 5] studied decision fusion performance when the individual human agents use different thresholds modeled as random variables to make local decisions regarding a given phenomenon of interest (PoI). The authors in [3] proposed a hybrid system that consists of multiple human sub-populations, with the thresholds of each sub-population characterized by non-identically distributed random variables and a limited number of machines (physical sensors) whose exact values of thresholds are known. For such a hybrid system, they derived the asymptotic performance at the fusion center in terms of Chernoff information. The authors in [1, 2] showed that adding human inputs may or may not improve the overall performance of human-sensor networks, and they derived the conditions under which performance is improved. Furthermore, collaborative decision-making in multi-agent systems was investigated when the rationality of participating humans is modeled using prospect theory [6, 7, 8, 9]. 
To a large extent, the literature on human-machine collaborative networks has not considered the distributed nature and the openness of wireless networks in which the physical sensors deployed in the network are low-cost, insecure, and vulnerable to various attacks, e.g., jamming [16], wiretap, spoofing [16, 17, 18, 19] and Byzantine attacks [20, 21, 22, 23, 24, 25]. In this paper, we are interested in Byzantine attacks, where physical sensors in the network might be compromised and send falsified data to the fusion center (FC). In contrast to most existing work, this paper aims to construct robust human-machine collaborative decision-making systems. We consider the general scenario where some sensors in the network are compromised by adversaries (Byzantines) so that they send falsified data to human agents. A belief updating and reputation-based scheme, where human agents and physical sensors interact with each other in decision-making, is proposed to mitigate the effect of Byzantine attacks. The proposed scheme consists of three parts: belief updating at human agents, decision-making at the FC, and reputation updating at the FC. In the belief updating part, the human agents make their local decisions based on their observations regarding the PoI and the decisions received from the physical sensors over a short time window. Within this short window, the human agents update their beliefs of the physical sensors' behavioral identities and further update their likelihood ratios (LRs) about the PoI. The belief updating phase involves collecting information from human agents and physical sensors to contribute to the decision-making at the FC. However, the belief-updating processes at the human agents based on short-term information, i.e., local decisions made by the sensors, may only reflect sensors' behavior over a short period. Consequently, the reputations of physical sensors are also updated over time at the FC to assist in the identification of Byzantine sensors and in mitigating their impact during the decision-making process. Moreover, we study under which conditions human agents can improve the quality of their decisions by using their side information if available. Our simulation results show that the proposed scheme can effectively defend against Byzantine attacks and enhance the quality of human agents' decisions. ## II System Model and the Proposed Scheme In this section, we consider a network model consisting of one FC, \(M\) human agents (human sensors), and \(N\) physical sensors, all of which make threshold-based binary decisions based on independent observations regarding the PoI. Unlike physical sensors, which employ deterministic thresholds, human sensors are assumed to use random thresholds to make decisions, which account for humans' cognitive biases. We also assume that the human agents have a similar background, e.g., culture, education level, and experience. 1 To account for the similar background they are assumed to have, it is reasonable to assume that a known probability distribution characterizes the random thresholds used by human sensors in this work. The thresholds used by the physical sensors are assumed to be the same and deterministic, which are \(\mathbf{\tau}=[\tau_{1},\ldots,\tau_{N}]^{T}\). The thresholds used by the human sensors are denoted by \(\mathbf{\xi}=[\xi_{1},\ldots,\xi_{M}]^{T}\) and they are independent identically distributed (i.i.d.) 
random variables where \(\xi_{i}\) follows a probability density function (pdf) \(f(\xi)\) for \(i=1,\ldots,M\). In this work, we assume that all the human sensors are honest and put in their best effort to make decisions. We also assume that a fraction \(\alpha\) of the \(N\) physical sensors are Byzantine nodes and the FC is unaware of the identity of Byzantine nodes in the network. Hence, each sensor has probability \(\alpha\) of being a Byzantine node. As a result of the cognitive biases present in human sensors, some of them might perform worse than others when detecting the PoI. We utilize all useful information from the decisions coming from all the sensors (including physical and human sensors) in the network by employing a human-machine network that is constructed hierarchically, and a belief updating scheme is proposed. Footnote 1: According to studies on human behavior [3, 26, 27, 28], different backgrounds have a profound effect on a person's decision-making process, the quality of decisions, as well as the ability to make decisions. To account for the diversity of human populations, we can assume that humans with different backgrounds use random thresholds following different distributions. In contrast, random thresholds used by humans with the same background follow the same distribution. ### _Belief-updating scheme_ The system model is shown in Fig. 1, where a hierarchical framework is established. All human agents are connected to a small set of physical sensors. Each human agent makes local decisions based on its raw observations and then updates its belief regarding the behavioral identity of the connected physical sensors and its LR based on the local decisions coming from the connected physical sensors during the time interval \((nT,(n+1)T]\) for \(n=0,1,\ldots\). _Remark 1:_ Note that the behavioral identity and the LR are updated at \(nT+1\), \(nT+2,\ldots\), until \((n+1)T\). The human sensor uses this updated information to make a decision at time \((n+1)T\). In general, the human sensors collect information from the connected reference physical sensors to update the LR during \((nT,(n+1)T]\). We assume that \(T\) is not large so that the true underlying hypothesis does not change during \((nT,(n+1)T]\). At each time step \(t=T,2T,3T,\ldots\), the final decisions made by the FC regarding the presence of the PoI are based on the decisions received from the humans. Let \(y_{i}^{t}\) and \(z_{m}^{t}\) denote the observation of sensor \(i\) and human \(m\) at time \(t\), respectively.2 The LR of physical sensor \(i\in\{1,\ldots,N\}\) and human agent \(m\in\{1,\ldots,M\}\) at time \(t\) are given as \(L_{S,i}^{t}=\frac{f(y_{i}^{t}|\mathcal{H}_{1})}{f(y_{i}^{t}|\mathcal{H}_{0})}\) and \(L_{H,m}^{t}=\frac{f(z_{m}^{t}|\mathcal{H}_{1})}{f(z_{m}^{t}|\mathcal{H}_{0})}\), respectively, where hypothesis \(\mathcal{H}_{1}\) indicates the presence of the PoI and hypothesis \(\mathcal{H}_{0}\) indicates the absence of the PoI. Thus, the decision rule of the physical sensor \(i\) at time step \(t\) is given by \[v_{i}^{t}=\left\{\begin{array}{ll}1&\quad L_{S,i}^{t}\geq\tau_{i}\\ 0&\quad otherwise\end{array}\right. \tag{1}\] Here, we assume that identical physical sensors are deployed, and identical thresholds are utilized at the sensors. Therefore, we have \(P_{d,i}=P_{d}\) and \(P_{f,i}=P_{f}\) for \(i=1,2,\ldots,N\). 
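To make the threshold test in (1) concrete, the following minimal sketch draws Gaussian observations (the observation model later assumed in Section III) and applies the likelihood-ratio test with a common threshold; the helper names and specific values are illustrative assumptions and not part of the scheme itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative observation model (cf. Sec. III): y | H_h ~ N(mu_h, sigma^2), h = 0, 1
mu = {0: 0.0, 1: 4.0}
sigma = np.sqrt(2.0)
tau = 2.0  # common deterministic threshold tau_i of the physical sensors

def gaussian_pdf(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def sensor_decision(y):
    """Decision rule (1): v = 1 iff the likelihood ratio f(y|H1)/f(y|H0) >= tau."""
    lr = gaussian_pdf(y, mu[1], sigma) / gaussian_pdf(y, mu[0], sigma)
    return int(lr >= tau)

# One observation per sensor under H1 and the resulting local decisions v_i^t
N = 60
y = rng.normal(mu[1], sigma, size=N)
v = np.array([sensor_decision(yi) for yi in y])
print("fraction of sensors deciding H1:", v.mean())
```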
Let \(u_{i,m}^{t}\) denote the local decision sent by physical sensor \(i\) to the connected human agent \(m\) at time \(t\), where \(\mathcal{M}_{m}\) denotes the set of physical sensors connected to human agent \(m\) and \(i\in\mathcal{M}_{m}\). If sensor \(i\) is malicious, i.e., \(i=B\), we assume that \(u_{i,m}^{t}=1-v_{i}^{t}\); if it is honest, i.e., \(i=H\), we assume \(u_{i,m}^{t}=v_{i}^{t}\). The decision rule of the human agent \(m\) based on its raw observation at time step \(t\) is given by \[b_{m}^{t}=\left\{\begin{array}{ll}1&\quad L_{H,m}^{t}\geq\xi_{m}\\ 0&\quad otherwise\end{array}\right. \tag{2}\] Some key notations used in this paper are listed in Table I for the convenience of readers. The belief-updating and decision-making process at each human sensor during any time interval \((nT,(n+1)T]\) proceeds as follows for \(n=0,1,\ldots\): **Belief-updating** 1. At time \(t\in(nT,(n+1)T]\), \(q_{m,i,t}\) and \(r_{m,i,t}\) are updated, respectively, as \[q_{m,i,t}= Pr(u_{i,m}^{t}=1|i_{t-1}=H)\] \[= \pi_{1,m,t-1}D_{i,H}+\pi_{0,m,t-1}F_{i,H}\] (3) and \[r_{m,i,t}= Pr(u_{i,m}^{t}=1|i_{t-1}=B)\] \[= \pi_{1,m,t-1}D_{i,B}+\pi_{0,m,t-1}F_{i,B},\] (4) for \(i\in\mathcal{M}_{m}\), where \(D_{i,X}=Pr(u_{i,m}^{t}=1|\mathcal{H}_{1},i=X)=\int_{\tau_{i}}^{\infty}Pr(y_{i}^{t}|\mathcal{H}_{1},i=X)\mathrm{d}y_{i}^{t}\) and \(F_{i,X}=Pr(u_{i,m}^{t}=1|\mathcal{H}_{0},i=X)=\int_{\tau_{i}}^{\infty}Pr(y_{i}^{t}|\mathcal{H}_{0},i=X)\mathrm{d}y_{i}^{t}\) for \(X=H\) or \(B\). Based on (3) and (4), the belief that physical sensor \(i\) is honest is updated as \[w_{m,i,t}(u_{i,m}^{t}=1)= \frac{Pr(i_{t-1}=H|u_{i,m}^{t}=1)}{Pr(i_{t-1}=B|u_{i,m}^{t}=1)}\] \[= \frac{Pr(u_{i,m}^{t}=1|i_{t-1}=H)Pr(i_{t-1}=H)}{Pr(u_{i,m}^{t}=1|i_{t-1}=B)Pr(i_{t-1}=B)}\] \[= w_{m,i,t-1}\frac{q_{m,i,t-1}}{r_{m,i,t-1}}\] (5) given \(u_{i,m}^{t}=1\) and \[w_{m,i,t}(u_{i,m}^{t}=0)= \frac{Pr(i_{t-1}=H|u_{i,m}^{t}=0)}{Pr(i_{t-1}=B|u_{i,m}^{t}=0)}\] \[= w_{m,i,t-1}\frac{1-q_{m,i,t-1}}{1-r_{m,i,t-1}}\] (6) given \(u_{i,m}^{t}=0\) for \(i\in\mathcal{M}_{m}\), where \(Pr(i_{t-1}=B(\text{or }H))\) denotes the probability of sensor \(i\) being malicious (or honest) at time step \(t-1\). Note that \(Pr(i_{nT+1}=B)=\alpha\) and \(Pr(i_{nT+1}=H)=1-\alpha\) for \(n=0,1,2\ldots\). Hence, the initial belief is \(w_{m,i,nT+1}=(1-\alpha)/\alpha\) for \(i=1,\ldots,N\). Given \(u_{i,m}^{t}\), the belief that sensor \(i\) is honest at time \(t\) is \(w_{m,i,t}=\frac{Pr(i_{t}=H)}{Pr(i_{t}=B)}=w_{m,i,t}(u_{i,m}^{t}=1)^{u_{i,m}^{t}}w_{m,i,t}(u_{i,m}^{t}=0)^{1-u_{i,m}^{t}}\). 2. For physical sensor \(i\) at time \(t\), \(\delta_{m,i,t}(u_{i,m}^{t}=h)\) is given by \[\delta_{m,i,t}(u_{i,m}^{t}=1)= \frac{D_{i,B}+D_{i,H}w_{m,i,t-1}}{F_{i,B}+F_{i,H}w_{m,i,t-1}}\] (7) for \(h=1\) and \[\delta_{m,i,t}(u_{i,m}^{t}=0)= \frac{(1-D_{i,B})+(1-D_{i,H})w_{m,i,t-1}}{(1-F_{i,B})+(1-F_{i,H})w_{m,i,t-1}}\] (8) for \(h=0\). Hence, given \(u_{i,m}^{t}\), the LR at time \(t\) is \(\delta_{m,i,t}=\delta_{m,i,t}(u_{i,m}^{t}=0)^{1-u_{i,m}^{t}}\delta_{m,i,t}(u_{i,m}^{t}=1)^{u_{i,m}^{t}}\). 3. 
_Decision-making at human sensor \(m\) at time \(t=nT+1\)_: \[\lambda_{m,nT+1}=\frac{\pi_{1}}{\pi_{0}}\,\frac{\beta_{m}^{b_{m}^{nT+1}}(1-\beta_{m})^{1-b_{m}^{nT+1}}}{\gamma_{m}^{b_{m}^{nT+1}}(1-\gamma_{m})^{1-b_{m}^{nT+1}}}\;\overset{d_{m}^{nT+1}=1}{\underset{d_{m}^{nT+1}=0}{\gtrless}}\;\kappa^{\prime},\tag{9}\] where \(\beta_{m}=\int_{\xi_{m}}^{\infty}f(z_{m}^{t}|\mathcal{H}_{1})\mathrm{d}z_{m}^{t}\), \(\gamma_{m}=\int_{\xi_{m}}^{\infty}f(z_{m}^{t}|\mathcal{H}_{0})\mathrm{d}z_{m}^{t}\) are the probabilities of detection and false alarm for human agent \(m\). _Decision-making at human sensor \(m\) at time \(t\in[nT+2,(n+1)T]\)_: \[\lambda_{m,t}=\lambda_{m,t-1}\,\frac{\beta_{m}^{b_{m}^{t}}(1-\beta_{m})^{1-b_{m}^{t}}}{\gamma_{m}^{b_{m}^{t}}(1-\gamma_{m})^{1-b_{m}^{t}}}\prod_{i\in\mathcal{M}_{m}}\delta_{m,i,t}\;\overset{d_{m}^{t}=1}{\underset{d_{m}^{t}=0}{\gtrless}}\;\kappa^{\prime}.\tag{10}\] Hence, during \((nT,(n+1)T]\) the human sensor's LR accumulates both its own raw decisions \(b_{m}^{t}\) and the LR contributions \(\delta_{m,i,t}\) of the connected reference sensors. We further derive the conditions under which the quality of human sensors' decisions is improved by utilizing different operations to incorporate side information. The derived conditions are stated in Theorem II.1. **Theorem II.1**: _When the following conditions are satisfied, the side information could help individual human sensors make better decisions. For a specific human sensor \(m\in\{1,\ldots,M\}\), we have_ * _the quality of decisions is improved by utilizing the AND operation when_ \(\frac{\bar{\beta}}{\bar{\gamma}}<\frac{\pi_{0}(1-\gamma_{m,side})}{\pi_{1}(1-\beta_{m,side})}\)_._ * _the quality of decisions is improved by utilizing the OR operation when_ \(\frac{\pi_{0}(1-\bar{\gamma})}{\pi_{1}(1-\bar{\beta})}\leq\frac{\beta_{m,side}}{\gamma_{m,side}}\)_._ * _the OR operation performs better than the AND operation when_ \(\pi_{0}(1-2\bar{\gamma})\gamma_{m,side}-\pi_{1}(1-2\bar{\beta})\beta_{m,side}\leq\pi_{1}\bar{\beta}-\pi_{0}\bar{\gamma}\)_._ Proof: The above conditions can be derived by comparing the value of \(P_{e,m,side}^{AND}\) and \(P_{e}\), the value of \(P_{e,m,side}^{OR}\) and \(P_{e}\), and the value of \(P_{e,m,side}^{AND}\) and \(P_{e,m,side}^{OR}\). ## III Numerical Results Some numerical results are presented in this section. Assume \(y_{i}^{t}|\mathcal{H}_{h}\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\) and \(z_{m}^{t}|\mathcal{H}_{h}\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\) for \(h=0,1\), where \(\mu_{1}=4\), \(\mu_{0}=0\) and \(\sigma_{1}^{2}=\sigma_{0}^{2}=2\). The human thresholds are assumed to follow the Gaussian distribution with parameters \((\mu_{\tau},\sigma_{\tau})\), where \(\mu_{\tau}=2\) and \(\sigma_{\tau}=2\). We set \(N=60\), \(M=20\), \(T=10\), \(\Delta=0.03\), \(\kappa^{\prime}=1\), \(\kappa=M/2\), \(\eta=0.2\), \(\tau_{i}=2\) for \(i=1,\ldots,N\) and \(\pi_{1}^{nT+1}=\pi_{0}^{nT+1}=0.5\) for \(n=0,1,\ldots\). 
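Before turning to the results, the short sketch below illustrates the belief-updating steps (3)-(8) and the sequential LR recursion (9)-(10) for a single human sensor over one window \((nT,(n+1)T]\). The observation models are abstracted into fixed detection and false-alarm probabilities, the posterior reused in (3)-(4) is recomputed from the running LR (an assumption of this sketch), and all variable names are illustrative; the sketch does not include fusion or reputation updating at the FC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (cf. Sec. III); the exact values are not essential here
alpha, T = 0.5, 10           # assumed fraction of Byzantines and window length
D_H, F_H = 0.92, 0.08        # P_d, P_f of an honest physical sensor's report
D_B, F_B = 1 - D_H, 1 - F_H  # a Byzantine flips its decision, so its report has these stats
beta, gamma = 0.85, 0.15     # detection / false-alarm prob. of the human's raw decision b_m^t
pi1 = pi0 = 0.5              # prior probabilities of H1 and H0
kappa_p = 1.0                # human decision threshold kappa'
Mm = 3                       # number of reference physical sensors connected to human m

true_h = 1                             # the PoI is present throughout the window
byz = rng.random(Mm) < alpha           # hidden behavioral identities of the reference sensors
w = np.full(Mm, (1 - alpha) / alpha)   # initial beliefs w_{m,i,nT+1}
lam = pi1 / pi0                        # running likelihood ratio lambda_{m,t}
pi1_t, pi0_t = pi1, pi0                # current probabilities used in (3)-(4)

for t in range(T):
    # reports u_{i,m}^t: honest sensors forward v_i^t, Byzantines send 1 - v_i^t
    v = (rng.random(Mm) < (D_H if true_h else F_H)).astype(int)
    u = np.where(byz, 1 - v, v)

    # eqs. (3)-(4): probability of u = 1 for an honest / a Byzantine sensor
    q = pi1_t * D_H + pi0_t * F_H
    r = pi1_t * D_B + pi0_t * F_B

    # eqs. (7)-(8): LR contribution delta_{m,i,t} of each report (uses the previous belief)
    delta = np.where(u == 1,
                     (D_B + D_H * w) / (F_B + F_H * w),
                     ((1 - D_B) + (1 - D_H) * w) / ((1 - F_B) + (1 - F_H) * w))

    # eqs. (5)-(6): multiplicative belief update for each reference sensor
    w = np.where(u == 1, w * q / r, w * (1 - q) / (1 - r))

    # human's own raw decision b_m^t and the sequential LR update, eqs. (9)-(10)
    b = int(rng.random() < (beta if true_h else gamma))
    lam *= (beta**b * (1 - beta)**(1 - b)) / (gamma**b * (1 - gamma)**(1 - b)) * np.prod(delta)

    # posterior about the PoI, reused in (3)-(4) at the next step (assumption of this sketch)
    pi1_t = lam / (1 + lam)
    pi0_t = 1 - pi1_t

print("decision d_m =", int(lam >= kappa_p), " final beliefs w:", np.round(w, 2))
```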
Note that the term 'iterations' used in the following figures refers to iterations during the belief updating phase. In Table II, we show the comparison of the error probabilities of the systems that adopt CV (Chair-Varshney rule), MR (Majority rule), and MRH (Majority rule with human sensors) when \(\alpha\) is known. MRH only utilizes the decisions from human sensors, while MR and CV utilize both human and physical sensors. The MR system uses decisions from all the sensors (including physical and human sensors) by performing a simple majority vote to make a final decision. In contrast, our proposed scheme employs a hierarchical framework to construct the human-machine collaborative network. As seen in Table II, MR breaks down when most sensors participating in the decision-making process are malicious. However, our proposed scheme can still achieve comparable performance to the optimal CV rule. We can see in Fig. 2 that the fraction of humans making correct decisions increases significantly within a small number of iterations in our proposed scheme, which indicates a rapid improvement in the quality of humans' decisions. Although the proposed scheme requires the knowledge of \(\alpha\) when each human sensor updates the belief regarding the behavioral identity of the corresponding physical sensors, this knowledge is not necessarily needed to guarantee a good performance. Choosing an appropriate predefined \(\alpha\) can alleviate the performance degradation caused by the absence of knowledge of \(\alpha\). In Fig. 3 and Table III, we compare the performance of the different systems when \(\alpha\) is replaced with a predefined value \(\alpha_{e}\) in (5) and (6). Fig. 3 shows the fraction of the number of humans that make correct decisions given different values of \(\alpha_{e}\). We can observe that the system with \(\alpha=0.9\) performs worse when \(\alpha_{e}=0.3\) and the system with \(\alpha=0.1\) performs worse when \(\alpha_{e}=0.7\). However, \(\alpha_{e}=0.5\) works well for the system with any fraction of Byzantine nodes. This is because when \(\alpha_{e}\gg\alpha\) (or \(\alpha_{e}\ll\alpha\)), \(\alpha_{e}\) significantly overestimates (or underestimates) \(\alpha\), which results in performance degradation. Thus, \(\alpha_{e}=0.5\) is a good choice when we do not know \(\alpha\). Table III shows that the system that adopts the proposed scheme can outperform the systems that adopt CV, MR, and MRH when \(\alpha\) is unknown. Although there is a performance degradation compared to the systems that are aware of \(\alpha\), i.e., the performance shown in Fig. 2 and Table II, the performance degradation is negligible for the proposed scheme. Thus, whether we know the actual \(\alpha\) or not, the proposed scheme can always achieve a good performance. In Fig. 4, we show that the proposed scheme performs well in identifying Byzantine nodes in both cases, i.e., the system is aware/unaware of \(\alpha\). In Fig. 5, we observe that for given values of \(\gamma_{side}\), the error probability of a human sensor decreases as \(\beta_{side}\) increases for both OR and AND operations. Given certain parameters \((\beta_{side},\gamma_{side})\), we can make a better choice among OR operation, AND operation, and no operation (i.e., no side information is utilized). For example, no operation is a better choice given \(\beta_{side}\leq 0.81\) and \(\gamma_{side}=0.1\) and AND operation is a better choice given \(\beta_{side}\geq 0.9\) and \(\gamma_{side}=0.3\). Our results shown in Fig. 
5 are also consistent with Theorem II.1 obtained earlier. ## IV Conclusion In this paper, we have proposed a belief-updating scheme in a human-machine hierarchical network. The local decisions from physical sensors served as reference decisions to improve the quality of human sensor decisions. At the same time, the belief that each physical sensor is malicious was updated during the decision-making process. The impact of side information available to an individual human sensor was analyzed, and the different operations used to incorporate this side information were compared. Simulation results showed that the quality of human sensors' decisions could be improved by employing the proposed scheme even when most physical sensors in the system are malicious. Moreover, our proposed scheme did not require knowledge of the actual fraction of malicious physical sensors to guarantee good performance. Hence, the proposed scheme can successfully defend against Byzantine attacks and improve the quality of human sensors' decisions.
2307.12354
Phase-field modeling and effective simulation of non-isothermal reactive transport
We consider single-phase flow with solute transport where ions in the fluid can precipitate and form a mineral, and where the mineral can dissolve and release solute into the fluid. Such a setting includes an evolving interface between fluid and mineral. We approximate the evolving interface with a diffuse interface, which is modeled with an Allen-Cahn equation. We also include effects from temperature such that the reaction rate can depend on temperature, and allow heat conduction through fluid and mineral. As Allen-Cahn is generally not conservative due to curvature-driven motion, we include a reformulation that is conservative. This reformulation includes a non-local term which makes the use of standard Newton iterations for solving the resulting non-linear system of equations very slow. We instead apply L-scheme iterations, which can be proven to converge for any starting guess, although giving only linear convergence. The three coupled equations for diffuse interface, solute transport and heat transport are solved via an iterative coupling scheme. This allows the three equations to be solved more efficiently compared to a monolithic scheme, and only few iterations are needed for high accuracy. Through numerical experiments we highlight the usefulness and efficiency of the suggested numerical scheme and the applicability of the resulting model.
Carina Bringedal, Alexander Jaust
2023-07-23T15:31:51Z
http://arxiv.org/abs/2307.12354v1
# Phase-field modeling and effective simulation of non-isothermal reactive transport ###### Abstract We consider single-phase flow with solute transport where ions in the fluid can precipitate and form a mineral, and where the mineral can dissolve and release solute into the fluid. Such a setting includes an evolving interface between fluid and mineral. We approximate the evolving interface with a diffuse interface, which is modeled with an Allen-Cahn equation. We also include effects from temperature such that the reaction rate can depend on temperature, and allow heat conduction through fluid and mineral. As Allen-Cahn is generally not conservative due to curvature-driven motion, we include a reformulation that is conservative. This reformulation includes a non-local term which makes the use of standard Newton iterations for solving the resulting non-linear system of equations very slow. We instead apply L-scheme iterations, which can be proven to converge for any starting guess, although giving only linear convergence. The three coupled equations for diffuse interface, solute transport and heat transport are solved via an iterative coupling scheme. This allows the three equations to be solved more efficiently compared to a monolithic scheme, and only few iterations are needed for high accuracy. Through numerical experiments we highlight the usefulness and efficiency of the suggested numerical scheme and the applicability of the resulting model. ## 1 Introduction Reactive transport is relevant for many environmental and technical applications. We are here especially interested in reactive transport with heterogeneous reactions, where solutes can leave a liquid phase and become part of a solid phase, or the other way around. This is a typical feature of mineral precipitation and dissolution [39], which is common in groundwater remediation, soil salinization, oil production, metallurgy and geothermal energy production, to mention a few. When temperature plays a role, which is in particular the case for geothermal energy production, the temperature-dependent solubilities of minerals and the temperature dependence of the reaction rates lead to a close interplay between varying temperature and solute concentrations [14]. For many of these applications, experiments are unfeasible, which rather turns the attention to having reliable models and numerical simulations to investigate these processes. However, the close coupling between transport of solutes, heat and chemical reactions can be challenging to model and simulate [44]. At the interface between fluid and mineral, solutes in the fluid can leave the liquid phase and become part of the solid mineral phase. The transition of solutes from fluid to mineral corresponds to an evolving fluid-mineral interface and hence a free boundary of the model. Hence, determining where the fluid is and where the mineral is poses additional challenges for the modeling of such reactive transport processes. There are several modeling approaches to describe these reactive transport processes. One possibility is to use sharp-interface models, which involves tracking the interface between the fluid and mineral, and hence the fluid and mineral domains themselves, while at the same time solving the necessary conservation equations for mass and energy in the respective, time-evolving domains. 
The two domains are coupled to each other through boundary conditions on the evolving fluid-mineral interface, where these boundary conditions account for the exchange of mass and energy between the domains. To model the evolution of the interface one can in the general case use a level-set equation [30]; for applications of level-set models with reactive transport see e.g. [5, 34, 26]. For simpler geometries, e.g. layering or circular shapes, a simple distance or radius function can be used [7, 41, 42]. Although correct and accurate, these sharp-interface approaches lead to singularities in the formulation in case of merging or break-up of two minerals, and can be numerically challenging due to the need to track the interface and divide the computational domains accordingly. Also, the interface itself appears as a shock in the domain, with discontinuities across it, leading to further numerical difficulties. We instead apply a diffuse-interface approach through a phase-field model. In a diffuse-interface model [2], the interface between fluid and mineral is modeled as a diffuse transition zone. Instead of having a sharp interface with boundary conditions and different model equations and material properties at each side of the evolving interface, the model is rewritten to use the same model equations on the full domain, where a diffuse transition region represents the processes at the evolving interface. This diffuse transition region hence represents the fluid-mineral interface and should account for the exchange of mass and energy occurring here, but is modeled as smooth. Hence, a diffuse-interface model does not involve any shocks or discontinuities as the transition between fluid and mineral is smoothed out over some region of non-zero width. The evolution of the fluid-mineral interface is modeled by an evolution of the location of the diffuse transition region, through a phase-field equation. However, note that the introduction of the diffuse transition region is a mathematical simplification and does not represent physics on the molecular level (as the real interface width would be at a much smaller scale [21]). Hence, the diffuse-interface model should be seen as an approximation to the real sharp-interface physics, introduced for convenient mathematical and numerical treatment. If the diffuse-interface model correctly approximates the sharp-interface physics, that is, if it recovers the sharp-interface model as the diffuse interface width approaches zero, we consider the diffuse-interface model a valid approximation. This can be shown analytically by matched asymptotic expansions [10], or compared numerically for test cases [22, 45, 8]. The diffuse-interface model offers the advantages of avoiding singularities in case of merging and break-up, and is generally easier to discretize and solve numerically as it is formulated on a fixed domain without any shocks and discontinuities due to the smooth, diffuse nature. However, these advantages come at the cost of applying a model that is an approximation of the sharp-interface physics, which can lead to unwanted evolution of the diffuse transition region. There are two main approaches for phase-field models describing evolving interfaces: Allen-Cahn [1] and Cahn-Hilliard [11]. Whereas the Allen-Cahn model is represented by a second-order non-linear partial differential equation (PDE), the Cahn-Hilliard model is a fourth-order non-linear PDE. 
Due to being second-order, the Allen-Cahn equation can be numerically discretized with most standard discretization schemes, and fulfills a maximum principle. However, the Allen-Cahn model includes curvature-driven motion of the interface [1], which in its original form is not conservative. That is, an Allen-Cahn phase field would evolve in a manner that corresponds to loss or gain of mineral that cannot be connected to mineral precipitation or dissolution reactions. On the other hand, the Cahn-Hilliard equation needs special care due to being fourth order and can have overshoot/undershoot of values, which need to be handled to stay within the physical bounds [17]. The Cahn-Hilliard model includes Mullins-Sekerka motion [40], which corresponds to a redistribution of the mineral to minimize surface area. However, this redistribution is conservative, which means that any net loss or gain of mineral is always connected to a mineral precipitation or dissolution reaction. Both approaches have been successfully used to model mineral precipitation and dissolution, coupled with fluid flow and solute transport [8, 36]. We here apply the Allen-Cahn equation due to its beneficial numerical properties and the availability of a maximum principle, and rather explore a reformulation of it to make the curvature-driven motion conservative. We build on the Allen-Cahn model found in [8]. Rubinstein and Sternberg [37] suggested adding a non-local term to overcome the issue of non-conservative motion. Then, the curvature-driven motion instead redistributes the mineral such that the interface is driven towards constant mean curvature [13]. This reformulation has been successfully applied in several contexts [13, 6, 31, 29]. Although conservative, the non-local term introduces numerical challenges in terms of solving the resulting non-linear system of equations if implicit time stepping is used. If applying Newton's method, the non-local term causes the Jacobian matrix to be full, which makes the Newton iterations expensive [6]. To avoid expensive Newton iterations caused by the non-local term of the conservative Allen-Cahn equation, we instead apply L-scheme iterations to solve the resulting non-linear system of equations [33]. Using L-scheme iterations relies on having a monotonically decreasing (or increasing) non-linearity, which is generally not the case for the conservative Allen-Cahn equation. Therefore we split the non-linearity into a decreasing and an increasing part, which are treated implicitly and explicitly, respectively. By adding a stabilization parameter, we can guarantee that the L-scheme iterations form a contraction and always converge [24]. The L-scheme iterations are found to converge when the stabilization parameter is large enough, and converge independently of the time step size and initial guess. L-scheme iterations however only offer linear convergence, and can require many iterations depending on the choice of stabilization parameter and time-step size. For other equations, approaches using modified L-scheme iterations or switching to Newton's method once a good starting point is found have been suggested [3, 28, 27]. However, it is important to emphasize that for L-scheme iterations, every iteration step is much cheaper compared to Newton iterations. To discretize the model equations, we apply the Finite Volume method [15]. The Finite Volume method is discretely conservative and the conservative Allen-Cahn equation remains discretely conservative when a suitable splitting of the non-linearity into implicit and explicit parts is made [6]. 
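As an illustration of the iteration idea just described, the following minimal sketch performs one implicit Euler step of a one-dimensional Allen-Cahn-type equation with L-scheme iterations, lagging the non-linearity and stabilizing with a constant \(L\). The grid, the parameter values, and the simple lagging of the whole non-linearity (rather than the precise decreasing/increasing splitting used later in the paper) are assumptions made only for brevity.

```python
import numpy as np

# One implicit Euler step of  d_t phi = gamma*phi_xx - (gamma/lambda^2)*P'(phi)  in 1D,
# solved with L-scheme iterations: the non-linearity is lagged and stabilized by L, so
# every iteration only requires one linear solve with a fixed matrix (no full Jacobian).

n, lam, gamma = 200, 0.05, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
dt, L = 1.0e-4, 3200.0                 # time-step size and stabilization parameter

Pp = lambda p: 16 * p * (1 - p) * (1 - 2 * p)   # P'(phi) for P(phi) = 8 phi^2 (1-phi)^2

# discrete Laplacian with homogeneous Neumann boundaries
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = -1.0
A /= dx**2

phi_old = 0.5 * (1 + np.tanh((0.5 - x) / lam))  # diffuse profile, fluid (phi = 1) on the left
phi = phi_old.copy()                            # starting guess for the iterations

M = (1.0 / dt + L) * np.eye(n) - gamma * A      # fixed system matrix for the whole time step
for j in range(500):
    # the non-local term of the conservative variant could also be lagged, e.g. by adding
    # +(gamma/lam**2)*np.mean(Pp(phi)) to rhs, without changing the matrix M
    rhs = phi_old / dt + L * phi - (gamma / lam**2) * Pp(phi)
    phi_new = np.linalg.solve(M, rhs)
    increment = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if increment < 1e-10:
        break

print(f"L-scheme converged in {j + 1} iterations; phi in [{phi.min():.3f}, {phi.max():.3f}]")
```

Because the matrix stays fixed within the time step, each iteration is cheap; the price is the linear (rather than quadratic) convergence mentioned above.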
To be able to model non-isothermal reactive transport, the Allen-Cahn equation must be coupled to conservation equations for mass and energy. As discussed above, these conservation equations are in this case defined on fixed domains and hence describe the conservation of mass and energy in the combined domain of both fluid and mineral. Hence, the conservation equations are modified to incorporate the diffuse transition that represents the mass and energy transfer between fluid and mineral. For this, we build on the isothermal model in [8] and extend it to non-isothermal conditions. We show how the conservation property of these equations is modified by coupling to the (conservative) Allen-Cahn equation. To solve the coupled system of the conservative Allen-Cahn equation and the conservation equations for mass and energy, we apply an iterative strategy where the different equations are solved sequentially at each time step, iterating until convergence is achieved. This is done only for a simplified case without fluid flow. The advantage of using an iterative strategy is that each equation can be discretized according to its needs. In particular, as the conservation equations for mass and energy are linear (when the phase field is known), simpler strategies can be used to solve them. By introducing an additional stabilization parameter for the coupling, we can prove the convergence of the scheme with a restriction on the time-step size. The strategy and convergence proof build on the ideas in [3, 9]. Numerical simulations show that the resulting scheme performs well, requiring generally few coupling iterations to reach convergence. For larger time-step sizes, the coupling iterations still converge, but the efficiency of the scheme is limited by the L-scheme iterations then requiring many iterations to reach convergence. The goal of this paper is hence to formulate a conservative phase-field model including an evolving fluid-mineral interface that can model fluid flow, solute transport and heat transport. Given the numerical challenges coming from such a highly non-linear and strongly coupled model, we aim to formulate an efficient numerical scheme based on L-scheme iterations for the non-linearities and coupling iterations for the coupled system of equations. To reach our goals, the remainder of this paper is structured as follows. In Section 2 we consider the sharp-interface model that forms the basis of the physical processes that we want to model. The corresponding diffuse-interface model is formulated and analyzed in terms of conservation in Section 3. In Section 4 we show that the sharp-interface limit of the diffuse-interface model is indeed the model from Section 2, with the additional curvature-driven motion. In Section 5 we formulate the numerical strategy and show the discrete conservation as well as prove convergence of the two iterative schemes. Numerical behavior of the Allen-Cahn equation and the coupled phase-field model is investigated in Section 6. Discussion and conclusion can be found in Section 7. ## 2 Sharp-interface model We here present the sharp-interface model for fluid flow, solute conservation and energy conservation in the fluid and mineral. The model is similar to the model considered in [5, 42]. Two domains are included: the fluid domain, where fluid flow and the dissolved solute are found, and the mineral domain, where the solute appears as a stationary mineral. 
Both domains evolve in time as mineral dissolves and becomes solute in the fluid, and as solute from the fluid precipitates and becomes part of the mineral. We will denote the fluid and mineral domains \(\Omega_{f}(t)\) and \(\Omega_{m}(t)\). The two domains are separated by the interface \(\Gamma(t)\). The considered chemical reaction can be written as \[n_{1}C_{1}+n_{2}C_{2}\rightleftharpoons C;\] that is, \(n_{1}\) ions of type \(C_{1}\) go together with \(n_{2}\) ions of type \(C_{2}\) to form one mineral molecule of type \(C\). The reaction can go both ways, depending on the ion concentrations relative to the mineral's solubility. ### Fluid domain In the fluid domain, we consider a fluid having density \(\rho_{f}\) and flowing with velocity \(\mathbf{v}\). Dissolved in the fluid, two solutes having concentration \(c_{1}\) and \(c_{2}\) (corresponding to the two ion types \(C_{1}\) and \(C_{2}\), respectively) are transported. Temperature variations are accounted for, and the temperature of the fluid is denoted \(T_{f}\). Since we will work with conservation of both fluid and solutes, it is convenient to also consider the molar concentration of the fluid \(m_{f}\). Note however that \(m_{f}\) denotes the molar concentration of the whole fluid phase; hence it accounts for both solvent (water) and solutes. The molar concentration and the density of the fluid are connected through \(\rho_{f}=m_{f}M_{f}\), where \(M_{f}\) is the (average) molar mass of the fluid. We assume \(M_{f}\) as well as \(\rho_{f}\) to be constant, which is a valid approximation when the variability of \(c_{1}\) and \(c_{2}\) is not too large compared to \(m_{f}\), or when the solvent and solutes have comparable molar masses. This means that we can still model the fluid with constant density. The model equations for the considered processes in the fluid domain \(\Omega_{f}(t)\) are: \[\nabla\cdot\mathbf{v}=0 \tag{1a}\] \[\partial_{t}(\rho_{f}\mathbf{v})+\nabla\cdot(\rho_{f}\mathbf{v}\otimes\mathbf{v})=-\nabla p+\nabla\cdot\boldsymbol{\sigma}+\rho_{f}\mathbf{g}\] (1b) \[\partial_{t}c_{i}+\nabla\cdot(c_{i}\mathbf{v})=D_{i}\nabla^{2}c_{i}\quad i=1,2\] (1c) \[\partial_{t}(c_{p,f}\rho_{f}T_{f})+\nabla\cdot(c_{p,f}\rho_{f}T_{f}\mathbf{v})=k_{f}\nabla^{2}T_{f} \tag{1d}\] Here, since the fluid is incompressible, the stress tensor is \(\boldsymbol{\sigma}=\mu(T_{f})\nabla\mathbf{v}\), where the viscosity \(\mu\) is allowed to vary with fluid temperature. We account for a body force \(\rho_{f}\mathbf{g}\), e.g. gravity. Further, \(D_{i}\) is the diffusivity of the solutes and we will assume \(D_{1}=D_{2}=D\) and that it is constant. The heat capacity \(c_{p,f}\) and heat conductivity \(k_{f}\) are also assumed constant. Note that the first equation (1a) is the mass conservation equation for the incompressible fluid and is originally \(\partial_{t}m_{f}+\nabla\cdot(m_{f}\mathbf{v})=0\), but is simplified to (1a) as \(m_{f}\) is constant. The second equation (1b) is the conservation of linear momentum, while the third equation (1c) is the mass conservation equation of the solute. Finally, (1d) expresses the conservation of energy inside the fluid. Note that we use a rather simple energy equation by letting temperature be the primary variable, but this is a valid assumption as long as we consider low fluid velocities and no evaporation of the fluid [25]. ### Mineral domain The mineral domain consists of a stationary mineral having molar concentration \(m_{m}\) and mass density \(\rho_{m}\). 
These two are connected through \(\rho_{m}=m_{m}M_{m}\), and are all considered constant. The mineral is a solid, hence flow cannot take place. The mineral can however conduct heat, hence an energy conservation equation is considered in the mineral domain \(\Omega_{m}(t)\): \[\mathbf{v} =\mathbf{0} \tag{2a}\] \[\partial_{t}(c_{p,m}\rho_{m}T_{m}) =k_{m}\nabla^{2}T_{m}, \tag{2b}\] where \(T_{m}\) is the temperature in the mineral, and \(c_{p,m}\) and \(k_{m}\) are the (constant) heat capacity and heat conductivity of the mineral, respectively. Here, (2a) expresses conservation of mass in the mineral, while (2b) represents conservation of energy in the mineral. ### Boundary conditions between the two domains The interface \(\Gamma(t)\) between the fluid and mineral domains is evolving due to the mineral precipitation and dissolution reaction. When minerals dissolve into ions, the ions become part of the fluid domain as solute. Hence, there is a transfer of ions across the interface, which also corresponds to a mass transfer across the interface. The interface obtains a new location due to the lost minerals. The opposite takes place due to precipitation. To ensure conservation of mass, ions and energy across the interface, we apply Rankine-Hugoniot jump conditions [16], which balances the jump in the flux with the jump of the conserved quantity across an evolving interface. For momentum we assume a no-slip condition; that is, that the tangential component of the fluid velocity is zero at the interface. The normal component of the fluid velocity will in general not be zero, but be proportional to the normal velocity of the evolving interface. Hence, on \(\Gamma(t)\) we have: \[-m_{f}\mathbf{v}\cdot\mathbf{n} =v_{n}((n_{1}+n_{2})m_{m}-m_{f}), \tag{3a}\] \[\mathbf{v} =v_{n}\frac{m_{f}-(n_{1}+n_{2})m_{m}}{m_{f}}\mathbf{n},\] (3b) \[(-c_{i}\mathbf{v}+D\nabla c_{i})\cdot\mathbf{n} =v_{n}(n_{i}m_{m}-c_{i})\quad i=1,2,\] (3c) \[(-k_{m}\nabla T_{m}-c_{p,f}\rho_{f}T_{f}\mathbf{v}+k_{f}\nabla T _{f})\cdot\mathbf{n} =v_{n}(c_{p,m}\rho_{m}T_{m}-c_{p,f}\rho_{f}T_{f}). \tag{3d}\] The normal vector \(\mathbf{n}\) is assumed to point into the mineral, and the normal velocity \(v_{n}\) is positive when \(\Gamma(t)\) moves towards the mineral domain, which corresponds to dissolution taking place. The top boundary condition (3a) considers conservation of mass, from a molar perspective. It expresses that one mineral molecule from the mineral domain dissolves into \(n_{1}+n_{2}\) ions in the fluid domain. This jump in mass is balanced by the jump in the mass flux, which comprises of the advective term from the mass conservation equation. The second boundary condition (3b) is a reformulation of the no-slip condition and states that the normal component of the fluid velocity from (3a) is the only component. The third boundary condition (3c) ensures conservation of the two solutes: As one mineral molecule forms \(n_{i}\) ion molecules, the factor \(n_{i}\) appears. The jump in solute is balanced by the jump in the solute flux. As the mineral is stationary, only the contribution from the fluid domain is needed. These boundary conditions (3a)-(3c) are the same as in [42]. Finally, the last boundary condition (3d) accounts for the energy conservation across the evolving interface. In this equation the jump in energy is balanced by the jump in the energy flux, which has contributions from both mineral and fluid. This boundary condition is also found in [7]. 
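For orientation, the conditions (3a)-(3d) are all instances of the same generic Rankine-Hugoniot relation; written out with the sign convention above (\(\mathbf{n}\) pointing into the mineral), a sketch of this common form reads \[(\mathbf{F}_{m}-\mathbf{F}_{f})\cdot\mathbf{n}=v_{n}\,(q_{m}-q_{f}),\] where \(q_{f},\mathbf{F}_{f}\) and \(q_{m},\mathbf{F}_{m}\) denote the conserved density and its flux on the fluid and mineral side of the interface, respectively. For example, inserting \(q_{f}=m_{f}\), \(\mathbf{F}_{f}=m_{f}\mathbf{v}\) and \(q_{m}=(n_{1}+n_{2})m_{m}\), \(\mathbf{F}_{m}=\mathbf{0}\) recovers (3a), while \(q_{f}=c_{i}\), \(\mathbf{F}_{f}=c_{i}\mathbf{v}-D\nabla c_{i}\) and \(q_{m}=n_{i}m_{m}\), \(\mathbf{F}_{m}=\mathbf{0}\) recovers (3c).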
#### 2.3.1 Reaction rates for mineral precipitation and dissolution For the mineral precipitation/dissolution reaction, we formulate a reaction rate. Following [23, 42] and including temperature dependence as in [5] we get \[m_{m}v_{n}=\left(-f_{p}(T_{f},c_{1},c_{2})+f_{d}(T_{f},c_{1},c_{2},\mathbf{x})\right),\quad\mathbf{x}\in\Gamma(t), \tag{4}\] where \(f_{p}\) and \(f_{d}\) are the precipitation and dissolution rates, respectively. The rates are [12, 23] \[f_{p}(T_{f},c_{1},c_{2})=k_{0}e^{-\frac{E}{RT_{f}}}\frac{c_{1}^{n_{1}}c_{2}^{n_{2}}}{K_{m}(T_{f})}\text{ and } \tag{5a}\] \[f_{d}(T_{f},c_{1},c_{2},\mathbf{x})=k_{0}e^{-\frac{E}{RT_{f}}}w(\text{dist}(\mathbf{x},\mathcal{G}),T_{f},c_{1},c_{2}) \tag{5b}\] where \(k_{0}\) is a positive rate constant, \(E\) is the activation energy, \(R\) the gas constant and \(K_{m}(T_{f})\) is the solubility product for the mineral. As dissolution can only take place when there is still mineral present, we introduce a distance measure that measures the distance from \(\Gamma(t)\) to the closest non-reactive solid or an external boundary, in (5b) denoted \(\mathcal{G}\). When there is no mineral present at a specific location, this distance will be zero. To incorporate zero dissolution rate when there is no mineral present, one can use a modified Heaviside graph [23] \[w(d,T_{f},c_{1},c_{2})=\left\{\begin{array}{ll}0&\text{if }d<0,\\ \min\{\frac{c_{1}^{n_{1}}c_{2}^{n_{2}}}{K_{m}(T_{f})},1\}&\text{if }d=0,\\ 1&\text{if }d>0.\end{array}\right.\] The temperature can both affect the solubility of the mineral and also the general speed of the reaction rates. #### 2.3.2 Volume changes due to mineral precipitation and dissolution The boundary condition ensuring conservation of mass (3a) and the modified no-slip condition (3b) also express any volume expansion occurring due to the chemical reaction. Letting \(K_{m}=\frac{m_{f}-(n_{1}+n_{2})m_{m}}{m_{f}}\), the boundary condition (3b) on \(\Gamma(t)\) reads \[\mathbf{v}=K_{m}v_{n}\mathbf{n}.\] Hence, the fluid near the evolving interface moves proportionally to the normal velocity of the evolving interface, scaled by the factor \(K_{m}\). This is caused by the fluid having to move due to expansions/contractions caused by the chemical reaction. If \(K_{m}=0\), then \(m_{f}=(n_{1}+n_{2})m_{m}\), which corresponds to the volume taken up by one mineral molecule being the same as the volume taken up by \(n_{1}+n_{2}\) solute molecules. In this case, we have the more commonly used condition \(\mathbf{v}=\mathbf{0}\) on the interface. As the resulting velocity at the interface is generally small compared to flow velocities, it is usually a good approximation to let \(K_{m}=0\) even if it is non-zero [26]. If \(K_{m}>0\), the fluid density is larger, which corresponds to the mineral phase taking up a larger volume than the fluid. This means, for example in the case of dissolution and \(v_{n}>0\), that the fluid has to move towards the mineral to account for the volume change. Hence the fluid flows towards the mineral with the velocity \(\mathbf{v}=K_{m}v_{n}\mathbf{n}\), which is positive in the direction of the normal vector pointing into the mineral. Conversely, if \(K_{m}<0\), the fluid density is lower, and hence the mineral phase takes up less volume than the fluid. In the case of dissolution and \(v_{n}>0\), the fluid has to move away from the mineral as it is pushed away by the released solute needing more room. 
The fluid flows away from the mineral with the velocity \(\mathbf{v}=K_{m}v_{n}\mathbf{n}\), which is now negative in the direction of the normal vector. Correspondingly, the opposite process occurs when \(v_{n}<0\) as precipitation takes place. The volume changes coming from \(K_{m}\neq 0\) have to be accounted for when choosing external boundary conditions for the flow, as e.g. a closed container would lead to inconsistencies if there is overall net precipitation/dissolution inside the container. Instead the fluid must be allowed to escape/enter the domain in accordance with the mineral precipitation/dissolution. Hence, the necessary condition is \[\int_{\partial\Omega_{f}(t)}m_{f}\mathbf{v}\cdot\mathbf{n}ds=-\int_{\Omega_{f}(t)}\partial_{t}m_{f}dV-\int_{\Gamma(t)}m_{f}K_{m}v_{n}ds, \tag{6}\] where the integral on the left-hand side is over the external boundary of the fluid domain (that is, not facing a mineral), while the second integral on the right-hand side is over all fluid-mineral interfaces found inside the domain. Since we have constant density (\(\partial_{t}m_{f}=0\)), the inflow/outflow across the exterior boundary has to match the volume changes at the mineral boundary. If there is no expansion due to the reactions (\(K_{m}=0\)), (6) reduces to a standard boundary condition for incompressible flow. ### Domain evolution We mention for completeness how to model the evolving interface \(\Gamma(t)\) in the case of a sharp-interface model. The location of the evolving interface \(\Gamma(t)\), and hence also the evolution of the two domains, can be determined through a level-set approach. Letting \(S:[0,\infty)\times\Omega\to\mathbb{R}\), where \(\Omega=\Omega_{f}(t)\cup\Omega_{m}(t)\cup\Gamma(t)\), the level-set fulfills \[S(t,\mathbf{x})=\left\{\begin{array}{ll}<0&\mbox{ if }\mathbf{x}\in\Omega_{f}(t),\\ 0&\mbox{ if }\mathbf{x}\in\Gamma(t),\\ >0&\mbox{ if }\mathbf{x}\in\Omega_{m}(t).\end{array}\right.\] The level-set evolves following the level-set equation \[\partial_{t}S+v_{n}|\nabla S|=0\quad\mbox{ for }\mathbf{x}\in\Omega.\] Note that the level-set equation requires the normal velocity of the evolving interface to be defined in the entire domain \(\Omega\). This can be solved by, for each point \(\mathbf{x}\in\Omega\), picking the corresponding value of \(v_{n}\) from the point of \(\Gamma(t)\) that is closest to \(\mathbf{x}\). **Remark 1**: _We will, without loss of generality, in the remainder of this paper only consider a simplified chemical setting, namely the case where \(C_{1}+C_{2}\leftrightarrow C\); hence, one molecule of each type of solute is needed to form one mineral molecule. We will also assume that initial and boundary conditions for the two solutes are the same. In this case, the two solutes will always have the same concentration, which we will denote \(c\). Hence only one equation (1c) is needed for the solute, and only one boundary condition (3c). The reaction rates (5) are modified such that \(c_{1}=c_{2}\) and \(n_{1}=n_{2}=1\)._ ## 3 Diffuse-interface models Instead of dealing with a sharp interface, we now consider the transition between fluid and mineral to be a smooth transition. In this case there are no discontinuities between the two domains and all state variables will vary smoothly between the domains. To describe this setting we use a phase field \(\phi(t,\mathbf{x})\). 
We will consider a phase field approaching the value \(1\) in the fluid domain and \(0\) in the mineral domain, and with a smooth transition of width \(O(\lambda)\) between the domains. Hence, the interface between the fluid and mineral can no longer be said to be at an exact location, although it is common to let the value \(\phi=0.5\) represent the interface location for practical purposes, which we will also apply in Section 4. The parameter \(\lambda\) quantifies the width of the diffuse interface, and the sharp-interface model is expected to be recovered as \(\lambda\to 0\), which we will investigate in Section 4. ### Phase-field equations An Allen-Cahn equation [1] is adopted for the phase field. The standard Allen-Cahn equation is not conservative and contains curvature effects that may be non-physical. In this section we will consider a standard Allen-Cahn equation as well as a reformulation to ensure conservation. The standard Allen-Cahn equation for mineral precipitation and dissolution is \[\partial_{t}\phi=-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)+\gamma\nabla^{2}\phi-f_{\rm diff}(\phi,T_{f},c), \tag{7}\] Here, \(P(\phi)=8\phi^{2}(1-\phi)^{2}\) is the double-well potential, ensuring that the phase field attains values close to \(0\) and \(1\) for small \(\lambda\). The parameter \(\gamma\) is an interface diffusion parameter and its role is explained in detail in Section 4. The reaction rate \(f_{\rm diff}\) is related to the reaction rate for the sharp-interface model (4), but is reformulated to incorporate the diffuse interface. We let \[f_{\rm diff}(\phi,T,c)=\frac{4}{\lambda}\phi(1-\phi)\frac{1}{m_{m}}\big(f_{p}(T,c)-f_{d}(T,c)\big).\] The factor \(\phi(1-\phi)\) ensures that the reaction is only non-zero in the diffuse transition zone where \(\phi\) is neither \(0\) nor \(1\). The fraction \(4/\lambda\) ensures correct scaling as will be shown in Section 4. For this reaction rate the Heaviside graph is not needed as \(\phi\equiv 1\) if all mineral is dissolved and the resulting reaction rate is hence \(0\) in this case. Hence, the precipitation \(f_{p}\) and dissolution \(f_{d}\) rates will be the same as in (5), but with \(w\equiv 1\), making the distance measure superfluous. For convenience we will denote the net precipitation rate by \(f(T,c)=f_{p}(T,c)-f_{d}(T,c)\) in the following. As we will see in Section 3.3, the Allen-Cahn equation does not conserve the phase-field variable. Inspired by [13, 37] we consider the following reformulation of the Allen-Cahn equation to overcome this: \[\partial_{t}\phi=-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)+\gamma\nabla^{2}\phi-f_{\rm diff}(\phi,T,c)+\frac{\gamma}{\lambda^{2}}\frac{1}{|\Omega|}\int_{\Omega}P^{\prime}(\phi)dV. \tag{8}\] where \(|\Omega|=\int_{\Omega}1dV\) is the measure of the domain \(\Omega\). The reformulation (8) is conservative, which will be shown in Section 3.3. We denote (7) as the _original_ Allen-Cahn equation, and the reformulated (8) as the _conservative_ Allen-Cahn equation. The conservative reformulation (8) was recently also considered in [6]. ### Modified transport equations We now combine the conservation and transport equations from the fluid (1), mineral (2) and their separating boundary (3) to formulate a one-field model by incorporating the phase field. 
Letting \(\phi\) be modeled by an Allen-Cahn equation, such that it approaches the value \(1\) in the fluid and \(0\) in the mineral, the following model equations describe the fluid flow and ensure conservation of mass, ions and energy in the combined fluid-mineral domain \(\Omega\): \[\partial_{t}m_{\phi}+\nabla\cdot(m_{f}\phi{\bf v})=0 \tag{9a}\] \[\partial_{t}(\rho_{f}\phi{\bf v})+\nabla\cdot(\rho_{f}\phi{\bf v}\otimes{\bf v})=-\phi\nabla p+\nabla\cdot\boldsymbol{\sigma}_{\phi}-g(\phi,\lambda){\bf v}+\frac{1}{2}\rho_{f}{\bf v}\partial_{t}\phi+\rho_{f}{\bf g}\] (9b) \[\partial_{t}(\phi c+(1-\phi)m_{m})+\nabla\cdot(c\phi{\bf v})=D\nabla\cdot(\phi\nabla c)\] (9c) \[\partial_{t}((c_{p}\rho)_{\phi}T)+\nabla\cdot(c_{p,f}\rho_{f}T\phi{\bf v})=\nabla\cdot(k_{\phi}\nabla T) \tag{9d}\] Here, the newly introduced parameters \(m_{\phi}\), \((c_{p}\rho)_{\phi}\) and \(k_{\phi}\) are given by \[m_{\phi}=\phi m_{f}+(1-\phi)2m_{m},\quad(c_{p}\rho)_{\phi}=\phi c_{p,f}\rho_{f}+(1-\phi)c_{p,m}\rho_{m},\quad k_{\phi}=\phi k_{f}+(1-\phi)k_{m}.\] This way they attain the expected value in the fluid when \(\phi=1\) and in the mineral when \(\phi=0\). The factor two in front of \(m_{m}\) is selected to ensure conservation of mass as \(n_{1}=n_{2}=1\). The temperature \(T\) describes the temperature of the entire system and hence replaces \(T_{f}\) and \(T_{m}\). The modified stress tensor is \(\boldsymbol{\sigma}_{\phi}=\nu(\nabla\cdot(\phi{\bf v})){\bf I}+\mu(\nabla(\phi{\bf v})+(\nabla(\phi{\bf v}))^{T})\). Hence, as the mixture of fluid and mineral is quasi-compressible due to the density differences, we here have a modified stress tensor, where \(\nu\) is a bulk viscosity for the fluid-mineral mixture. The term \(g(\phi,\lambda){\bf v}\) is added to ensure \({\bf v}=0\) in the mineral [8]. The interpolation function \(g(\phi,\lambda)\) is a decreasing, surjective and twice differentiable function fulfilling \(g(1,\lambda)=0\) and \(g(0,\lambda)>0\). This way, (9b) reduces to \({\bf v}={\bf 0}\) when \(\phi=0\). We use \(g(\phi,\lambda)=\frac{K_{\bf v}}{\lambda}(1-\phi)\) for a constant \(K_{\bf v}\), but mention that other possibilities are discussed in [4, 8, 18]. Note that, as pointed out already in Section 2.3.2, we need to take care of which boundary conditions to use on \(\partial\Omega\) for the mass conservation equation (9a). Directly letting \(\phi{\bf v}\cdot{\bf n}=0\) on \(\partial\Omega\) could lead to unphysical effects as the mass concentration \(m_{\phi}\) can increase and decrease due to the chemical reaction (through changes in \(\phi\)). Hence, to let fluid escape or enter through \(\partial\Omega\) accordingly, we need \[\int_{\partial\Omega}m_{f}\phi{\bf v}\cdot{\bf n}ds=-\int_{\Omega}\partial_{t}m_{\phi}dV. \tag{10}\] This condition is analogous to (6). Note that if \(m_{f}=2m_{m}\), then \(m_{\phi}=m_{f}\) and hence constant, which causes the right-hand side of (10) to be zero. This corresponds to the case \(K_{m}=0\) as discussed in Section 2.3.2. ### Conserved quantities of the phase-field models We here briefly show why the phase field described by the standard Allen-Cahn equation (7) corresponds to a non-conserved order parameter. 
If we assume zero Neumann-conditions for the phase field on the external boundary \(\partial\Omega\), we have \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\phi dV=\int_{\Omega}\partial_{t}\phi dV=\int_{\Omega}\big(-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)+\gamma\nabla^{2}\phi-\frac{4}{\lambda}\phi(1-\phi)\frac{1}{m_{m}}f(T,c)\big)dV\] \[=\int_{\Omega}\big(-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)-\frac{4}{\lambda}\phi(1-\phi)\frac{1}{m_{m}}f(T,c)\big)dV+\int_{\partial\Omega}\gamma\nabla\phi\cdot{\bf n}ds\] \[=\int_{\Omega}\big(-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)-\frac{4}{\lambda}\phi(1-\phi)\frac{1}{m_{m}}f(T,c)\big)dV,\] which is non-zero also in the case of no chemical reactions. For the conservative Allen-Cahn (8), the phase field is conserved globally up to the chemical reaction, that is \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\phi dV=\int_{\Omega}\big(-f_{\mathrm{diff}}(\phi,T,c)\big)dV,\] hence we have \(\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\phi\,dV=0\) in the case of no chemical reactions. Note that since the integrated phase field gives the volume of the fluid part of the domain, this integral can be interpreted as the porosity of the system. This means that for the conservative Allen-Cahn equation (8) only the chemical reaction can affect the porosity. The scaling factor of \(f_{\mathrm{diff}}\), \(\frac{4}{\lambda}\phi(1-\phi)\), corresponds to the surface area of the fluid-mineral interface when integrated. For total mass \(m_{\phi}\), total amount of ions \(\phi c+(1-\phi)m_{m}\) and total energy \((c_{p}\rho)_{\phi}T\) it is straightforward to show that these quantities are conserved under zero inflow (\(\phi{\bf v}\cdot{\bf n}=0\)) and zero Neumann (\(\phi\nabla c\cdot{\bf n}=0\) and \(k_{\phi}\nabla T\cdot{\bf n}=0\)) boundary conditions on \(\partial\Omega\), also in the case of chemical reactions. This is independent of which phase field is used to model the diffuse transition zone. If investigating conservation of \(\phi c\), which corresponds to the amount of solute in the fluid, we get \[\frac{{\rm d}}{{\rm d}t}\int_{\Omega}\phi cdV=\int_{\Omega}\partial_{t}(\phi c)dV=m_{m}\int_{\Omega}\partial_{t}\phi dV,\] where the last integral is either connected to the reaction rate only, or to the reaction rate together with curvature effects (if using the original Allen-Cahn equation (7)). Hence, in the case of no chemical reaction, \(\phi c\) is only conserved if using the conservative Allen-Cahn equation (8). In the case of chemical reactions, the change in \(\phi c\) corresponds to the changes caused by the reaction rate. ## 4 Sharp-interface limits of the diffuse-interface models To justify that the considered phase-field model formulated in Section 3 is meaningful, we consider the limit as the width of the diffuse interface \(\lambda\) approaches zero to show that the model reduces to the expected sharp-interface model from Section 2. We follow the ideas of [10] and distinguish between the behavior far away from the interface and near it, and investigate the behavior as \(\lambda\to 0\). We apply so-called outer expansions for the behavior of the phase-field model far away from the interface, and it will be shown that these outer expansions will reduce to the model equations in the fluid and mineral domain as \(\lambda\to 0\). Behavior near the interface is described through so-called inner expansions of the unknowns and will be shown to capture the boundary conditions as \(\lambda\to 0\). 
The two expansions are connected through matching conditions in a transition region where both are valid. We first present the two expansions and their matching conditions, before considering the outer and inner expansions of the phase-field models.

### 4.1 Inner and outer expansions and their matching conditions

Away from the interface, we consider the outer expansions of \(\phi\), \(c\), \(p\), \({\bf v}\) and \(T\). For \(\phi\) this reads \[\phi(t,{\bf x})=\phi_{0}^{\rm out}(t,{\bf x})+\lambda\phi_{1}^{\rm out}(t,{\bf x})+\lambda^{2}\phi_{2}^{\rm out}(t,{\bf x})+\ldots \tag{11}\] and similarly for the other variables. Note that \(\mu_{f}\) is considered a (given) function of \(T\) and hence does not need an expansion. The inner expansions are valid near the interface, and it is hence convenient to make a change of coordinates into local, curvilinear coordinates. We let \(\Gamma_{\lambda}(t)\) denote the set of points \({\bf y}_{\lambda}\in\Omega\) such that \(\phi(t,{\bf y}_{\lambda})=1/2\). Note that the location of these points \({\bf y}_{\lambda}\) varies with time as the phase field evolves. Let \({\bf s}\) be an arc-length parametrization of the points on \(\Gamma_{\lambda}(t)\) (note that \({\bf s}\) is a scalar if \(\Omega\subset\mathbb{R}^{2}\) and consists of two coordinates if \(\Omega\subset\mathbb{R}^{3}\)) and \({\bf n}_{\lambda}\) the normal vector at \(\Gamma_{\lambda}(t)\) pointing into the mineral (that is, towards decreasing \(\phi\)). Further, we let \(r\) be the signed distance from a point \({\bf x}\) to the interface, being negative inside the fluid (\(\phi>0.5\)) and positive inside the mineral (\(\phi<0.5\)). This way we can let each point \({\bf x}\) be described through \[{\bf x}={\bf y}_{\lambda}(t,{\bf s})+r{\bf n}_{\lambda}(t,{\bf s}). \tag{12}\] Through this transform, \((r,{\bf s})\) makes up a new, orthogonal coordinate system. It is shown in [10] that \[|\nabla r|=1,\quad\nabla r\cdot\nabla s_{i}=0,\quad\partial_{t}r=-v_{n},\quad\nabla^{2}r=\frac{\kappa+2\Pi r}{1+\kappa r+\Pi r^{2}},\] where \(\kappa\) and \(\Pi\) are the mean and Gaussian curvature of the interface. We assume that \({\bf y}_{\lambda}\) and \({\bf n}_{\lambda}\) fulfill the expansions \({\bf y}_{\lambda}={\bf y}_{0}+O(\lambda)\) and \({\bf n}_{\lambda}={\bf n}_{0}+O(\lambda)\), where \({\bf y}_{0}\) is a point of the interface \(\Gamma_{0}^{\rm out}(t)\) defined through \(\phi_{0}^{\rm out}=1/2\) and \({\bf n}_{0}\) is the normal vector of \(\Gamma_{0}^{\rm out}(t)\) pointing into the mineral. Then, with \(z=r/\lambda\) we consider the inner expansions \[\phi(t,{\bf x})=\phi_{0}^{\rm in}(t,z,{\bf s})+\lambda\phi_{1}^{\rm in}(t,z,{\bf s})+\lambda^{2}\phi_{2}^{\rm in}(t,z,{\bf s})+\ldots, \tag{13}\] and similarly for the other unknowns. In the local, curvilinear coordinates and when accounting for the scaling of \(z\), we need to rewrite the derivatives.
For a generic variable \(u\) or \(\mathbf{u}\), we obtain [10] \[\partial_{t}u =-\lambda^{-1}v_{n,0}\partial_{z}u^{\mathrm{in}}+(\partial_{t}+\partial_{t}s\cdot\nabla_{\mathbf{s}})u^{\mathrm{in}}+O(\lambda), \tag{14a}\] \[\nabla_{x}u =\lambda^{-1}\partial_{z}u^{\mathrm{in}}\mathbf{n}_{0}+\nabla_{\mathbf{s}}u^{\mathrm{in}}+O(\lambda), \tag{14b}\] \[\nabla_{x}\cdot\mathbf{u} =\lambda^{-1}\partial_{z}\mathbf{u}^{\mathrm{in}}\cdot\mathbf{n}_{0}+\nabla_{\mathbf{s}}\cdot\mathbf{u}^{\mathrm{in}}+O(\lambda), \tag{14c}\] \[\nabla_{x}^{2}u =\lambda^{-2}\partial_{z}^{2}u+\lambda^{-1}\kappa_{0}\partial_{z}u+O(1), \tag{14d}\] where we have used that \(\nabla_{x}^{2}r=\kappa_{0}+O(\lambda)\), with \(\kappa_{0}\) the lowest order mean curvature, and \(v_{n}=v_{n,0}+O(\lambda)\). Here, \(\kappa_{0}\) and \(v_{n,0}\) are the curvature and normal velocity of the interface \(\Gamma_{0}^{\mathrm{out}}(t)\). In the last condition (14d) we have used the properties \(|\nabla r|=1\) and \(\nabla r\cdot\nabla s_{i}=0\). The outer (11) and inner (13) expansions are tied together via (12). For fixed \(t\) and \(\mathbf{s}\), we let \(\mathbf{y}_{1/2\pm}\) denote the limit \(r\to 0^{+}\) (i.e., from the mineral side) and \(r\to 0^{-}\) (i.e., from the fluid side) of \(\mathbf{x}\). We demand that the outer expansions, as they approach the interface, match the inner expansions as they leave the interface region. That is, the limit when \(r\to 0^{\pm}\) of the outer expansions needs to match the limit when \(z\to\pm\infty\) of the inner expansions. Using Taylor expansions, one can show that the following matching conditions \[\lim_{z\to\pm\infty}\phi_{0}^{\mathrm{in}}(t,z,\mathbf{s}) =\phi_{0}^{\mathrm{out}}(t,\mathbf{y}_{1/2\pm}), \tag{15a}\] \[\lim_{z\to\pm\infty}\partial_{z}\phi_{0}^{\mathrm{in}}(t,z,\mathbf{s}) =0, \tag{15b}\] \[\lim_{z\to\pm\infty}\partial_{z}\phi_{1}^{\mathrm{in}}(t,z,\mathbf{s}) =\nabla\phi_{0}^{\mathrm{out}}(t,\mathbf{y}_{1/2\pm})\cdot\mathbf{n}_{0}, \tag{15c}\] are fulfilled, and similarly for the other variables [10].

### 4.2 Outer expansions

We first apply the outer expansions (11) to the phase-field equations (7) and (8). Inserting the outer expansions in (7), (8), the dominating term when \(\lambda\to 0\) will in both cases be \[P^{\prime}(\phi_{0}^{\mathrm{out}})=0.\] Recalling that \(P(\phi)=8\phi^{2}(1-\phi)^{2}\), the above equation has the three solutions \(\phi_{0}^{\mathrm{out}}=0\), \(\phi_{0}^{\mathrm{out}}=0.5\) and \(\phi_{0}^{\mathrm{out}}=1\). Only \(\phi_{0}^{\mathrm{out}}=0\) and \(\phi_{0}^{\mathrm{out}}=1\) give stable solutions. We let \(\Omega_{m,0}(t)\) and \(\Omega_{f,0}(t)\) be the collection of points \(\mathbf{x}\in\Omega\) where \(\phi_{0}^{\mathrm{out}}=0\) and \(\phi_{0}^{\mathrm{out}}=1\), respectively. For the remaining phase-field model equations (9), we consider the two limit cases of \(\phi_{0}^{\mathrm{out}}=0\) and \(\phi_{0}^{\mathrm{out}}=1\) separately. When \(\phi_{0}^{\mathrm{out}}=0\) (that is, in \(\Omega_{m,0}(t)\)), the dominating terms of the outer expansions (11) in (9) give that \[\partial_{t}m_{m} =0, \tag{16a}\] \[\mathbf{v}_{0}^{\mathrm{out}} =\mathbf{0}, \tag{16b}\] \[\partial_{t}m_{m} =0, \tag{16c}\] \[\partial_{t}(c_{p,m}\rho_{m}T_{0}^{\mathrm{out}}) =k_{m}\nabla^{2}T_{0}^{\mathrm{out}}. \tag{16d}\] The first and third equations simply tell us that the mineral molar density is constant, which was also a model assumption, cf. Section 2.2. The equations (16) correspond to the ones formulated in (2) for the mineral domain.
When \(\phi_{0}^{\mathrm{out}}=1\) (that is, in \(\Omega_{f,0}(t)\)), the dominating terms of the outer expansions (11) in (9) give that \[\partial_{t}m_{f}+\nabla\cdot(m_{f}\mathbf{v}_{0}^{\mathrm{out}}) =0, \tag{17a}\] \[\partial_{t}(\rho_{f}\mathbf{v}_{0}^{\mathrm{out}})+\nabla\cdot(\rho_{f}\mathbf{v}_{0}^{\mathrm{out}}\otimes\mathbf{v}_{0}^{\mathrm{out}}) =-\nabla p_{0}^{\mathrm{out}}+\nabla\cdot\boldsymbol{\sigma}_{0}+\rho_{f}\mathbf{g}, \tag{17b}\] \[\partial_{t}c_{0}^{\mathrm{out}}+\nabla\cdot(c_{0}^{\mathrm{out}}\mathbf{v}_{0}^{\mathrm{out}}) =D\nabla^{2}c_{0}^{\mathrm{out}}, \tag{17c}\] \[\partial_{t}(c_{p,f}\rho_{f}T_{0}^{\mathrm{out}})+\nabla\cdot(c_{p,f}\rho_{f}T_{0}^{\mathrm{out}}\mathbf{v}_{0}^{\mathrm{out}}) =k_{f}\nabla^{2}T_{0}^{\mathrm{out}}, \tag{17d}\] where \(\boldsymbol{\sigma}_{0}=\nu(\nabla\cdot\mathbf{v}_{0}^{\mathrm{out}})\mathbf{I}+\mu(T_{0}^{\mathrm{out}})(\nabla\mathbf{v}_{0}^{\mathrm{out}}+(\nabla\mathbf{v}_{0}^{\mathrm{out}})^{T})\). Since \(m_{f}\) is constant, (17a) reduces to \(\nabla\cdot\mathbf{v}_{0}^{\mathrm{out}}=0\), which also means that we can simplify the stress tensor to \(\boldsymbol{\sigma}_{0}=\mu(T_{0}^{\mathrm{out}})\nabla\mathbf{v}_{0}^{\mathrm{out}}\). Note that we have here assumed that \(\mu(T)\) is Lipschitz-continuous. The equations (17) hence correspond to the ones formulated in (1) for the fluid domain.

### 4.3 Inner expansions

We now apply the inner expansions (13) to the model equations (7), (8) and (9) to recover the boundary conditions at the evolving interface.

#### 4.3.1 Phase-field equations

For the original Allen-Cahn equation (7), the dominating \(O(\lambda^{-2})\) terms when inserting the inner expansions (13) (and keeping in mind the rescaled derivatives (14)) are \[P^{\prime}(\phi_{0}^{\text{in}})=\partial_{z}^{2}\phi_{0}^{\text{in}}. \tag{18}\] From the first matching condition (15a) we have that \(\lim_{z\to\infty}\phi_{0}^{\text{in}}=0\) and \(\lim_{z\to-\infty}\phi_{0}^{\text{in}}=1\). By multiplying (18) by \(\partial_{z}\phi_{0}^{\text{in}}\), integrating twice with respect to \(z\) and using the matching conditions and that \(\phi_{0}^{\text{in}}(z=0)=0.5\), we get that \[\phi_{0}^{\text{in}}(t,z,\mathbf{s})=\phi_{0}^{\text{in}}(z)=\frac{1}{1+e^{4z}}=\frac{1}{2}\big(1-\tanh(2z)\big) \tag{19}\] fulfills the equation.
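A short direct check, which is not part of the expansion argument itself but follows immediately from the chosen double-well potential \(P(\phi)=8\phi^{2}(1-\phi)^{2}\) with \(P^{\prime}(\phi)=16\phi(1-\phi)(1-2\phi)\), confirms that this profile indeed solves (18): \[\partial_{z}\phi_{0}^{\text{in}}=-\frac{4e^{4z}}{(1+e^{4z})^{2}}=-4\phi_{0}^{\text{in}}(1-\phi_{0}^{\text{in}}),\qquad\partial_{z}^{2}\phi_{0}^{\text{in}}=-4(1-2\phi_{0}^{\text{in}})\partial_{z}\phi_{0}^{\text{in}}=16\phi_{0}^{\text{in}}(1-\phi_{0}^{\text{in}})(1-2\phi_{0}^{\text{in}})=P^{\prime}(\phi_{0}^{\text{in}}),\] and \(\phi_{0}^{\text{in}}(0)=1/2\) with the correct limits \(0\) and \(1\) as \(z\to+\infty\) and \(z\to-\infty\), respectively.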
For the \(O(\lambda^{-1})\) terms one obtains \[\gamma\big(P^{\prime\prime}(\phi_{0}^{\text{in}})-\partial_{z}^{2}\big)\phi_{1}^{\text{in}}=(v_{n,0}+\gamma\kappa_{0})\partial_{z}\phi_{0}^{\text{in}}-4\phi_{0}^{\text{in}}(1-\phi_{0}^{\text{in}})\frac{1}{m_{m}}f(T_{0}^{\text{in}},c_{0}^{\text{in}}).\] We follow the same steps as in [35] and view the left-hand side as an operator \(\mathcal{F}\) depending on \(\phi_{0}^{\text{in}}\) and applied to \(\phi_{1}^{\text{in}}\). The operator \(\mathcal{F}\) is a Fredholm operator of index zero, hence the above equation has a solution if and only if the right-hand side, denoted by \(A(\phi_{0}^{\text{in}})\), is orthogonal to the kernel of \(\mathcal{F}\). It follows from (18) that \(\partial_{z}\phi_{0}^{\text{in}}\) is in the kernel of \(\mathcal{F}\). Since \(v_{n,0}\), \(\kappa_{0}\), \(T_{0}^{\text{in}}\) and \(c_{0}^{\text{in}}\) are independent of \(z\) (the two latter will be shown in the following subsections), the solvability condition implies \[0=\int_{-\infty}^{\infty}A(\phi_{0}^{\text{in}})\partial_{z}\phi_{0}^{\text{in}}dz=(v_{n,0}+\gamma\kappa_{0})\int_{-\infty}^{\infty}(\partial_{z}\phi_{0}^{\text{in}})^{2}dz-4\frac{1}{m_{m}}f(T_{0}^{\text{in}},c_{0}^{\text{in}})\int_{-\infty}^{\infty}\phi_{0}^{\text{in}}(1-\phi_{0}^{\text{in}})\partial_{z}\phi_{0}^{\text{in}}dz=\frac{2}{3}\big(v_{n,0}+\gamma\kappa_{0}+\frac{1}{m_{m}}f(T_{0}^{\text{in}},c_{0}^{\text{in}})\big).\] We apply the matching condition (15a) for \(T\) and \(c\) at the moving interface and obtain \[v_{n,0}=-\gamma\kappa_{0}-\frac{1}{m_{m}}f(T_{0}^{\text{out}}(t,\mathbf{y}_{1/2-}),c_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})),\] which corresponds to the boundary condition (4) at the moving interface, with an additional curvature-driven motion, which is responsible for the Allen-Cahn equation not being conservative. Since the Allen-Cahn equation is derived from curvature-driven flow, it is expected that this term appears [1]. Although the curvature effects would be zero if \(\gamma=0\), this is not a valid choice for the Allen-Cahn equation (7). We now address the sharp-interface limit of the conservative Allen-Cahn equation (8). We follow similar steps as in [13], where the sharp-interface limit for a conservative Allen-Cahn equation without chemical reactions was investigated. The equation (8) is first split in two equations: \[\partial_{t}\phi=-\frac{1}{\lambda^{2}}\gamma P^{\prime}(\phi)+\gamma\nabla^{2}\phi-f_{\text{diff}}(\phi,T,c)+\frac{1}{\lambda}\gamma\omega(t), \tag{20a}\] \[\omega(t)=\frac{1}{\lambda}\frac{1}{|\Omega|}\int_{\Omega}P^{\prime}(\phi)dV. \tag{20b}\] The average of \(P^{\prime}(\phi)\) appearing in (20b) is small and will decrease as \(\lambda\) decreases. It is shown in [13] that \(\omega(t)=O(\lambda^{0})\) as \(\lambda\to 0\). The inner expansions (13) of (20a) are dominated by the terms \[P^{\prime}(\phi_{0}^{\text{in}})=\partial_{z}^{2}\phi_{0}^{\text{in}},\] which has solution (19). The next order terms are affected by the newly introduced term: \[\gamma\big(P^{\prime\prime}(\phi_{0}^{\rm in})-\partial_{z}^{2}\big)\phi_{1}^{\rm in}=(v_{n,0}+\gamma\kappa_{0})\partial_{z}\phi_{0}^{\rm in}-4\phi_{0}^{\rm in}(1-\phi_{0}^{\rm in})\frac{1}{m_{m}}f(T_{0}^{\rm in},c_{0}^{\rm in})+\gamma\omega_{0}^{\rm in},\] where \(\omega_{0}^{\rm in}=\frac{1}{\lambda}\frac{1}{|\Omega|}\int_{\Omega}P^{\prime}(\phi_{0}^{\rm in})dV\). As with the original Allen-Cahn equation we can use a solvability argument to proceed. The above equation only has a solution when \[\int_{-\infty}^{\infty}\Big((v_{n,0}+\gamma\kappa_{0})\partial_{z}\phi_{0}^{\rm in}-4\phi_{0}^{\rm in}(1-\phi_{0}^{\rm in})\frac{1}{m_{m}}f(T_{0}^{\rm in},c_{0}^{\rm in})+\gamma\omega_{0}^{\rm in}\Big)\partial_{z}\phi_{0}^{\rm in}dz=0.\] Following the same argumentation as with the original Allen-Cahn equation, and since \(\omega_{0}^{\rm in}\) is independent of \(z\) by definition, we get \[v_{n,0}=-\gamma\kappa_{0}-\frac{1}{m_{m}}f(T_{0}^{\rm in},c_{0}^{\rm in})+3\gamma\omega_{0}^{\rm in}.\] Finally, [13] shows that the condition \(\int_{\Omega}\partial_{t}\phi_{0}^{\rm in}dV=0\) is equivalent to \[\omega_{0}^{\rm in}=\frac{1}{3}\frac{1}{|\Gamma(t)|}\int_{\Gamma(t)}\kappa_{0}ds=\frac{1}{3}\overline{\kappa}_{0},\] where \(\overline{\kappa}_{0}\) is the average curvature along \(\Gamma(t)\).
We hence arrive at \[v_{n,0}=-\gamma(\kappa_{0}-\overline{\kappa}_{0})-\frac{1}{m_{m}}f(T_{0}^{\rm out}(t,{\bf y}_{1/2-}),c_{0}^{\rm out}(t,{\bf y}_{1/2-}))\quad\mbox{on $\Gamma(t)$}.\] This means that the interface velocity is still driven by both the chemical reaction and by curvature, but where the curvature-driven movement fulfills conservation of the phase-field parameter. The curvature-driven motion redistributes the mineral towards constant curvature, that is, towards bubbles [38]. Hence, the fluid-solid interface moves due to the chemical reaction, and additionally in a conservative manner by redistributing mineral towards constant curvature.

#### 4.3.2 Mass conservation equation

Inserting the inner expansions (13) into the mass conservation equation (9a) gives the dominating \(O(\lambda^{-1})\) term \[-v_{n,0}\partial_{z}m_{\phi}+\partial_{z}(m_{f}\phi_{0}^{\rm in}{\bf v}_{0}^{\rm in})\cdot{\bf n}_{0}=0.\] Integrating with respect to \(z\) from \(-\infty\) to \(+\infty\) and using the matching conditions (15a) gives \[-m_{f}{\bf v}_{0}^{\rm in}(t,{\bf y}_{1/2-})\cdot{\bf n}_{0}=v_{n,0}(2m_{m}-m_{f}). \tag{21}\] This boundary condition corresponds to (3a).

#### 4.3.3 Momentum conservation equation

The dominating \(O(\lambda^{-2})\) terms arising from inserting the inner expansions (13) into (9b) are \[\mu\partial_{z}^{2}(\phi_{0}^{\rm in}{\bf v}_{0}^{\rm in})+(\nu+\mu)\partial_{z}^{2}(\phi_{0}^{\rm in}{\bf v}_{0}^{\rm in}\cdot{\bf n}_{0}){\bf n}_{0}=0.\] Note that \(\mu\) and \(\nu\) can be functions of \(T_{0}^{\rm in}\), which is still independent of \(z\). We integrate twice with respect to \(z\), apply the matching conditions (15a) and (15b) and hence arrive at \[\mu{\bf v}_{0}^{\rm in}(t,{\bf y}_{1/2-})=-(\nu+\mu)({\bf v}_{0}^{\rm in}(t,{\bf y}_{1/2-})\cdot{\bf n}_{0}){\bf n}_{0}.\] Here, the temperature \(T_{0}^{\rm in}\) inside \(\mu\) and \(\nu\) is also to be evaluated at \({\bf y}_{1/2-}\). Note that the current equation already gives us a no-slip condition: the velocity has only a normal component at the boundary, and hence no tangential component. Combining with (21) we can write this condition as \[{\bf v}_{0}^{\rm in}(t,{\bf y}_{1/2-})=-\frac{\nu+\mu}{\mu}\frac{m_{f}-2m_{m}}{m_{f}}v_{n,0}{\bf n}_{0}.\] However, the current condition is only consistent with (21) when \(\nu=-2\mu\). In this case, \[\mathbf{v}_{0}^{\text{in}}(t,\mathbf{y}_{1/2-})=\frac{m_{f}-2m_{m}}{m_{f}}v_{n,0}\mathbf{n}_{0},\] which then corresponds to (3b). Hence, this relation between the two viscosities \(\nu\) and \(\mu\) gives consistency and the expected boundary condition in the sharp-interface limit. The second viscosity \(\nu\) was here introduced since the fluid-mineral mixture is quasi-incompressible. A second viscosity is usually connected to how a fluid reacts to an expansion/contraction and is here chosen in relation to \(\mu\) to ensure that the chosen sharp-interface boundary condition is obtained. **Remark 2**. _In case \(m_{f}=2m_{m}\), we have no expansion/contraction due to the mineral precipitation and dissolution. In this case \(m_{\phi}=m_{f}\) and (9a) would have expressed \(\nabla\cdot(\phi\mathbf{v})=0\) as \(m_{f}\) is constant. Then, the second viscosity \(\nu\) does not have to be introduced as (9b) could have used a simpler stress tensor.
The sharp-interface limit in this case gives directly \(\mathbf{v}_{0}^{\text{in}}(t,\mathbf{y}_{1/2-})=\mathbf{0}\)._

#### 4.3.4 Solute conservation equation

Inserting the inner expansions (13) into the solute conservation equation (9c), the dominating \(O(\lambda^{-2})\) term is \[\partial_{z}(\phi_{0}^{\text{in}}\partial_{z}c_{0}^{\text{in}})=0.\] Integrating with respect to \(z\) and using the matching condition (15b) together with the fact that \(\phi_{0}^{\text{in}}>0\), we get that \[\partial_{z}c_{0}^{\text{in}}=0.\] Hence, \(c_{0}^{\text{in}}\) is independent of \(z\). For the \(O(\lambda^{-1})\) terms we obtain \[-v_{n,0}\partial_{z}\big(\phi_{0}^{\text{in}}c_{0}^{\text{in}}+(1-\phi_{0}^{\text{in}})m_{m}\big)+\partial_{z}(c_{0}^{\text{in}}\phi_{0}^{\text{in}}\mathbf{v}_{0}^{\text{in}})\cdot\mathbf{n}_{0}=D\partial_{z}(\phi_{0}^{\text{in}}\partial_{z}c_{1}^{\text{in}}).\] Integrating with respect to \(z\) from \(-\infty\) to \(+\infty\) and using the matching conditions (15) we get \[v_{n,0}\big(m_{m}-c_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\big)=\big(-c_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\mathbf{v}_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})+D\nabla c_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\big)\cdot\mathbf{n}_{0},\] which corresponds to (3c).

#### 4.3.5 Energy conservation equation

By inserting the inner expansions (13) into (9d), the dominating \(O(\lambda^{-2})\) term is \[\partial_{z}^{2}T_{0}^{\text{in}}=0.\] When integrating this equation with respect to \(z\) and using the matching condition (15b), it becomes clear that \[\partial_{z}T_{0}^{\text{in}}=0;\] that is, \(T_{0}^{\text{in}}\) is independent of \(z\). The \(O(\lambda^{-1})\) terms are \[-v_{n,0}\partial_{z}((c_{p}\rho)_{\phi^{0}}T_{0}^{\text{in}})+\partial_{z}(c_{p,f}\rho_{f}T_{0}^{\text{in}}\phi_{0}^{\text{in}}\mathbf{v}_{0}^{\text{in}})\cdot\mathbf{n}_{0}=\partial_{z}(k_{\phi^{0}}\partial_{z}T_{1}^{\text{in}}),\] where we use the notation \((c_{p}\rho)_{\phi^{0}}=\phi_{0}^{\text{in}}c_{p,f}\rho_{f}+(1-\phi_{0}^{\text{in}})c_{p,m}\rho_{m}\) and similarly for \(k_{\phi^{0}}\). We integrate with respect to \(z\) from \(-\infty\) to \(+\infty\), taking advantage of the fact that \(T_{0}^{\text{in}}\) is independent of \(z\). After applying the matching conditions (15) we get that \[v_{n,0}\big(c_{p,m}\rho_{m}T_{0}^{\text{out}}(t,\mathbf{y}_{1/2+})-c_{p,f}\rho_{f}T_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\big)=\big(-k_{m}\nabla T_{0}^{\text{out}}(t,\mathbf{y}_{1/2+})-c_{p,f}\rho_{f}T_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\mathbf{v}_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})+k_{f}\nabla T_{0}^{\text{out}}(t,\mathbf{y}_{1/2-})\big)\cdot\mathbf{n}_{0},\] which corresponds to (3d) when associating the temperature at the positive (mineral) side with \(T_{m}\) and the temperature at the negative (fluid) side with \(T_{f}\).

## 5 Numerical discretization of the phase-field model

We here numerically discretize the resulting phase-field model. Special attention is given to the conservative Allen-Cahn equation to ensure that the numerical discretization honors the conservative property. We apply Finite Volume (FV) discretization in space, which is outlined in Section 5.1, and it is shown under which conditions the conservative Allen-Cahn equation (8) is discretely conservative in Section 5.2. The conservative Allen-Cahn equation is non-linear and challenging to solve with Newton iterations due to the non-local term, and we formulate L-scheme iterations which converge to a conservative solution in Section 5.3.
The conservative Allen-Cahn equation is coupled to the other model equations in an iterative manner, which in a simplified setup is proven to converge in Section 5.4.

### 5.1 Finite Volume discretization

For numerical discretization, we apply a cell-centered FV scheme on an admissible mesh \(\mathcal{E}\)[15], where the phase-field variable (and similarly for temperature and solute) is approximated in the cell-center and we calculate fluxes across cell edges. The fluxes are for simplicity two-point approximations, but extensions using multi-point flux approximations are possible. By integrating the conservative Allen-Cahn equation (8) across one element and using Gauss' theorem on the diffusive term, the discrete FV approximation of (8) reads \[\frac{\phi_{K}^{n+1}-\phi_{K}^{n}}{\Delta t}+\frac{\gamma}{\lambda^{2}}P^{\prime}(\phi_{K}^{\ell})=\frac{\gamma}{|K|}\sum_{L\in\mathcal{N}(K)}|\sigma_{K,L}|F_{K,L}^{\phi,\ell}-\frac{4}{\lambda}\phi_{K}^{\ell}(1-\phi_{K}^{\ell})\frac{f(T_{K}^{\ell},c_{K}^{\ell})}{m_{m}}+\frac{\gamma}{\lambda^{2}}\frac{1}{|\Omega|}\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell}), \tag{22}\] where \(|K|\) is the measure of element \(K\). We have here also discretized in time, and let the superscripts \(n\) and \(n+1\) denote the time step. The notation \(\phi_{K}\) refers to the cell-centered value of \(\phi\) in \(K\) and is assumed to approximate the value of \(\phi\) in the entire element \(K\). Further, \(\mathcal{N}(K)\) refers to the neighboring elements of \(K\) and \(|\sigma_{K,L}|\) is the measure of the edge \(\sigma_{K,L}\) between element \(K\) and the neighboring element \(L\). The integral \(\int_{\Omega}P^{\prime}(\phi)dV\) has been approximated by the sum \(\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell})\). The superscript \(\ell\) denotes the time step at which the non-linear terms are evaluated, and is \(n\) or \(n+1\) depending on whether forward or backward Euler is applied, respectively. The possible choices for time stepping will be addressed later. The fluxes \(F_{K,L}^{\phi,\ell}\) approximate the diffusive flux \(\nabla\phi\cdot\mathbf{n}\) across the edge between elements \(K\) and \(L\) and are given by \[F_{K,L}^{\phi,\ell}=\frac{\phi_{L}^{\ell}-\phi_{K}^{\ell}}{d_{K,L}},\] where \(d_{K,L}\) is the Euclidean distance between the center points \(x_{K}\in K\) and \(x_{L}\in L\). We have \(F_{K,L}^{\phi,\ell}=-F_{L,K}^{\phi,\ell}\) on interior edges, which ensures conservation between elements.

### 5.2 Discrete conservation of the conservative Allen-Cahn equation

We consider the global conservation of (22) up to the chemical reaction by summing up (22) for all \(K\in\mathcal{E}\), hence across the entire domain \(\Omega\). Summing up over all \(K\in\mathcal{E}\) and using zero Neumann boundary conditions for \(\phi\) and that \(F_{K,L}^{\phi,\ell}=-F_{L,K}^{\phi,\ell}\), we obtain \[\sum_{K\in\mathcal{E}}\phi_{K}^{n+1}|K|=\sum_{K\in\mathcal{E}}\phi_{K}^{n}|K|-\Delta t\sum_{K\in\mathcal{E}}\frac{4}{\lambda}\phi_{K}^{\ell}(1-\phi_{K}^{\ell})\frac{f(T_{K}^{\ell},c_{K}^{\ell})}{m_{m}}|K|-\Delta t\frac{\gamma}{\lambda^{2}}\sum_{K\in\mathcal{E}}\left(|K|P^{\prime}(\phi_{K}^{\ell})-\frac{|K|}{|\Omega|}\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell})\right).\] The summation \(\sum_{K\in\mathcal{E}}\left(|K|P^{\prime}(\phi_{K}^{\ell})-\frac{|K|}{|\Omega|}\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell})\right)\) adds up to zero under certain restrictions on the time evaluation.
Denoting the sum \(\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell}):=\Pi\), we have that the summation can be rewritten to \[\sum_{K\in\mathcal{E}}\left(|K|P^{\prime}(\phi_{K}^{\ell})-\frac{|K|}{|\Omega|}\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell})\right)=\Pi-\sum_{K\in\mathcal{E}}\frac{|K|}{|\Omega|}\Pi.\] As \(|\Omega|\) and \(\Pi\) are independent of the element, the terms sum up to zero since \(\sum_{K\in\mathcal{E}}|K|=|\Omega|\) for a conforming mesh. This remains true as long as the same choice of \(\ell\) is taken for the evaluation of \(P^{\prime}(\phi_{K}^{\ell})\) in both appearances for corresponding elements. This means that both forward Euler and backward Euler will be discretely conservative, as also shown in [6]. However, as also noted in [6], an explicit discretization of the non-linear terms is unstable, while the implicit choice is slow to solve with Newton iterations as the Jacobian is full. In the following we suggest a strategy to overcome these difficulties, while still assuring the discrete conservation of (8).

### 5.3 L-scheme iterations of a semi-implicit scheme

We here consider a particular semi-implicit version of (8) and an alternative non-linear solver for the resulting system of equations. We will consider the L-scheme [33, 27], which reformulates the discrete equation into iterations that form a contraction. To simplify the notation, we consider only the time-discrete equation. Initially we let the scheme be implicit, hence \(\ell=n+1\), and we address how to solve \[\frac{1}{\Delta t}(\phi^{n+1}-\phi^{n})+\frac{\gamma}{\lambda^{2}}P^{\prime}(\phi^{n+1})=\gamma\nabla^{2}\phi^{n+1}-\frac{4}{\lambda}\phi^{n+1}(1-\phi^{n+1})\frac{f(T,c)}{m_{m}}+\frac{\gamma}{\lambda^{2}}\frac{1}{|\Omega|}\int_{\Omega}P^{\prime}(\phi^{n+1})dV. \tag{23}\] As we only address the non-linearity with respect to the phase field, we assume the temperature and concentration in the reactive term to be given. Note that the L-scheme approach is of course also applicable to solve the non-linearities in (7), which has been done in [3], but we here focus on how to apply it to (8), and show how the L-scheme iterations can be assured to converge to the conservative solution.

#### 5.3.1 Convergence of the L-scheme

The L-scheme adds an extra stabilization term to create a contraction, which converges by the Banach fixed point theorem. We need the non-linearity in the resulting discrete equation to be monotonically increasing or decreasing. However, the non-linearity in (23), \[G(\phi,T,c)=-\frac{\gamma}{\lambda^{2}}P^{\prime}(\phi)-\frac{4}{\lambda}\phi(1-\phi)\frac{f(T,c)}{m_{m}}+\frac{\gamma}{\lambda^{2}}\frac{1}{|\Omega|}\int_{\Omega}P^{\prime}(\phi)dV, \tag{24}\] is neither monotonically increasing nor monotonically decreasing with respect to \(\phi\). Therefore we split the non-linearity \(G(\phi,T,c)\) into an increasing and a decreasing part, and solve the increasing part explicitly and the decreasing part implicitly. Note that we consider \(T,c\) known and fixed in the following. Hence, we define \[G_{-}(\phi,T,c)=\int_{0}^{\phi}\min\{0,\partial_{1}G(\psi,T,c)\}d\psi,\qquad G_{+}(\phi,T,c)=\int_{0}^{\phi}\max\{0,\partial_{1}G(\psi,T,c)\}d\psi,\] and consider the time discretization \[\frac{1}{\Delta t}(\phi^{n+1}-\phi^{n})=\gamma\nabla^{2}\phi^{n+1}+G_{-}(\phi^{n+1},T,c)+G_{+}(\phi^{n},T,c). \tag{25}\] The L-scheme iterations solve the non-linear equation (25) iteratively through adding a stabilizing term.
By letting \(j\) be the iteration index and setting \(\phi^{n+1,0}:=\phi^{n}\), we solve \[\frac{1}{\Delta t}(\phi^{n+1,j+1}-\phi^{n})=\gamma\nabla^{2}\phi^{n+1,j+1}+G_{-}(\phi^{n+1,j},T,c)+G_{+}(\phi^{n},T,c)-\mathcal{L}(\phi^{n+1,j+1}-\phi^{n+1,j}), \tag{26}\] where \(\mathcal{L}\in\mathbb{R}_{+}\). **Theorem 1**. _The L-scheme iterations (26) form a contraction and are guaranteed to converge independently of the initial guess and time-step size when \(\mathcal{L}\geq M_{G}=\max_{0\leq\xi\leq 1}(-\partial_{1}G_{-}(\xi,T,c))\)._ Proof. To show when the iterations (26) converge, we follow similar steps as [24, Lemma 4.1] and show the main steps in the following. We first define the error \(e^{j+1}:=\phi^{n+1,j+1}-\phi^{n+1}\). By subtracting (25) from (26), we get \[e^{j+1}(1+\Delta t\mathcal{L})-\gamma\Delta t\nabla^{2}e^{j+1}=\Delta t(G_{-}(\phi^{n+1,j},T,c)-G_{-}(\phi^{n+1},T,c))+\Delta t\mathcal{L}e^{j}. \tag{27}\] By the mean value theorem we have that \(G_{-}(\phi^{n+1,j},T,c)-G_{-}(\phi^{n+1},T,c)=\partial_{1}G_{-}(\xi,T,c)e^{j}\), for some number \(\xi\) between \(\phi^{n+1,j}\) and \(\phi^{n+1}\). We define \[M_{G}=\max_{0\leq\xi\leq 1}(-\partial_{1}G_{-}(\xi,T,c)). \tag{28}\] Since \(G_{-}(\phi,T,c)\) is monotonically decreasing with respect to \(\phi\), \(\partial_{1}G_{-}(\phi,T,c)\) is either negative or zero; hence, \(M_{G}\) is a positive number. We multiply (27) with \(e^{j+1}\) and integrate over the domain \(\Omega\). Using our knowledge about \(G_{-}(\phi,T,c)\), and letting \(\mathcal{L}\geq M_{G}\), this can be written as \[\|e^{j+1}\|^{2}(1+\Delta t\mathcal{L})+\gamma\Delta t\|\nabla e^{j+1}\|^{2}\leq\Delta t\mathcal{L}\|e^{j}\|\|e^{j+1}\|,\] where we in the second term have used integration by parts and on the right-hand side applied the Cauchy-Schwarz inequality as well as that \(0\leq\partial_{1}G_{-}(\xi,T,c)+\mathcal{L}\leq\mathcal{L}\). The norms are \(L^{2}\)-norms. The right-hand side can be further rewritten using Young's inequality, hence \[\|e^{j+1}\|^{2}(1+\Delta t\mathcal{L})+\gamma\Delta t\|\nabla e^{j+1}\|^{2}\leq\frac{1}{2}\Delta t\mathcal{L}(\|e^{j}\|^{2}+\|e^{j+1}\|^{2}).\] By collecting all terms with \(e^{j+1}\) on the left-hand side, we arrive at \[\|e^{j+1}\|^{2}+\frac{2\gamma\Delta t}{2+\Delta t\mathcal{L}}\|\nabla e^{j+1}\|^{2}\leq\frac{\Delta t\mathcal{L}}{2+\Delta t\mathcal{L}}\|e^{j}\|^{2}.\] From this we can conclude that the L-scheme (26) is a contraction and hence converges by the Banach fixed point theorem when \[\mathcal{L}\geq M_{G}. \tag{29}\] Note that (29) is not a strict limit, and convergence can also be achieved under milder restrictions on \(\mathcal{L}\), but will then come with a restriction on \(\Delta t\). This has been exemplified for e.g. the Richards equation [27]. However, under the above constraints convergence is guaranteed, for any starting point. Note that the factor \[L=\frac{\Delta t\mathcal{L}}{2+\Delta t\mathcal{L}} \tag{30}\] can estimate how fast the convergence is. A smaller value of \(L\) (closer to \(0\)) means that convergence is expected to be faster. Hence, when \(\mathcal{L}\) and/or \(\Delta t\) grows, convergence is expected to be slower as \(L\) approaches \(1\). This is in contrast to non-linear equations where the non-linearity occurs in the time derivative. In this case, the L-scheme iterations are expected to be faster for larger time-step sizes \(\Delta t\)[20, Remark 4].
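As a rough illustration of how (30) can be used a priori, the following small Python sketch estimates the number of L-scheme iterations needed to reduce the error norm by a prescribed factor, assuming the contraction bound is attained at every iteration (in practice it is only an upper bound, so the numbers are indicative). The value of the stabilization parameter used below is a hypothetical placeholder.

```python
import math

def contraction_factor(dt, L_stab):
    # factor L from (30): squared errors satisfy ||e^{j+1}||^2 <= L * ||e^j||^2
    return dt * L_stab / (2.0 + dt * L_stab)

def estimated_iterations(dt, L_stab, reduction=1e-10):
    # smallest j with L**(j/2) <= reduction, i.e. the error norm reduced by 'reduction'
    L = contraction_factor(dt, L_stab)
    return math.ceil(2.0 * math.log(reduction) / math.log(L))

# Hypothetical stabilization parameter (not the value used in Section 6):
for dt in (1e-4, 1e-3, 1e-2):
    print(f"dt = {dt:g}: L = {contraction_factor(dt, 1e3):.3f}, "
          f"estimated iterations = {estimated_iterations(dt, 1e3)}")
```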
In practice, the L-scheme iterations are performed until \[\|\phi^{n+1,j+1}-\phi^{n+1,j}\|\leq\mathrm{tol}_{\mathcal{L}}, \tag{31}\] where \(\mathrm{tol}_{\mathcal{L}}\) is a prescribed tolerance value. The newest \(\phi^{n+1,j+1}\) is taken as the solution \(\phi^{n+1}\) and the simulation can proceed.

#### 5.3.2 Determining \(M_{G}\) and \(G_{-}(\phi)\)

A relevant question is of course how to determine \(M_{G}\) and \(G_{-}(\phi)\) to perform the L-scheme iterations. As only an upper bound for \(M_{G}\) is needed, we start with this. We search for \(M_{G}\) such that \[M_{G}=\max_{0\leq\phi\leq 1}|\partial_{1}G(\phi,T,c)|.\] This way, \(M_{G}\) is potentially overestimated as the positive values of \(\partial_{1}G(\phi,T,c)\) are also included, but this \(M_{G}\) will clearly also fulfill (28). Differentiating \(G(\phi,T,c)\) with respect to \(\phi\) gives \[\partial_{1}G(\phi,T,c)=\frac{\gamma}{\lambda^{2}}16\Big(-(1-6\phi+6\phi^{2})+\frac{1}{|\Omega|}\int_{\Omega}(1-6\phi+6\phi^{2})dV\Big)-\frac{4}{\lambda}\frac{1}{m_{m}}f(T,c)(1-2\phi).\] Since the average of \(1\) is \(1\), this can be further simplified to \[\partial_{1}G(\phi,T,c)=\frac{96\gamma}{\lambda^{2}}\Big(\phi(1-\phi)-\frac{1}{|\Omega|}\int_{\Omega}\phi(1-\phi)dV\Big)-\frac{4}{\lambda}\frac{1}{m_{m}}f(T,c)(1-2\phi). \tag{32}\] We then take the maximum over \(0\leq\phi\leq 1\): \[M_{G}=\max_{0\leq\phi\leq 1}|\partial_{1}G(\phi,T,c)|\leq\max_{0\leq\phi\leq 1}\frac{96\gamma}{\lambda^{2}}\Big|\phi(1-\phi)-\frac{1}{|\Omega|}\int_{\Omega}\phi(1-\phi)dV\Big|+\frac{4}{\lambda}\frac{1}{m_{m}}|f(T,c)|\leq\frac{24\gamma}{\lambda^{2}}+\frac{4}{\lambda}\frac{1}{m_{m}}|f(T,c)|.\] Hence, the value of \(M_{G}\) can be seen as an interplay between two processes: one due to changes in \(\phi\) coming from curvature-driven motion, and one coming from chemical reactions. The latter could be zero, but \(M_{G}\) will always be larger than zero since \(\gamma,\lambda>0\). The equation (32) can also be used to determine \(G_{-}(\phi,T,c)\). Since \(G_{-}(\phi,T,c)\) simply consists of the part of \(G(\phi,T,c)\) where \(G(\phi,T,c)\) is decreasing with respect to \(\phi\), the sign of \(\partial_{1}G(\phi,T,c)\) in (32) gives us the increasing and decreasing parts. However, we have to make sure that a splitting of \(G\) into \(G_{-}\) and \(G_{+}\) and the corresponding implicit/explicit time stepping still result in a conservative scheme.

#### 5.3.3 Strategy for L-scheme iterations for the conservative Allen-Cahn equation

As pointed out in Section 5.2, the discrete version (22) of the conservative Allen-Cahn equation (8) is still conservative as long as the same time step \(\ell\) is used for the same \(\phi_{K}^{\ell}\) in both \(P^{\prime}(\phi_{K}^{\ell})\) and in \(\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi_{J}^{\ell})\), although \(\ell=n\) or \(\ell=n+1\) could be chosen independently for each control volume. Hence, when splitting the non-linearities into an increasing and a decreasing part, this has to be done control volume by control volume.
We outline this in practice: at time step \(n\), \(\phi_{K}^{n}\) is known for all \(K\in\mathcal{E}\), and we use the space-discrete version of (32), namely \[\partial_{1}G(\phi_{K};\{\phi\}_{J\in\mathcal{E}},T_{K},c_{K})=\frac{96\gamma}{\lambda^{2}}\Big(\phi_{K}(1-\phi_{K})-\frac{1}{|\Omega|}\sum_{J\in\mathcal{E}}|J|\phi_{J}(1-\phi_{J})\Big)-\frac{4}{\lambda}\frac{1}{m_{m}}f(T_{K},c_{K})(1-2\phi_{K}),\] to determine whether \(\phi_{K}\) causes \(G(\phi_{K};\{\phi\}_{J\in\mathcal{E}},T_{K},c_{K})\) to be increasing or decreasing. Hence, depending on the sign, \(\phi_{K}\) will either be solved explicitly or implicitly (that is, be part of \(G_{+}\) or \(G_{-}\)), and is assigned to time step \(n\) or \(n+1\). We write this as \(\phi_{K}^{\ell_{K}}\), where \(\ell_{K}\) is \(n\) or \(n+1\) in an element-wise manner. When all control volumes are checked and assigned to either \(n\) or \(n+1\) in the non-linearity, the discrete L-scheme iterations are: \[\frac{1}{\Delta t}(\phi_{K}^{n+1,j+1}-\phi_{K}^{n})=\frac{\gamma}{|K|}\sum_{L\in\mathcal{N}(K)}|\sigma_{K,L}|F_{K,L}^{\phi;n+1,j+1}+G_{-}(\phi_{K}^{n+1,j},T_{K},c_{K})+G_{+}(\phi_{K}^{n},T_{K},c_{K})-\mathcal{L}(\phi_{K}^{n+1,j+1}-\phi_{K}^{n+1,j}), \tag{33}\] where \(G_{-}(\phi^{n+1,j}_{K},T_{K},c_{K})\) is \(G(\phi^{n+1,j}_{K};\{\phi^{\ell}\}_{J\in\mathcal{E}},T_{K},c_{K})\) when \(\phi_{K}\) was assigned to time step \(n+1\), and \(G_{+}(\phi^{n}_{K},T_{K},c_{K})\) is \(G(\phi^{n}_{K};\{\phi^{\ell}\}_{J\in\mathcal{E}},T_{K},c_{K})\) when \(\phi_{K}\) was assigned to time step \(n\). Note that the status of each element is updated at every iteration to ensure that the correct splitting of the non-linearity is performed. The discrete version of (24) accounting for the splitting is \[G(\phi^{\ell_{K}}_{K};\{\phi^{\ell_{J}}\}_{J\in\mathcal{E}},T_{K},c_{K})=-\frac{\gamma}{\lambda^{2}}P^{\prime}(\phi^{\ell_{K}}_{K})-\frac{4}{\lambda}\phi^{\ell_{K}}_{K}(1-\phi^{\ell_{K}}_{K})\frac{f(T_{K},c_{K})}{m_{m}}+\frac{\gamma}{\lambda^{2}}\frac{1}{|\Omega|}\sum_{J\in\mathcal{E}}|J|P^{\prime}(\phi^{\ell_{J}}_{J}),\] where \(\ell_{K}\) (and \(\ell_{J}\)) is either \(n\) or \(n+1,j\). Note that also \(G_{+}(\phi^{n}_{K},T_{K},c_{K})\) is updated throughout the L-scheme iterations, since the sum relying on \(\{\phi^{\ell_{J}}\}_{J\in\mathcal{E}}\) is updated. Choosing the stabilization parameter \(\mathcal{L}\) according to \(M_{G}\), the scheme (33) converges to a solution that conserves the phase field. Note however that since we in practice stop the L-scheme iterations when the error is within a certain threshold, see (31), we expect an error in terms of conserving the phase field within that same threshold.
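To illustrate the strategy, the following Python sketch implements one time step of (33) on a one-dimensional uniform grid with zero Neumann boundary conditions. It is a minimal sketch only: the grid, the initial phase field and the frozen reaction rate are hypothetical choices (loosely inspired by, but not identical to, the setup in Section 6.1.1), and it is not the implementation used for the results below.

```python
import numpy as np

# Hypothetical 1D setup (placeholder values)
gamma, lam, m_m, f_rate = 0.1, 0.05, 1.0, -0.1
N, dt, tol_L = 100, 1.0e-3, 1.0e-10
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
L_stab = 24.0 * gamma / lam**2 + 4.0 / lam * abs(f_rate) / m_m   # bound from Section 5.3.2

def dP(phi):          # P'(phi) for P(phi) = 8*phi^2*(1-phi)^2
    return 16.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)

def G(phi):           # discrete non-linearity (24), reaction rate frozen
    return (-gamma / lam**2 * dP(phi)
            - 4.0 / lam * phi * (1.0 - phi) * f_rate / m_m
            + gamma / lam**2 * np.mean(dP(phi)))

def dG(phi):          # space-discrete version of (32), used for the element-wise sign test
    return (96.0 * gamma / lam**2 * (phi * (1.0 - phi) - np.mean(phi * (1.0 - phi)))
            - 4.0 / lam / m_m * f_rate * (1.0 - 2.0 * phi))

# Laplacian with zero Neumann boundaries (two-point fluxes on a uniform grid)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / dx**2
A[0, 0] = A[-1, -1] = -1.0 / dx**2

# Diffuse "mineral" region (phi ~ 0) around x = 0.5
phi_old = 0.5 * (1.0 - np.tanh((0.3 - np.abs(x - 0.5)) / lam))

# One time step of (33): implicit diffusion and stabilization, split non-linearity on the right-hand side
M = (1.0 / dt + L_stab) * np.eye(N) - gamma * A
phi_j = phi_old.copy()
for j in range(200):
    decreasing = dG(phi_j) < 0.0                     # element-wise assignment to G_- or G_+
    phi_mix = np.where(decreasing, phi_j, phi_old)   # previous iterate where implicit, old step where explicit
    rhs = phi_old / dt + L_stab * phi_j + G(phi_mix)
    phi_new = np.linalg.solve(M, rhs)
    if np.linalg.norm(phi_new - phi_j) * np.sqrt(dx) < tol_L:
        phi_j = phi_new
        break
    phi_j = phi_new

print("iterations:", j + 1, "change in integrated phase field:", np.sum(phi_j - phi_old) * dx)
```

With zero reaction rate, the printed change in the integrated phase field stays at the level of the iteration tolerance, mirroring the discrete conservation property discussed above.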
### 5.4 Coupling the phase-field equation to the other model equations

The conservative Allen-Cahn equation (8) is coupled to (9) describing flow and transport of solute and heat. The equations are tightly coupled: the reaction rate in (8) depends on the solute concentration and temperature, while the phase field appears in all equations in (9). We here consider an iterative scheme, where at every time step each equation (8) and (9) is solved separately, and we iterate between the equations until convergence before continuing to the next time step. The benefit of such a scheme is that each equation can be solved using separate strategies, according to the properties of each equation. For simplicity we here consider a case without flow and with \(m_{f}=2m_{m}\). Hence, solute and heat are transported by diffusion/conduction only, where the diffusion and conduction through the fluid and mineral depend on the value of the phase field. The solution strategy is as follows, and is similar to the strategies applied in [3, 9]. We discretize in time using a time-step size \(\Delta t\), and let \(\phi^{n+1},c^{n+1},T^{n+1}\) denote the phase field, concentration and temperature at the next time step. The time-discrete equations, when splitting the non-linearity in an explicit/implicit manner as needed for the phase field and solving the other equations fully implicitly, are \[\frac{1}{\Delta t}(\phi^{n+1}-\phi^{n})=\gamma\nabla^{2}\phi^{n+1}+G_{-}(\phi^{n+1},T^{n+1},c^{n+1})+G_{+}(\phi^{n},T^{n+1},c^{n+1}), \tag{34a}\] \[\frac{1}{\Delta t}(\phi^{n+1}c^{n+1}+(1-\phi^{n+1})m_{m}-\phi^{n}c^{n}-(1-\phi^{n})m_{m})=D\nabla(\phi^{n+1}\nabla c^{n+1}), \tag{34b}\] \[\frac{1}{\Delta t}((c_{p}\rho)_{\phi}^{n+1}T^{n+1}-(c_{p}\rho)_{\phi}^{n}T^{n})=\nabla\cdot(k_{\phi}^{n+1}\nabla T^{n+1}). \tag{34c}\] We will solve these equations in an iterative manner. We let \(i\) be the iteration index for the coupling iterations. The natural initialization of the coupling iterations is to take \(\phi^{n+1,0}=\phi^{n}\), \(c^{n+1,0}=c^{n}\) and \(T^{n+1,0}=T^{n}\), where \(\phi^{n},c^{n},T^{n}\) are the (already found) solutions from the previous time step. However, the iterations converge for any initial guess, which will be shown in the proof of Theorem 2. We first find the phase field \(\phi^{n+1,i+1}\) by solving \[\frac{1}{\Delta t}(\phi^{n+1,i+1}-\phi^{n})=\gamma\nabla^{2}\phi^{n+1,i+1}+G_{-}(\phi^{n+1,i+1},T^{n+1,i},c^{n+1,i})+G_{+}(\phi^{n},T^{n+1,i},c^{n+1,i})-\mathcal{L}_{\text{coup}}(\phi^{n+1,i+1}-\phi^{n+1,i}), \tag{35}\] where the last term has been added as a stabilization term. This term helps us to ensure that the iterations are indeed converging and is needed for the proof of Theorem 2. Note that the reaction rate uses the temperature and solute concentration from the previous coupling iteration, which are known. Hence, \(\phi^{n+1,i+1}\) is the only unknown. This equation is still non-linear and will be solved with L-scheme iterations. See Remark 3 on how the L-scheme iterations are modified due to the presence of the newly added stabilization term. After solving for \(\phi^{n+1,i+1}\), we find \(c^{n+1,i+1}\) and \(T^{n+1,i+1}\) by solving \[\frac{1}{\Delta t}(\phi^{n+1,i+1}c^{n+1,i+1}+(1-\phi^{n+1,i+1})m_{m}-\phi^{n}c^{n}-(1-\phi^{n})m_{m})=D\nabla(\phi^{n+1,i+1}\nabla c^{n+1,i+1}), \tag{36a}\] \[\frac{1}{\Delta t}((c_{p}\rho)_{\phi}^{n+1,i+1}T^{n+1,i+1}-(c_{p}\rho)_{\phi}^{n}T^{n})=\nabla\cdot(k_{\phi}^{n+1,i+1}\nabla T^{n+1,i+1}). \tag{36b}\] Recall that \((c_{p}\rho)_{\phi},k_{\phi}\) depend on \(\phi\) and are here evaluated using \(\phi^{n+1,i+1}\). Since \(\phi^{n+1,i+1}\) is known, these two equations can be solved for \(c^{n+1,i+1},T^{n+1,i+1}\). Both these equations are linear, hence they can be solved directly when applying a spatial discretization. Overall the scheme is as illustrated in Figure 1.

Figure 1: Scheme of coupling iterations
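A minimal sketch of this coupling loop in Python-like form is given below. The three solver callables are hypothetical placeholders standing in for the L-scheme iterations of (35) (cf. Remark 3) and for linear solvers of (36a)-(36b); the stopping criterion corresponds to (46) further below.

```python
import numpy as np

def coupling_step(phi_n, c_n, T_n, solve_phase_field, solve_solute, solve_temperature,
                  tol_coup=1e-8, max_iter=100):
    """One time step of the coupling iterations sketched in Figure 1.

    The solver callables are hypothetical placeholders:
      solve_phase_field(phi_n, phi_i, c_i, T_i) -> phi_{i+1}   (L-scheme solve of (35))
      solve_solute(phi_ip1, phi_n, c_n)         -> c_{i+1}     (linear solve of (36a))
      solve_temperature(phi_ip1, phi_n, T_n)    -> T_{i+1}     (linear solve of (36b))
    """
    phi_i, c_i, T_i = phi_n.copy(), c_n.copy(), T_n.copy()   # natural initialization
    for _ in range(max_iter):
        phi_ip1 = solve_phase_field(phi_n, phi_i, c_i, T_i)  # reaction rate uses c_i, T_i
        c_ip1 = solve_solute(phi_ip1, phi_n, c_n)            # uses the new phase field
        T_ip1 = solve_temperature(phi_ip1, phi_n, T_n)       # uses the new phase field
        if np.linalg.norm(phi_ip1 - phi_i) <= tol_coup:      # stopping criterion (46)
            return phi_ip1, c_ip1, T_ip1
        phi_i, c_i, T_i = phi_ip1, c_ip1, T_ip1
    return phi_i, c_i, T_i
```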
**Remark 3**. _Note that the L-scheme iterations are slightly altered as the term \(\mathcal{L}_{\text{coup}}(\phi^{n+1,i+1}-\phi^{n+1,i})\) now appears in (35). In the resulting L-scheme iterations, we will at every time step \(n+1\) and at every coupling iteration \(i+1\), solve for \(\phi^{n+1,i+1,j+1}\)_ \[\frac{1}{\Delta t}(\phi^{n+1,i+1,j+1}-\phi^{n})=\gamma\nabla^{2}\phi^{n+1,i+1,j+1}+G_{-}(\phi^{n+1,i+1,j},T^{n+1,i},c^{n+1,i})+G_{+}(\phi^{n},T^{n+1,i},c^{n+1,i})-\mathcal{L}(\phi^{n+1,i+1,j+1}-\phi^{n+1,i+1,j})-\mathcal{L}_{\text{coup}}(\phi^{n+1,i+1,j+1}-\phi^{n+1,i}), \tag{37}\] _where \(G_{-}\) and \(G_{+}\) are the same as before. Performing the same steps as in the proof of Theorem 1 we now obtain_ \[\|e^{j+1}\|^{2}+\frac{2\gamma\Delta t}{2+\Delta t\mathcal{L}+\Delta t\mathcal{L}_{\text{coup}}}\|\nabla e^{j+1}\|^{2}\leq\frac{\Delta t\mathcal{L}}{2+\Delta t\mathcal{L}+\Delta t\mathcal{L}_{\text{coup}}}\|e^{j}\|^{2}.\] _Hence, we still obtain a contraction for any \(\mathcal{L}_{\text{coup}}>0\) and assuming \(\mathcal{L}\geq M_{G}\). In fact, the factor \(L=\frac{\Delta t\mathcal{L}}{2+\Delta t\mathcal{L}+\Delta t\mathcal{L}_{\text{coup}}}\) is now smaller compared to (30) and we can expect slightly faster convergence with the presence of this new stabilization term. The L-scheme iterations are performed until_ \[\|\phi^{n+1,i+1,j+1}-\phi^{n+1,i+1,j}\|\leq\text{tol}_{\mathcal{L}},\] _and the newest \(\phi^{n+1,i+1,j+1}\) is taken as \(\phi^{n+1,i+1}\); that is, the phase field of the current coupling iteration._

To prove convergence of the coupling iterations, we rely on the following assumptions:

1. The phase field must be bounded away from 0 and 1.
2. The solute concentration and temperature must be bounded.
3. The gradient of the solute concentration and temperature must be bounded at every time step; that is, \(\|\nabla c^{n}\|_{L^{\infty}(\Omega)}\leq C_{c}\) and \(\|\nabla T^{n}\|_{L^{\infty}(\Omega)}\leq C_{T}\) for constants \(C_{c},C_{T}>0\) and for all time steps \(n\).

The original Allen-Cahn equation can be shown to fulfill a maximum principle, also in the time-discrete form including stabilization terms, see [3, Lemma 1]. Including the non-local term does not alter this, hence also the conservative Allen-Cahn equation (35) results in a phase field which is between 0 and 1. For a bounded domain and \(\lambda>0\), the phase field will be bounded away from 0 and 1. Hence, (A1) is fulfilled. Additionally, [3, Lemma 3] proves the maximum principle for the solute concentration, using the same equation for the solute as considered here. The proof relies on the constraint \[4\gamma\leq\frac{\lambda k_{0}}{m_{m}}, \tag{38}\] which we will obey as well when simulating the coupled model. The heat equation (36b) has the same structure as the solute equation (36a), and hence can be shown to be bounded under the same arguments as for the solute concentration. Hence, (A2) is fulfilled when the constraint is obeyed. Finally, (A3) is a reasonable assumption given the diffuse nature of the equations for solute concentration and temperature (36). However, rigorous proofs of the assumptions (A1)-(A3) are beyond the scope of this manuscript. **Remark 4**. _Note that to obtain guarantees that the phase field is bounded away from 0 and 1, one can also regularize the phase-field equation with some value \(\delta>0\), similar to what has been done in [3, 8].
In this case one rephrases the equation using \(\phi_{\delta}\), where \(0<\delta\leq\phi_{\delta}\leq 1-\delta<1\)._ **Theorem 2**. _Under the assumptions (A1)-(A3), the coupling iterations (35), (36) form a contraction and will hence converge for any initial guess when \(\mathcal{L}_{\text{coup}}\) fulfills (45) and the time-step size fulfills (44)._ Proof. We define the error terms \[e_{\phi}^{i+1}:=\phi^{n+1,i+1}-\phi^{n+1},\quad e_{c}^{i+1}:=c^{n+1,i+1}-c^{n+1},\quad e_{T}^{i+1}:=T^{n+1,i+1}-T^{n+1}.\] We consider first the Allen-Cahn equation. By subtracting (35) from the corresponding time-discrete form without decoupling, equation (34a), we obtain \[\frac{1}{\Delta t}(\phi^{n+1,i+1}-\phi^{n+1})=\gamma\nabla^{2}(\phi^{n+1,i+1}-\phi^{n+1})+G_{-}(\phi^{n+1,i+1},T^{n+1,i},c^{n+1,i})+G_{+}(\phi^{n},T^{n+1,i},c^{n+1,i})-G_{-}(\phi^{n+1},T^{n+1},c^{n+1})-G_{+}(\phi^{n},T^{n+1},c^{n+1})-\mathcal{L}_{\text{coup}}(\phi^{n+1,i+1}-\phi^{n+1,i}).\] We multiply the above equation with a \(\psi\in H^{1}_{0}(\Omega)\) and integrate over the domain \(\Omega\). We then obtain \[\frac{1}{\Delta t}(e_{\phi}^{i+1},\psi)=-\gamma(\nabla e_{\phi}^{i+1},\nabla\psi)+(G_{-}(\phi^{n+1,i+1},T^{n+1,i},c^{n+1,i}),\psi)+(G_{+}(\phi^{n},T^{n+1,i},c^{n+1,i}),\psi)-(G_{-}(\phi^{n+1},T^{n+1},c^{n+1}),\psi)-(G_{+}(\phi^{n},T^{n+1},c^{n+1}),\psi)-\mathcal{L}_{\text{coup}}(e_{\phi}^{i+1}-e_{\phi}^{i},\psi), \tag{39}\] where the notation \((\cdot,\cdot)\) refers to the usual \(L^{2}\)-inner product. We then let \(\psi=e_{\phi}^{i+1}\), which gives us \[\|e_{\phi}^{i+1}\|^{2}+\Delta t\gamma\|\nabla e_{\phi}^{i+1}\|^{2}+\Delta t\mathcal{L}_{\text{coup}}\|e_{\phi}^{i+1}\|^{2}=\Delta t\mathcal{L}_{\text{coup}}(e_{\phi}^{i},e_{\phi}^{i+1})+\Delta t(G_{-}(\phi^{n+1,i+1},T^{n+1,i},c^{n+1,i}),e_{\phi}^{i+1})+\Delta t(G_{+}(\phi^{n},T^{n+1,i},c^{n+1,i}),e_{\phi}^{i+1})-\Delta t(G_{-}(\phi^{n+1},T^{n+1},c^{n+1}),e_{\phi}^{i+1})-\Delta t(G_{+}(\phi^{n},T^{n+1},c^{n+1}),e_{\phi}^{i+1}).\] We let \(\mathcal{M}_{G}=\max(|\partial_{1}G|,|\partial_{2}G|,|\partial_{3}G|)\), where we have maximized the partial derivatives of \(G\) over the bounded domain where \(\phi,T,c\) are defined (Assumptions (A1) and (A2)).
Then, by the mean-value theorem, we get \[(1+\Delta t\mathcal{L}_{\rm coup})\|e_{\phi}^{i+1}\|^{2}+\Delta t\gamma\|\nabla e_{\phi}^{i+1}\|^{2}\leq\Delta t\mathcal{L}_{\rm coup}(e_{\phi}^{i},e_{\phi}^{i+1})+\Delta t\mathcal{M}_{G}\|e_{\phi}^{i+1}\|^{2}+2\Delta t\mathcal{M}_{G}(e_{T}^{i},e_{\phi}^{i+1})+2\Delta t\mathcal{M}_{G}(e_{c}^{i},e_{\phi}^{i+1}).\] We apply Young's inequality to the first and the last two terms on the right-hand side, which, for \(\delta_{1},\delta_{2},\delta_{3}>0\), gives \[(1+\Delta t\mathcal{L}_{\rm coup}-\Delta t\mathcal{M}_{G})\|e_{\phi}^{i+1}\|^{2}+\Delta t\gamma\|\nabla e_{\phi}^{i+1}\|^{2}\leq\Delta t\mathcal{L}_{\rm coup}\frac{\delta_{1}}{2}\|e_{\phi}^{i}\|^{2}+\Delta t\mathcal{L}_{\rm coup}\frac{1}{2\delta_{1}}\|e_{\phi}^{i+1}\|^{2}+\Delta t\mathcal{M}_{G}\delta_{2}\|e_{T}^{i}\|^{2}+\Delta t\mathcal{M}_{G}\frac{1}{\delta_{2}}\|e_{\phi}^{i+1}\|^{2}+\Delta t\mathcal{M}_{G}\delta_{3}\|e_{c}^{i}\|^{2}+\Delta t\mathcal{M}_{G}\frac{1}{\delta_{3}}\|e_{\phi}^{i+1}\|^{2}.\] We collect all terms with \(\|e_{\phi}^{i+1}\|\) on the left-hand side and hence have \[(1+\Delta t\mathcal{L}_{\rm coup}-\Delta t\mathcal{M}_{G}-\Delta t\mathcal{L}_{\rm coup}\frac{1}{2\delta_{1}}-\Delta t\mathcal{M}_{G}\frac{1}{\delta_{2}}-\Delta t\mathcal{M}_{G}\frac{1}{\delta_{3}})\|e_{\phi}^{i+1}\|^{2}+\Delta t\gamma\|\nabla e_{\phi}^{i+1}\|^{2}\leq\Delta t\mathcal{L}_{\rm coup}\frac{\delta_{1}}{2}\|e_{\phi}^{i}\|^{2}+\Delta t\mathcal{M}_{G}\delta_{2}\|e_{T}^{i}\|^{2}+\Delta t\mathcal{M}_{G}\delta_{3}\|e_{c}^{i}\|^{2}. \tag{40}\] To proceed, we need bounds for \(\|e_{T}^{i}\|\) and \(\|e_{c}^{i}\|\), hence we turn to the temperature and solute concentration equations. For the solute concentration, we start by subtracting (36a) from (34b), multiply with a test function from \(H_{0}^{1}(\Omega)\) and integrate over the domain. Letting \(e_{c}^{i+1}\) be the test function, we have \[\frac{1}{\Delta t}(\phi^{n+1,i+1}e_{c}^{i+1},e_{c}^{i+1})+D(\phi^{n+1,i+1}\nabla c^{n+1,i+1},\nabla e_{c}^{i+1})=D(\phi^{n+1}\nabla c^{n+1},\nabla e_{c}^{i+1})+\frac{1}{\Delta t}(e_{\phi}^{i+1}(m_{m}-c^{n+1}),e_{c}^{i+1}).\] We use (A1), that is, \(\phi_{m}\leq\phi\) for some \(\phi_{m}>0\). Then the above equation can be rewritten to \[\frac{\phi_{m}}{\Delta t}\|e_{c}^{i+1}\|^{2}+D\phi_{m}\|\nabla e_{c}^{i+1}\|^{2}\leq D((\phi^{n+1}-\phi^{n+1,i+1})\nabla c^{n+1},\nabla e_{c}^{i+1})+\frac{1}{\Delta t}(e_{\phi}^{i+1}(m_{m}-c^{n+1}),e_{c}^{i+1}).\] We apply Young's inequality to the two terms on the right-hand side, which, for \(\delta_{4},\delta_{5}>0\), gives \[\frac{\phi_{m}}{\Delta t}\|e_{c}^{i+1}\|^{2}+D\phi_{m}\|\nabla e_{c}^{i+1}\|^{2}\leq D\frac{\delta_{4}}{2}\|e_{\phi}^{i+1}\nabla c^{n+1}\|^{2}+D\frac{1}{2\delta_{4}}\|\nabla e_{c}^{i+1}\|^{2}+\frac{1}{\Delta t}\frac{\delta_{5}}{2}\|(m_{m}-c^{n+1})e_{\phi}^{i+1}\|^{2}+\frac{1}{\Delta t}\frac{1}{2\delta_{5}}\|e_{c}^{i+1}\|^{2}.\] We then apply (A2) and (A3), which results in \[(\frac{\phi_{m}}{\Delta t}-\frac{1}{\Delta t}\frac{1}{2\delta_{5}})\|e_{c}^{i+1}\|^{2}+(D\phi_{m}-\frac{D}{2\delta_{4}})\|\nabla e_{c}^{i+1}\|^{2}\leq(D\frac{\delta_{4}}{2}C_{c}^{2}+\frac{1}{\Delta t}\frac{\delta_{5}}{2}m_{m}^{2})\|e_{\phi}^{i+1}\|^{2}.\] To ensure positive terms on the left-hand side, we let \(\delta_{4}=\delta_{5}=\frac{1}{\phi_{m}}\).
We then arrive at \[\|e_{c}^{i+1}\|^{2}\leq\frac{2\Delta t}{\phi_{m}}(\frac{DC_{c}^{2}}{2\phi_{m}} +\frac{m_{m}^{2}}{2\phi_{m}\Delta t})\|e_{\phi}^{i+1}\|^{2}.\] Note that this bound is also valid for index \(i\). For simplicity, we rephrase the result with the more compact notation \[\|e_{c}^{i}\|^{2}\leq E_{c}\|e_{\phi}^{i}\|^{2}, \tag{41}\] for \(E_{c}=\frac{2\Delta t}{\phi_{m}}(\frac{DC_{c}^{2}}{2\phi_{m}}+\frac{m_{m}^{2}}{2 \phi_{m}\Delta t})\). We perform similar steps for the temperature equation. We start by subtracting (36b) from (34c), multiply with a test function from \(H_{0}^{1}(\Omega)\) and integrate over the domain. We let \(e_{T}^{i+1}\) be the test function and hence obtain \[\frac{c_{p,f}\rho_{f}}{\Delta t}(\phi^{n+1,i+1}e_{T}^{i+1},e_{T}^ {i+1})+\frac{c_{p,m}\rho_{m}}{\Delta t}((1-\phi^{n+1,i+1})e_{T}^{i+1},e_{T}^{i +1})\] \[\qquad+((\phi^{n+1,i+1}k_{f}+(1-\phi^{n+1,i+1})k_{m})\nabla T^{n+1,i+1},\nabla e_{T}^{i+1})\] \[= ((\phi^{n+1}k_{f}+(1-\phi^{n+1})k_{m})\nabla T^{n+1},\nabla e_{T }^{i+1})\] \[\qquad+\frac{1}{\Delta t}(e_{\phi}^{i+1}(c_{p,m}\rho_{m}-c_{p,f} \rho_{f})T^{n+1},e_{T}^{i+1})\] We use (A1); that \(\phi_{m}\leq\phi\leq\phi_{M}\) for \(\phi_{M}<1\). Then the above equation can be rewritten to \[\frac{c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi_{M})}{\Delta t }\|e_{T}^{i+1}\|^{2}+(k_{f}\phi_{m}+k_{m}(1-\phi_{M}))\|\nabla e_{T}^{i+1}\|^{2}\] \[\leq((k_{f}(\phi^{n+1}-\phi^{n+1,i+1})+k_{m}(\phi^{n+1,i+1}-\phi^ {n+1}))\nabla T^{n+1},\nabla e_{T}^{i+1})\] \[\quad+\frac{1}{\Delta t}(e_{\phi}^{i+1}(c_{p,m}\rho_{m}-c_{p,f} \rho_{f})T^{n+1},e_{T}^{i+1}).\] We apply Young's inequality to the two terms on the right-hand side, which, for \(\delta_{6},\delta_{7}>0\), gives \[\frac{c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi_{M})}{\Delta t }\|e_{T}^{i+1}\|^{2}+(k_{f}\phi_{m}+k_{m}(1-\phi_{M}))\|\nabla e_{T}^{i+1}\|^{2}\] \[\leq\frac{\delta_{6}}{2}\|(-k_{f}e_{\phi}^{i+1}+k_{m}e_{\phi}^{i+ 1})\nabla T^{n+1}\|^{2}+\frac{1}{2\delta_{6}}\|\nabla e_{T}^{i+1}\|^{2}\] \[+\frac{1}{\Delta t}\frac{\delta_{7}}{2}\|(c_{p,m}\rho_{m}-c_{p,f} \rho_{f})T^{n+1}e_{\phi}^{i+1}\|^{2}+\frac{1}{\Delta t}\frac{1}{2\delta_{7}}\| e_{T}^{i+1}\|^{2}.\] We then apply (A2) and (A3), with \(T_{M}\) being the upper bound of the temperature, which results in \[\frac{1}{\Delta t}(c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi _{M})-\frac{1}{2\delta_{7}})\|e_{T}^{i+1}\|^{2}\] \[\qquad+(k_{f}\phi_{m}+(1-\phi_{M})k_{m}-\frac{1}{2\delta_{6}})\| \nabla e_{T}^{i+1}\|^{2}\] \[\qquad\leq(\frac{\delta_{6}}{2}|k_{m}-k_{f}|^{2}C_{T}^{2}+\frac{1 }{\Delta t}\frac{\delta_{7}}{2}|c_{p,m}\rho_{m}-c_{p,f}\rho_{f}|^{2}T_{M}^{2}) \|e_{\phi}^{i+1}\|^{2}\] To ensure positive terms on the left-hand side, we let \(\delta_{6}=\frac{1}{k_{f}\phi_{m}+k_{m}(1-\phi_{M})}\) and \(\delta_{7}=\frac{1}{c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi_{M})}\). We then arrive at \[\frac{c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi_{M})}{2 \Delta t}\|e_{T}^{i+1}\|^{2}\] \[\leq(\frac{\delta_{6}}{2}|k_{m}-k_{f}|^{2}C_{T}^{2}+\frac{1}{ \Delta t}\frac{\delta_{7}}{2}|c_{p,m}\rho_{m}-c_{p,f}\rho_{f}|^{2}T_{M}^{2}) \|e_{\phi}^{i+1}\|^{2},\] where we, to keep the notation more compact, did not insert the chosen values for \(\delta_{6},\delta_{7}\) on the right-hand side. This bound is also valid for index \(i\). 
For simplicity, we rephrase the result with the more compact notation \[\|e_{T}^{i}\|^{2}\leq E_{T}\|e_{\phi}^{i}\|^{2}, \tag{42}\] for \(E_{T}=\frac{2\Delta t}{c_{p,f}\rho_{f}\phi_{m}+c_{p,m}\rho_{m}(1-\phi_{M})}( \frac{\delta_{6}}{2}|k_{m}-k_{f}|^{2}C_{T}^{2}+\frac{1}{\Delta t}\frac{\delta_ {7}}{2}|c_{p,m}\rho_{m}-c_{p,f}\rho_{f}|^{2}T_{M}^{2})\). We then return to (40) and insert here (41) and (42). At the same time we let \(\delta_{1}=1\) and \(\delta_{2}=\delta_{3}=\frac{1}{2}\). Hence, \[(1+ \frac{\Delta t}{2}\mathcal{L}_{\rm coup}-5\Delta t\mathcal{M}_{G}) \|e_{\phi}^{i+1}\|^{2}\] \[\leq(\frac{\Delta t}{2}\mathcal{L}_{\rm coup}+\frac{\Delta t}{2} \mathcal{M}_{G}E_{T}+\frac{\Delta t}{2}\mathcal{M}_{G}E_{c})\|e_{\phi}^{i}\|^{2}. \tag{43}\] That means the coupling iterations form a contraction when the left-hand side is positive and when \[1+\frac{\Delta t}{2}\mathcal{L}_{\rm coup}-5\Delta t\mathcal{M}_{G}>\frac{\Delta t }{2}\mathcal{L}_{\rm coup}+\frac{\Delta t}{2}\mathcal{M}_{G}E_{T}+\frac{\Delta t }{2}\mathcal{M}_{G}E_{c}.\] The latter translates to a restriction on the time-step size, and results in \[\Delta t<\frac{1}{\mathcal{M}_{G}(5+E_{c}/2+E_{T}/2)}. \tag{44}\] To ensure that the left-hand side of (43) is positive, we need \(\mathcal{L}_{\rm coup}>10\mathcal{M}_{G}-\frac{2}{\Delta t}\), as well as \(\mathcal{L}_{\rm coup}>0\). Hence, \[\mathcal{L}_{\rm coup}>\max\{0,10\mathcal{M}_{G}-\frac{2}{\Delta t}\}. \tag{45}\] Note that the time-step size restriction (44) does not depend on the discretization mesh. However, it does depend on parameters that cannot easily be estimated. One possibility would be to use a coarse mesh and by trial-and-error find a time-step size that ensures convergence, which is then used on a finer mesh. However, the phase-field equation (35) needs a certain mesh size to resolve the diffuse interface. A coarser mesh can be applied if the diffuse-interface width \(\lambda\) is chosen larger, but changing the value of \(\lambda\) can influence the time-step size restriction. Hence, one would for these model equations need to find the suitable time-step size on the mesh that is to be applied. In practice, the coupling iterations are performed until \[\|\phi^{n+1,i+1}-\phi^{n+1,i}\|\leq\mathrm{tol}_{\rm coup}, \tag{46}\] where \(\mathrm{tol}_{\rm coup}\) is a prescribed tolerance. The newest \(\phi^{n+1,i+1},c^{n+1,i+1},T^{n+1,i+1}\) is taken as \(\phi^{n+1},c^{n+1},T^{n+1}\) and we can proceed to the next time step. As seen in the proof of Theorem 2, the errors of the solute concentration and temperature are expected to be bounded when the error in the phase field is bounded. Hence, it is sufficient to consider a tolerance based on the phase field only. **Remark 5**.: _Note that the value of \(E_{T}\) in equation (42) is zero if \(k_{m}=k_{f}\) and \(c_{p,f}\rho_{f}=c_{p,m}\rho_{m}\). This would correspond to equal heat conduction and heat storage properties in the fluid and mineral, which implies that the temperature equation is in practice independent of the phase field. A similar result can not be obtained for the solute._ ## 6 Numerical results of the phase-field models We here perform numerical experiments to assess the numerical behavior of the conservative Allen-Cahn equation, in particular when it is solved with the L-scheme, and to test the full, coupled model to show its numerical behavior and potential. We investigate behavior of the Allen-Cahn equation only in Section 6.1, before addressing the behavior of the coupled model in Section 6.2. 
### 6.1 Conservative and non-conservative Allen-Cahn

We here consider the simplest test case possible, solving the original phase-field equation (7) or the conservative version (8). The reaction rate will initially be set to zero, so only curvature-driven motion can change the shape (and, potentially, size) of the mineral. We consider two test cases, both in a two-dimensional domain \(\Omega=(0,1)^{2}\), but with different initial conditions:
* (a) We initialize the phase field as a circular mineral with an approximated diffuse interface transition at radius \(0.3\).
* (b) We initialize the phase field with ones everywhere except in a centered square with side lengths \(0.5\), where the phase field is set to zero.
The latter case (b) is more challenging as the initial condition violates the phase-field equation and is more prone to numerical error. We solve the two test cases in three different ways:
* (i) Solving (7) fully implicitly, using Newton's method to solve the non-linear system of equations.
* (ii) Solving (8) fully implicitly, using Newton's method to solve the non-linear system of equations.
* (iii) Solving (8) semi-implicitly, using the L-scheme to solve the non-linear system of equations as described in Section 5.3.
All approaches are solved using the Finite Volume discretization described in Section 5.1, on a uniform, rectangular grid. The test cases apply homogeneous Neumann conditions on the boundaries. For test case (a), only minor adjustments of the diffuse interface of the phase field are expected for the conservative approaches (ii) and (iii), as well as numerical error from the solvers. In test case (b), the curvature-driven motion of (ii) and (iii) will transform the initial square into a circular mineral. For the non-conservative phase field (i), we expect the mineral to shrink in both cases (a) and (b). All cases are solved on a \(100\times 100\) grid, with \(\lambda=0.05\) and \(\gamma=1\). Note that a smaller value of \(\gamma\) causes the curvature-driven motion to be slower, but it is here chosen large to highlight the issues with (7). All equations are time stepped until \(t=1\). The two fully-implicit cases (i) and (ii) are solved using \(\Delta t=10^{-4}\), which was needed to ensure convergence of the Newton iterations, and this time-step size is also applied for the L-scheme. With the current setup, we have \(M_{G}=9.68\times 10^{4}\) and we choose \(\mathcal{L}\) equal to this value. Newton iterations and L-scheme iterations are solved until a threshold of \(10^{-13}\) is met in the \(L^{2}\)-norm between two subsequent iterations (that is, \(\mathrm{tol}_{\mathcal{L}}=10^{-13}\) in (31)). Since the size of the discrete system is not too large, the linear system is solved directly. Table 1 shows the change in volume of the mineral from the initial condition until \(t=1\) for the two test cases and all three approaches. Since the phase field approaches the value \(0\) in the mineral and \(1\) outside, the volume is calculated through \[\mathrm{volume}=\int_{\Omega}(1-\phi)dV.\] Since all mineral disappeared for approach (i), the volume change corresponds to the loss of the initial amount of mineral. Note that there is a volume change due to the choice of the threshold tolerance. The two conservative approaches (ii) and (iii) show only a change in volume that can be connected to the accuracy of the non-linear iterations.
For approaches (ii) and (iii), at every time step a small volume change of about \(1.5\times 10^{-15}\) is seen, which is below the tolerance of the Newton and L-scheme iterations. Table 2 shows the average number of Newton or L-scheme iterations needed per time step.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Case/Approach & (i) & (ii) & (iii) \\ \hline (a) & -0.2850 & \(2.0134\times 10^{-11}\) & \(1.5294\times 10^{-11}\) \\ (b) & -0.25 & \(-6.1989\times 10^{-12}\) & \(1.4864\times 10^{-11}\) \\ \hline \end{tabular} \end{table} Table 1: Volume change of mineral from first to last time step

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Case/Approach & (i) & (ii) & (iii) \\ \hline (a) & 1.03 & 1.0038 & 5.18 \\ (b) & 1.03 & 1.0160 & 5.77 \\ \hline \end{tabular} \end{table} Table 2: Average number of iterations per time step over the entire simulation

Note, however, that in general a larger number of iterations is needed at the beginning of the simulation, after which it stabilizes at a lower value. At least one Newton or L-scheme iteration is forced to be performed at every time step. For approach (i), the entire mineral dissolves during the first 300 time steps, and the solution hence remains constant (fluid only) afterwards. When using the conservative scheme with Newton iterations, that is, approach (ii), 2-3 iterations are needed per time step in the beginning, which settles after 150 time steps. The Jacobian in the Newton iterations is, however, full due to the non-local term, which makes each Newton iteration expensive to solve. For approach (iii), several L-scheme iterations are needed in the first phase while the diffuse interface (and, for the square, also the shape) is adjusting. In the beginning 30-40 iterations are needed, before gradually decreasing to 3 L-scheme iterations after 1000 time steps, where it remains stable. However, the iterations are cheap as only a sparse linear system of equations needs to be solved. When run on the same laptop and with the same software, approach (iii) was roughly a factor of 2 faster than approach (ii).

Although the L-scheme iterations converge, they converge slowly. The size of \(\mathcal{L}\) is responsible for this. Although we chose it according to the theoretical bounds (29), earlier studies of other equations have shown (see e.g. [20, 27, 28]) that much lower values of \(\mathcal{L}\) are possible and will generally give faster convergence. Larger values of \(\Delta t\) could also be used, but this is for our equation expected to result in slower convergence of the L-scheme. Also note that lower values of \(\mathcal{L}\) would usually lead to a time-step size restriction to obtain convergence. Hence, in the following two subsections we numerically investigate possible values of \(\mathcal{L}\) and \(\Delta t\).
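For readers who want to reproduce the comparison, the following is a generic sketch of one semi-implicit time step solved with L-scheme iterations. The spatial discretization, the non-local conservative term and the reaction term are hidden behind the hypothetical arguments A_lin and nonlinear_rhs; the concrete splitting used in this paper is the one of Section 5.3 and may differ in detail.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def l_scheme_step(phi_n, dt, L, A_lin, nonlinear_rhs, tol=1e-13, max_iter=200):
    """One implicit time step of (phi - phi_n)/dt + A_lin @ phi + N(phi) = 0,
    solved with L-scheme iterations: the non-linear part N (including the
    non-local term) is evaluated at the previous iterate and stabilised by
    L * (phi_new - phi_prev). A_lin is a sparse matrix; nonlinear_rhs(phi)
    returns N(phi) as a vector (both are hypothetical placeholders)."""
    n = phi_n.size
    identity = sp.identity(n, format="csr")
    system = (1.0 / dt + L) * identity + A_lin   # constant within the time step
    solve = spla.factorized(system.tocsc())      # factorise the sparse matrix once
    phi = phi_n.copy()
    for it in range(1, max_iter + 1):            # at least one iteration is performed
        rhs = phi_n / dt + L * phi - nonlinear_rhs(phi)
        phi_new = solve(rhs)
        if np.linalg.norm(phi_new - phi) <= tol:
            return phi_new, it
        phi = phi_new
    return phi, max_iter
```

Because the non-linear and non-local contributions are only evaluated at the previous iterate, each iteration requires a single solve with the pre-factorized sparse matrix, which is why the individual L-scheme iterations are much cheaper than Newton iterations with a dense Jacobian.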
In this setup, the number of L-scheme iterations per time step is almost constant throughout the simulation. We see how significantly lower values than the theoretical bound of \(\mathcal{L}\) still provide convergence, and that the amount of iterations needed decreases when \(\mathcal{L}\) decreases, before eventually increasing. Hence, there appears to be an optimal \(\mathcal{L}\) for when the lowest amount of needed L-scheme iterations is obtained. For other equations, it has been shown that a lower value of \(\mathcal{L}\) provides faster convergence (i.e., less iterations needed), and that there is an optimal \(\mathcal{L}\) where the least amount of iterations are needed [27], and \(\mathcal{L}\) can be chosen differently from time step to time step to optimize convergence [28]. There is also a lower limit for \(\mathcal{L}\) which can still ensure convergence if the time-step size is also large [27]. #### 6.1.2 Size of \(\Delta t\) We now use either the theoretical \(\mathcal{L}=M_{G}\) or the \(\mathcal{L}=M_{G}/4\) as this gave the lowest value of iterations, and vary the time-step size \(\Delta t\). The setup is otherwise the same as in Section 6.1.1. Table 4 and 5 show the average number of iterations as a function of the time-step size for these two choices of \(\mathcal{L}\). As expected for our equation, the number of iterations needed at every time step increases with the time-step size \(\Delta t\). One case is deemed to not converge as the threshold value was not reached within 200 L-scheme iterations per time step. Note that convergence potentially could be achieved by allowing more iterations. Note that for \(\mathcal{L}=M_{G}\), the number of L-scheme iterations increases but is less than doubled as the time-step size is doubled. Hence, in this case there is a computational gain by increasing the time-step size as the total number of L-scheme iterations throughout the simulation decreases. This is however not necessarily the case for \(\mathcal{L}=M_{G}/4\) as the average number of L-scheme iterations is more than doubled as the time-step size increases from \(2\times 10^{-3}\) to \(4\times 10^{-3}\). ### Behavior of coupled model We here consider the coupled model formulated in Section 5.4. As example case for the simulations, we consider a channel \(\Omega=(0,1)^{2}\), where a mineral layer of initial thickness 0.25 fills the lower part of the domain. Temperature is initially 1 everywhere and the fluid has a solute concentration of 0.5, which corresponds to chemical \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Value of \(\mathcal{L}\) & \(M_{G}\) & \(M_{G}/2\) & \(M_{G}/4\) & \(M_{G}/8\) \\ \hline Average number of iterations & 32 & 21 & 17 & 24 \\ \hline \end{tabular} \end{table} Table 3: Average number of L-scheme iterations per time step for different values of \(\mathcal{L}\) equilibrium of the chosen reaction rate. A Dirichlet boundary condition of \(T=0.9\) and \(c=0.25\) is applied to the left boundary, while all other boundaries have zero Neumann boundary conditions. The phase field has zero Neumann boundary conditions on all boundaries. The lowering of the solute concentration (and temperature) at the left boundary is expected to trigger dissolution of the mineral and hence an increase in regions where the phase field approaches one. The chosen reaction rate is a simpler version of (5) with constant solubility product. 
Hence, \[f(T,c)=f_{p}(T,c)-f_{d}(T,c)=k_{0}e^{-\frac{E}{RT}}\Big{(}\frac{c^{2}}{c_{\rm eq }^{2}}-1\Big{)}.\] We here consider for simplicity a constant equilibrium concentration \(c_{\rm eq}=0.5\), but one could also consider a temperature-dependent solubility [32]. All parameters appearing in the model equations are as specified in Table 6. We have not attempted to choose physically correct parameters, but have generally chosen values around 1 to mimic a non-dimensional setup. As we need to fulfill (38), we have adjusted \(\gamma\). For the heat conductivities we choose a larger heat conductivity in the mineral to be able to observe some heterogeneous influence. The simulation setup is sketched in Figure 2.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Value of \(\Delta t\) & \(10^{-3}\) & \(2\times 10^{-3}\) & \(4\times 10^{-3}\) & \(8\times 10^{-3}\) \\ \hline Average number of iterations & 32 & 53 & 95 & 175 \\ \hline \end{tabular} \end{table} Table 4: Average number of L-scheme iterations per time step for different values of \(\Delta t\), when \(\mathcal{L}=M_{G}\)

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Value of \(\Delta t\) & \(10^{-3}\) & \(2\times 10^{-3}\) & \(4\times 10^{-3}\) & \(8\times 10^{-3}\) \\ \hline Average number of iterations & 17 & 30 & 83 & - \\ \hline \end{tabular} \end{table} Table 5: Average number of L-scheme iterations per time step for different values of \(\Delta t\), when \(\mathcal{L}=M_{G}/4\)

\begin{table} \begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline \(\lambda\) & 0.05 \\ \(\gamma\) & 0.01 \\ \hline \(E/R\) & 1 \\ \(k_{0}\) & 1 \\ \(c_{\rm eq}\) & 0.5 \\ \hline \(m_{m}\) & 1 \\ \(D\) & 1 \\ \hline \(\rho_{f}\) & 1 \\ \(c_{p,f}\) & 1 \\ \(\rho_{m}\) & 1 \\ \(c_{p,m}\) & 1 \\ \(k_{f}\) & 1 \\ \(k_{m}\) & 2 \\ \hline \end{tabular} \end{table} Table 6: Parameter choices

#### 6.2.1 Time-step size, tolerances and stabilization parameters

We here apply a slightly larger tolerance for the L-scheme iterations than in Section 6.1, but keep it lower than the tolerance for the coupling iterations to ensure that errors from the non-linear solver scheme do not influence the coupling-iteration error. We therefore use \(\text{tol}_{\mathcal{L}}=10^{-8}\) and \(\text{tol}_{\text{coup}}=10^{-6}\) throughout all simulations presented in the following. Lower tolerances generally lead to more iterations. With our parameter choices, we have \(M_{G}=118\), and we choose \(\mathcal{L}=M_{G}\) in all simulations. The coupling parameter \(\mathcal{L}_{\text{coup}}\) is chosen small, as this provides faster convergence for the coupling iterations. This result coincides with similar investigations (e.g. [3, 9]). We here present results with \(\mathcal{L}_{\text{coup}}=10^{-4}\). Although we may not fulfill (45), numerical testing shows that the coupling iterations converge for any \(\mathcal{L}_{\rm coup}>0\), and even with \(\mathcal{L}_{\rm coup}=0\). By testing various time-step sizes, we found time-step sizes even up to \(10^{-2}\) to provide converging coupling iterations. However, such large time-step sizes also resulted in very slowly converging L-scheme iterations, with generally more than 200 L-scheme iterations per coupling iteration needed. Hence, in the following, only results from shorter time-step sizes are presented. Although we cannot guarantee that (44) (and (45)) are fulfilled, the coupling iterations were always found to converge fast for the investigated parameters.
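For reference, the reaction rate above, with the parameter values of Table 6, can be coded as the following small helper (a minimal sketch; names and defaults are illustrative):

```python
import numpy as np

def reaction_rate(T, c, k0=1.0, e_over_r=1.0, c_eq=0.5):
    """Simplified precipitation/dissolution rate of Section 6.2:
    f(T, c) = k0 * exp(-E/(R*T)) * (c**2 / c_eq**2 - 1).
    Negative values (c < c_eq) drive dissolution of the mineral."""
    return k0 * np.exp(-e_over_r / T) * (c ** 2 / c_eq ** 2 - 1.0)

# At the initial state (T = 1, c = 0.5) the rate is zero (chemical equilibrium),
# while at the boundary values (T = 0.9, c = 0.25) it is negative, triggering dissolution.
```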
#### 6.2.2 Behavior of coupling iterations For \(\Delta t=10^{-3}\), \(\mathcal{L}_{\rm coup}=10^{-4}\) and \(\mathcal{L}=M_{G}=118\), we need an average of 2.21 coupling iterations per time step. More iterations are needed initially as the changes in solute concentration and temperature, and hence in phase field, are larger due to the boundary condition on the left boundary. For the L-scheme iterations, we need 6.00, 3.22 and 2.08 iterations in the first, second and third coupling iterations, respectively. See Figure 3 for the evolution of iterations over time. Fewer L-scheme iterations are needed in later coupling iterations as we use the solution found in the previous coupling iteration when starting the L-scheme in later coupling iterations, giving a starting point that is already closer to the converged solution. Compared to Section 6.1, fewer L-scheme iterations are needed for same time-step size as the tolerance for the L-scheme iterations is now larger. When increasing the time-step size, the number of coupling iterations increases slightly, see Table 7. However, the number of L-scheme iterations that are needed per coupling iterations increases significantly. For \(\Delta t=8\times 10^{-3}\), more than 200 L-scheme iterations per coupling iteration are needed. Hence, we did not proceed with larger time-step sizes. Figure 2: Initial phase field \(\phi^{0}\) (background color), with initial temperature and solute concentration in text. Left boundary (dashed line) has Dirichlet boundary conditions for temperature and solute, while all other boundaries (solid line) have homogeneous Neumann boundary conditions. The phase field has homogeneous Neumann boundary conditions on the left boundary. #### 6.2.3 Simulation results of coupled model The simulation results for phase field, temperature and solute concentration after \(0.5\) and \(1\) time unit are shown in Figure 4. The results are for the solution found using \(\Delta t=10^{-3}\), but the solutions are qualitatively the same for the various time-step sizes that have been investigated. As expected, the mineral layer dissolves, which corresponds to a smaller region where the phase field attains values close to \(0\). As we used different heat conductivities in the fluid and mineral, vertical variability in the temperature profile can be seen in the transition between fluid and mineral. At \(t=1\), the temperature is close to the boundary condition \(T=0.9\) in the entire domain, showing that the heat conduction has propagated throughout the domain. For the solute we visualize the product \(\phi c\) to highlight the solute concentration in the fluid. The solute concentration shows a gradient near the mineral interface as more solute is released into the fluid by the mineral dissolution. Hence, the overall solute concentration in the domain approaches the boundary value \(c=0.25\) rather slowly due to ions being released by the chemical reaction. We also calculate how the changes in the total amount of phase field, which corresponds to the amount of fluid, correspond to the reaction rate at every time step. That is, we calculate \[\phi_{\text{int}}^{n}=\int_{\Omega}\phi^{n}dV\quad\Rightarrow\Delta\phi_{ \text{int}}^{n}=\phi_{\text{int}}^{n+1}-\phi_{\text{int}}^{n}\] for the changes in phase field, and \[R^{n}=\Delta t\int_{\Omega}-\frac{4}{\lambda}\phi^{n}(1-\phi^{n})\frac{1}{m_{ m}}f(T^{n},c^{n})dV\] to estimate the size of the reaction term at every time step. The resulting curves can be found in Figure 5. 
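A minimal sketch of this bookkeeping, assuming a uniform grid and reusing a reaction-rate helper such as the one sketched above, is:

```python
import numpy as np

def phase_field_integral(phi, dx, dy):
    """phi_int = integral of phi over the domain (total amount of fluid)."""
    return np.sum(phi) * dx * dy

def integrated_reaction(phi, T, c, dt, dx, dy, lam, m_m, reaction_rate):
    """R^n = dt * integral of -(4/lam) * phi * (1 - phi) * f(T, c) / m_m over the domain."""
    integrand = -(4.0 / lam) * phi * (1.0 - phi) * reaction_rate(T, c) / m_m
    return dt * np.sum(integrand) * dx * dy
```

Comparing the two quantities per time step provides the discrete mass-balance check discussed next.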
As observed here, the two lines are almost perfectly on top of each other, with a difference in the order of \(10^{-7}\), which can be explained by the chosen tolerances for the coupling iterations and the L-scheme iterations. From Figure 5 we also observe the size of the total reaction rate. Although temperature decreases, which corresponds to lower reaction rates and hence smaller change of \(\phi\), the change of \(\phi\) increases throughout the simulation as the decrease in solute concentration dominates. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Value of \(\Delta t\) & \(10^{-3}\) & \(2\times 10^{-3}\) & \(4\times 10^{-3}\) & \(8\times 10^{-3}\) \\ \hline Average number of coupling iterations & 2.21 & 2.64 & 3.11 & 3.58 \\ \hline \end{tabular} \end{table} Table 7: Average number of coupling iterations per time step for different values of \(\Delta t\) Figure 3: Number of coupling iterations (left) and number of L-scheme iterations per coupling iteration (right) over time for \(\Delta t=10^{-3}\), \(\mathcal{L}_{\text{coup}}=10^{-4}\) and \(\mathcal{L}=M_{G}\). ## 7 Discussion and conclusion In this paper we have formulated a phase-field model for reactive transport, where fluid flow, solute transport and heat transport is coupled with mineral precipitation and dissolution. The phase-field model is formulated rather general and can allow expansion/contraction effects due to the chemical reactions and is hence quasi-compressible. By taking the sharp-interface limit of the phase-field model, the model reduces to the expected sharp-interface model and recovers conservation of mass and energy in the fluid domain, conservation of energy in the mineral domain, as well as the expected exchanges of mass and energy across the evolving fluid-mineral interface. However, when using the original Allen-Cahn equation, the fluid-mineral interface itself evolves due to curvature-driven motion and would lead to non-physical loss or gain of mineral. We therefore consider a Figure 4: Phase field (left), temperature (middle) and solute concentration \(\phi c\) (right) after \(t=0.5\) (top) and \(t=1\) (bottom). Figure 5: Changes in amount of phase field \(\phi_{\text{int}}^{n}\) and integrated reaction rate \(R^{n}\) (left), and the difference between these two curves (right) over time. The two lines in the left plot are almost perfectly on top of each other and cannot be visually separated. reformulated Allen-Cahn equation which is conservative. In this case the total phase field in the considered domain can only change due to chemical reactions in the domain, but encounters a conservative redistribution towards constant curvature of mineral. Mass and energy is also conserved. The conservative Allen-Cahn equation includes a non-local term. Although easy to numerically discretize, the non-local term leads to a full Jacobian matrix if using Newton's method to solve the resulting non-linear system of equations, which makes Newton iterations expensive. We instead applied L-scheme iterations to solve the resulting non-linear system of equations. Although L-scheme iterations only offer linear convergence, they can be proven to always converge independently of time-step size and initial guess, when choosing the introduced stabilization parameter \(\mathcal{L}\) appropriately. We could also show that the L-scheme iterations will converge to a solution that is discretely conservative. 
We note that another alternative to obtain a conservative phase-field model would be to use the Cahn-Hilliard equation instead of the conservative Allen-Cahn equation. To couple the conservative Allen-Cahn equation to the other model equations, we use an iterative scheme where the various model equations are solved sequentially. This enables us to solve each equation independently of the others and to optimize the solver scheme for each equation. We here consider a simplified setup without flow, hence only solute diffusion and heat conduction is present next to the phase-field evolution and chemical reactions. In this case, we can prove the convergence of the coupling iterations when adding a stabilization term \(\mathcal{L}_{\text{coup}}\) to the phase-field equation, resulting in a restriction on the time-step size and on the stabilization term. For the future it would be beneficial to also include the flow in the coupling iterations. Incorporating the adapted Navier-Stokes equation in the proof of the convergence of the scheme is problematic, due to the occurring non-linearities and lack of reasonable bounds on the velocity. An alternative would be to use an adapted Stokes equation for the flow, which would be linear in velocity and offer the needed bounds. Using Stokes instead of Navier-Stokes is reasonable as long as the Reynolds number is small enough, which for applications in porous media is generally fulfilled. The resulting scheme hence relies on good choices for the stabilization parameters \(\mathcal{L}\) and \(\mathcal{L}_{\text{coup}}\) as well as the time-step size \(\Delta t\). Although the theoretical bound for \(\mathcal{L}\) can easily be estimated, the coupling parameter \(\mathcal{L}_{\text{coup}}\) and time-step size restriction of \(\Delta t\) cannot easily be calculated. By numerical experiments we could show that convergence of the L-scheme iterations can be achieved for lower values of \(\mathcal{L}\) than the estimated bound, which is a well-known behavior for L-scheme iterations. However, convergence of the L-scheme was significantly slower when increasing the time-step size, which is opposite what is the case when the L-scheme is applied to e.g. the Richards equation (see [20]), where the non-linearity appears in the time derivative. It is important to emphasize that each L-scheme iteration is computationally cheap. It could be possible to optimize the L-scheme convergence by adjusting the value of \(\mathcal{L}\) from time step to time step [28]. The coupling iterations appeared to be insensitive with respect to the choice of \(\mathcal{L}_{\text{coup}}\) and only slightly more iterations were needed when increasing the time-step size. Hence, the coupling iterations were robust and the resulting time-step restriction was not a limiting factor for the coupling iterations. In terms of geological applications like geothermal energy extraction, the formulated model and numerical scheme are suitable for modeling and simulation flow, transport and chemical reactions at the pore scale of the porous medium. Larger domains are usually modeled at the so-called Darcy scale, where only averaged quantities like porosity and averaged flow through permeability are used. The current phase-field model can be upscaled to Darcy scale by applying homogenization [19], similar as has been done for other phase-field models [8, 35]. 
The advantage of upscaling phase-field model in contrast to sharp-interface models (as in [41, 5]) is that constant domains are considered, which simplifies the upscaling steps. This however opens the question concerning asymptotic consistency; if the sharp-interface limit of the upscaled model is the same as the upscaled sharp-interface model, which has been established for a Cahn-Hilliard model [43]. To ensure a conservative Allen-Cahn model, the conservative Allen-Cahn equation could be applied to each so-called cell problem, which localizes the non-local term to smaller patches of the pore-scale domain. This opens for also efficient and conservative phase-field modeling using Allen-Cahn also in this case. ## Acknowledgements We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding SFB 1313, Project Number 327154368.
2306.05501
Robust Explainer Recommendation for Time Series Classification
Time series classification is a task which deals with temporal sequences, a prevalent data type common in domains such as human activity recognition, sports analytics and general sensing. In this area, interest in explainability has been growing as explanation is key to understand the data and the model better. Recently, a great variety of techniques have been proposed and adapted for time series to provide explanation in the form of saliency maps, where the importance of each data point in the time series is quantified with a numerical value. However, the saliency maps can and often disagree, so it is unclear which one to use. This paper provides a novel framework to quantitatively evaluate and rank explanation methods for time series classification. We show how to robustly evaluate the informativeness of a given explanation method (i.e., relevance for the classification task), and how to compare explanations side-by-side. The goal is to recommend the best explainer for a given time series classification dataset. We propose AMEE, a Model-Agnostic Explanation Evaluation framework, for recommending saliency-based explanations for time series classification. In this approach, data perturbation is added to the input time series guided by each explanation. Our results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy, which can be used to evaluate each explanation. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers. This novel approach allows us to recommend the best explainer among a set of different explainers, including random and oracle explainers. We provide a quantitative and qualitative analysis for synthetic datasets, a variety of timeseries datasets, as well as a real-world case study with known expert ground truth.
Thu Trang Nguyen, Thach Le Nguyen, Georgiana Ifrim
2023-06-08T18:49:23Z
http://arxiv.org/abs/2306.05501v4
# Robust Explanation Evaluation for Time Series Classification ###### Abstract Time series classification is a task which deals with a prevalent data type in domains such as human activity recognition, sports analytics and general healthcare. This paper provides a framework to quantitatively evaluate and rank explanation methods for time series classification. The recent interest in explanation methods for time series has provided a great variety of explanation techniques. Nevertheless, when the explanations disagree on a specific problem, it remains unclear which of them to use. Comparing multiple explanations to find the right answer is non-trivial. Two key challenges remain: how to quantitatively and robustly evaluate the informativeness of a given explanation method (i.e., relevance for the classification task), and how to compare explanation methods side-by-side. We propose AMEE, a robust Model-Agnostic Explanation Evaluation framework for quantifying and comparing multiple saliency-based explanations for time series classification. Data perturbation is added to the input time series guided by the saliency maps. The impact of perturbation on classification accuracy is measured and used for explanation evaluation. The results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers. This allows us to objectively quantify and rank different explanation methods. We provide a quantitative and qualitative analysis for synthetic datasets, a variety of time-series datasets, as well as a real-world dataset with known expert ground truth. Keywords:Explainable AI Trustworthy AI Explanation Evaluation Time Series Classification ## 1 Introduction The last decade witnessed a rapid integration and significant impact of machine learning in everyday life. Current machine learning algorithms work well in many applications and grow ever more complex with models having millions of parameters [5, 12]. However, we are far behind in explaining why these algorithms often work so well and occasionally fail to perform well [15]. _While we have a whole range of new explanation methods and methodologies, it is still difficult to decide which explanation methods to use for a given problem and dataset_. This unmatched growth of complexity and explainability of many machine learning algorithms, including those dealing with time series data, undermines application of these technologies in critical, human-related areas such as healthcare, and finance [6, 26]. As time series data is prevalent in these applications [3, 31, 32], Time Series Classification (TSC) algorithms often call for reliable explanations [4, 21]. This explanation is usually presented in the form of feature importance or as a _saliency map_[1], highlighting the parts of the time series which are _informative_ for the classification decision. 
Recent efforts both in designing intrinsically explainable machine learning algorithms, as well as building post-hoc methods explaining black-box algorithms, have gained significant attention [33, 36, 37, 27, 42]; yet, these efforts present us with a new challenge: _How to assess and objectively compare such methods?_ In other words, if two or more explanation techniques give different explanations (i.e., two different saliency maps coming from the same classifier or different classifiers, Figure 1), which technique and explanation should we use and trust? Our solution to this problem is a robust methodology resulting in a standardized metric, enabling quantitative comparison of explanation methods (Table 1). From the application users' perspective, having this assessment can support short-listing of useful explanations for user-studies which are generally costly and difficult to reproduce [13]. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & Random & GradSHAP & ROCKET-LIME & MrSEQL-SM & Oracle \\ \hline Explanation Power [0, 1] & 0 & 0.06 & 0.56 & 0.90 & 1 \\ \hline Method Ranking & 5 (worst) & 4 & 3 & 2 & 1 (best) \\ \hline \end{tabular} \end{table} Table 1: Outcome of AMEE: a metric to evaluate multiple explanation methods. Explanation Power measures the informativeness of the explanation method, taking values from 0 to 1, where 0 is worst and 1 is best. Figure 1: Saliency map explanations for classifying a motion time series obtained using different explanation methods. The most discriminative parts (according to the method) are colored in deep red and the non-discriminative parts are colored in deep blue. In this paper, we present _A Model-agnostic framework for **E**xplanation **E**valuation for Time Series Classification_ (**AMEE**). Specifically, we focus on explanation in the form of a _saliency map_ and consider its informativeness within a defined computational scope, in which a more informative explanation means a higher capability to influence classifiers to correctly identify a class. Perturbation of discriminative subsequences of the time series results in a reduced accuracy of classifiers. The higher the impact of a perturbation means the more informative are the perturbed time series subsequences. Estimation of this impact, measured by a committee of highly accurate **referee classifiers**, can reveal the informativeness of the explanation. **Key Contributions.** Our work addresses an overlooked area of research: robust comparison and ranking of multiple explanation methods for time series classification. Our main contributions are: * A robust model-agnostic, ensemble-oriented explanation evaluation framework. We show through extensive experiments that a committee approach involving multiple types of data perturbations and multiple evaluating classifiers leads to explanation evaluation and ranking that better agrees with the explanation ground truth (synthetic data) and domain expert ground truth (real data). * A standardised evaluation metric (Explanation Power) that is comparable across different explanation methods, different referee classifiers and different datasets. * A rigorous study on both synthetic and real datasets with recent state-of-the-art time series classifiers and explanation methods. Verification of the evaluation methodology with annotated, real datasets. All data, code and detailed results are available1. 
Footnote 1: Data and code: [https://github.com/mlgig/amee](https://github.com/mlgig/amee)

In the next sections we review related Explainable AI research, including both time-series-specific and general methods (Section 2), describe our proposed solution (Section 3), and discuss experiments on both synthetic and real time series datasets, with a case study (Section 4). We finally summarize our results (Section 5) and discuss further considerations and recommendations for using our methodology to evaluate explanations for time series classification.

## 2 Related Work

### 2.1 Explanation Methods for Time Series Classification

The XAI methods for Time Series Classification have advanced in parallel with general XAI progress. Although recent works on _instance-based_ methods, such as factual and counterfactual explanations [41, 10], have become popular, the majority of explanation methods exist in the form of _saliency maps_, where a map visualizes the importance weight vector \(w\) and highlights the discriminative areas of a time series for the classification task. These _saliency-based explanations_ can either be extracted directly from the classifier (intrinsic explanation), or indirectly by applying a post-hoc explanation method to the black-box classifier (post-hoc explanation).

**Intrinsic Explanation.**_Explanation from the MrSEQL Time Series Classifier._ MrSEQL [24] is a time series classification algorithm that is intrinsically explainable. The algorithm converts the numeric time series vector into strings, e.g., by using the SAX [25] transform with varying parameters to create multiple symbolic representations of the time series. The symbolic representations are then used as input for SEQL [19], a sequence learning algorithm, to select the most discriminative subsequences for training a classifier using logistic regression. The symbolic features combined with the classifier weights learned by logistic regression make this classification algorithm explainable. For a time series, the explanation weight of each data point is the accumulated weight of the SAX features that it maps to. These weights can be mapped back to the original time series to create a saliency map that highlights the time series parts important for the classification decision. We call the saliency map explanation obtained this way MrSEQL-SM. To use the weight vector from MrSEQL-SM, we take the absolute value of the weights to obtain a vector of non-negative weights.

_Explanation from a Generic, White-box Classifier._ A generic, white-box classifier such as Logistic Regression or Ridge Regression has been the primary source of feature importance, especially for tabular data [18]. These classifiers and their explanations can be computationally cheap and useful for time series data [14].

**Post-hoc Explanation.**_Gradient-based Explanation._ This approach uses the gradients from a trained deep neural net to infer explanations. Notable methods are Integrated Gradient [39], GradientSHAP [28], GradCAM [36], and CAM [42].

_Perturbation-based Explanation._ This type of method infuses noise into the data to create data variations and to infer the relative importance of data points. Notable methods are Feature Occlusion [40] and LIME [33]. One of the most popular post-hoc explanation methods is SHAP [27], a unique way to explain any machine learning model using a game-theoretic approach in which all feature coalitions are evaluated. Feature importance is then calculated using the classic Shapley value [38].
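To make the construction of MrSEQL-SM concrete, the sketch below accumulates the classifier weights over the time points covered by the matched SAX features and then takes absolute values; the (weight, time_indices) representation is a simplification assumed here for illustration and is not the actual data structure of the MrSEQL implementation.

```python
import numpy as np

def mrseql_saliency(ts_length, matched_features):
    """MrSEQL-SM style saliency map: each time point accumulates the logistic
    regression weights of the SAX features it maps to; absolute values give a
    non-negative importance vector.
    matched_features: iterable of (weight, time_indices) pairs (hypothetical)."""
    w = np.zeros(ts_length)
    for weight, time_indices in matched_features:
        w[np.asarray(time_indices)] += weight
    return np.abs(w)

# Example with two hypothetical matched features on a series of length 10:
saliency = mrseql_saliency(10, [(0.8, [2, 3, 4]), (-0.5, [3, 4, 5])])
```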
### Quantitative Evaluation of Saliency-based Explanation Quantitative evaluation of explanations for time series data was a relatively untouched topic until recently. Unlike image and text, time series data often does not have annotated ground truth importance; hence, it remains a challenge to determine whether a saliency-based explanation is correct. Attempts from the research community [8, 20] to benchmark and evaluate faithfulness of recent explanation methods overcome this problem by using synthetic datasets with assigned ground-truth. Other research ventures into real datasets, yet these efforts focus on examining explanations by a single classifier [8] or averaging a non-comparable metric across multiple datasets [35]. The approach in [16] uses a white-box classifier to get a pseudo ground-truth explanation (a) and evaluates a post-hoc, localized explanation method (b) by estimating cosine distance between (a) and (b). However, this method assume that the white-box classifiers can always produce explanations of ground-truth quality. We show in our experiments that this is not the case. Notably, [30, 2, 35] propose methods to quantify explanation methods, however, there are a few problems with the comparison: the metric used (change in accuracy) is not comparable across the selected datasets, the individual effect is not separated (only average change in accuracy is reported), and there is no discussion involving explanation ground-truth. Most importantly, there is little discussion in previous work about the impact of the classifier(s) accuracy on evaluating the explanation methods that are based on those classifier(s). This is an important point, as the evaluation can only be trusted if the classifier(s) are reliable. ## 3 Methodology In this section, we describe our proposed methodology and related concepts. Specifically, we present the blueprint of AMEE in Figure 2. The framework involves a labelled time series dataset (split into training and test), a set of explanation methods to be compared, and a set of diverse evaluating classifiers (**Referee Classifiers**). The output of the framework is the _explanation power_ of each explanation method (see Table 1). ### Saliency-based Explanations In the context of this paper, we only consider explanations in the form of saliency maps. A saliency map is numerically represented by a vector of weights \(M=[w_{0},\dots,w_{l}]\) where \(w_{i}\in\mathbb{R}\) and \(l\) is the length of the time series. The value \(w_{i}\) implies the importance (or saliency) of the time point \(i\) in the process of prediction making. This vector can be obtained from annotation (by a human) or computed by an explanation method. The explanation method can come from a classification model (intrinsic explanation) or a black-box model coupled with a post-hoc explanation method (post-hoc explanation). Figure 2: AMEE evaluation framework requires 3 elements: **(a) a dataset** that requires explanation evaluation, **(b)** a set of **saliency-based explanations**, and **(c)** a set of **referee classifiers** trained on a subset of (a). #### 3.1.2 Random Explanation. An explanation generated through random sampling. The weights \(w_{i}\) are drawn from a random uniform distribution. It serves as a baseline for any _reasonable_ explanation method. While we expect this explanation to be the worst explanation in a ranking of several explanation methods, there exists situations where a random explanation outperforms a method-based explanation method. 
Specifically, when a method-based explanation highlights non-discriminative parts, or fails to identify any discriminative parts, that explanation can be considered worse than a random explanation. #### 3.1.3 Oracle Explanation. In cases where we have explanation ground truth (e.g. for synthetic datasets), this should be the upper bound for any explanation method. ### Explanation-guided Data Perturbation A good saliency-based explanation for a time series should highlight its discriminative part(s) that contain class-specific information to distinguish from other classes. Data perturbation is the process of adding noise to the data by replacing selected time points in the time series. Explanation-guided Perturbation is done when a saliency-based explanation is used to determine the specific time points of the time series to be perturbed. As a result, the more informative the explanation, the higher the decrease in accuracy of a classifier is expected, since that explanation-guided perturbation knocks out important class-specific information in the respective time series. Given a threshold \(k\) (\(0\leq k\leq 100\)), the discriminative parts of a time series of \(l\) steps are defined as a set of \(k*l/100\) time steps that have the highest weights in the saliency map \(M\). In other words, the discriminative parts are identified by the top \(k\)-percentile in \(M\). Varying \(k\) allows us to control the scope of the perturbation. At \(k=0\), the time series is kept as original; at \(k=20\), only 20 percent of the time steps (that are most discriminative) are affected; at \(k=100\) the entire time series is distorted. ### Referee Classifiers In our work we employ a set of independent, diverse and accurate classifiers that are trained with the original training set and are used to evaluate the target explanations. This committee is formed of member classifiers that we call Referee Classifiers. In order to evaluate the explanation methods, our framework measures the impact of each explanation-guided data perturbation on the accuracy of the referee classifiers. ### Data Perturbation Strategy In Figure 3 we explain and visualize four strategies to perturb the discriminative areas for a time series [29]. These strategies are either time-step dependent (local perturbation, using only the _t-th_ step information) or time-step independent (global perturbation), using either Gaussian-based or single value replacement. With these strategies, discriminative time steps are replaced with noisy values, either making the original time series having a patch of constant values (like a grey mask in an image) or a patch of random Gaussian noise (like a noise mask in an image). Let \(S\) be the number of time series in a dataset, each with \(l\) time steps. We now want to perturb one out-of-sample test time series of size \(1\times l\), so its _t-th_ value \(x_{t}\) is replaced with a new value \(r_{t}\). 
We define the global and local profile for this time step perturbation as follows: \[\text{Local-based:}\quad\quad\mu_{t}=\frac{1}{|S|}\sum_{s\in S}s_{t};\quad \sigma_{t}^{2}=\frac{1}{|S|-1}\sum_{s\in S}(s_{t}-\mu_{t})^{2}\] \[\text{Global-based:}\quad\quad\mu=\frac{1}{l|S|}\sum_{i=1}^{l}\sum_{s\in S}s_{ i};\quad\sigma^{2}=\frac{1}{l|S|-1}\sum_{i=1}^{l}\sum_{s\in S}(s_{i}-\mu)^{2}\] With these local-based and global-based profiles, we can define the perturbation \(r_{i}\) accordingly: Local mean: \(r_{t}^{(1)}=\mu_{t}\); Local Gaussian: \(r_{t}^{(2)}\sim\mathcal{N}(\mu_{t},\,\sigma_{t}^{2})\); Global mean: \(r_{t}^{(3)}=\mu\); Global Gaussian: \(r_{t}^{(4)}\sim\mathcal{N}(\mu,\,\sigma^{2})\). Figure 3 illustrates an example of how the 4 strategies effectively modify the original time series in its discriminative regions. ### AMEE: Model-Agnostic Framework for Explanation Evaluation in Time Series Classification Figure 2 summarizes the components and steps in the AMEE framework. Our framework requires a labeled dataset (\(D\)) that needs explanation evaluation, a set of explanations (\(M\)) to evaluate, and a set of referee classifiers (\(R\)) to be trained on a subset of the dataset. With these elements, the following steps are done to record the necessary information to calculate evaluation metrics: 1. Split the labeled dataset \(D\) into training (\(D_{train}\)) and test (\(D_{test}\)); 2. Train Referee Classifier(s) (\(R\)) with (\(D_{train}\)); 3. Use each explanation in \(M\) to create a step-wise, explanation-based perturbation on \(D_{test}\); 4. Measure the accuracy of each trained referee in \(R\) on these perturbed datasets \(D^{\prime}_{test}\). Figure 3: Time Series Data Perturbation strategy: An example time series with a known saliency map (left) is perturbed using mean or Gaussian noise using local time steps (local) or global time steps (global) across the entire dataset on its most discriminative region (top 20%). The output of this process is the accuracy on the perturbed dataset \(D^{\prime}_{test}\) at various thresholds (\(k\)), serving as an indicator of how much an explanation-based perturbation impacts the referees. Significant drop in accuracy in the first few steps of the explanation-based perturbation (e.g., at \(k=10\)) signals that meaningful, salient data points are disturbed based on the explanation. Hence, explanations that correctly identify such salient regions are likely to be informative. #### 3.2.2 Explanation AUC. We measure the impact of perturbation guided by an explanation method by estimating the Area Under the Curve (AUC) of its explanation-based perturbation. Specifically, the accuracy scores at each threshold (\(k\)) are translated into an Explanation-AUC (\(EAUC\)) using the trapezoidal rule. Smaller \(EAUC\) means higher impact (accuracy loss) triggered by the explanation method (Figure 4). The Explanation AUC is computed for each combination of Perturbation - Referee - Explanation method (Figure 5: Step 1). #### 3.2.3 Metric Standardization & Explanation Power. AMEE employs multiple perturbation strategies and multiple referee classifiers. As the EAUC measures depend on the choices of referees and the perturbation strategies, they are not directly comparable. The next steps (Figure 5: Step 2-5) standardize and aggregate the EAUC to compute the final output of the framework: the **Explanation Power**. Step 2 rescales the Explanation AUC to the same range \([0,1]\) for each row (i.e. each pair of Referee and Perturbation). 
The red highlighted row is an example. After rescaling the Explanation AUC, the **Average Scaled EAUC** is computed in Step 3. It is basically the average of the column in Step 2. For example, the Average Scaled EAUC of ROCKET-SHAP is \((0.43+0.26+0.69+0.42+0.33+0.67)/6=0.47\). In Step 4, the Average Scaled EAUC is again rescaled to the range between 0 and 1. The result is the **Average Scaled Rank** (lower is better). The **Explanation Power** is simply the inverse of the Average Scaled Rank (\(1-\) Average Scaled Rank), i.e., higher is better. Details of this calculation are summarised in Algorithm 1.

Figure 4: Changes of accuracy measured by a referee classifier for two explanation methods (red and blue) at each threshold level \(k\). When a signal is perturbed based on a more informative explanation, this signal becomes harder for the referee to classify correctly, leading to a more severe drop in accuracy. This impact is measured by the **Explanation AUC**, the area under the curve (AUC) of these changes in accuracy at different thresholds \(k\). The curve with the lower Explanation AUC (red curve) results from perturbation based on a more informative explanation method.

Figure 5: Metric Standardization and Explanation Power Calculation. This table illustrates an example of how Explanation Power is derived in a typical evaluation assessment, involving 2 perturbation strategies (local, global) and 3 referees (MrSEQL, k-nn, ROCKET).

## 4 Experiments

In this section, we evaluate the performance of the AMEE framework in 3 cases in ascending order of difficulty. In the simplest case, we want to confirm the validity of AMEE with synthetic datasets with known explanation ground-truth [20]. Next, we measure the performance of the framework with a diverse set of time series classification datasets covering popular domains that require explanation [9]. Finally, we test our framework on a real dataset and compare the result with ground-truth explanations provided by a domain expert.

**Referees.** We employ 5 candidates for referee classifiers in our experiment, selected based on their accuracy, speed and diversity of approaches [34]: baseline 1NN-DTW [7], MrSEQL [24], ROCKET [11], RESNET [17, 22] and WEASEL 2.0 [34]. As the choice of referees is a critical component in our framework, we carefully select classifiers that perform well in accuracy on all studied datasets. For a classifier to be selected in the referee committee, it has to achieve at least the average accuracy of all candidates for referee classifiers, and this number has to be higher than the theoretical accuracy achieved by a random classifier. In case the average accuracy is over 90%, the threshold to choose referees will be 90%. Details of the referee accuracy performance are presented in the Supplementary Materials.

**Explanation Methods.** In our experiment, we evaluate 8 popular explanation methods, representing the various properties of explanation (Table 2).

### Evaluation for Synthetic Data with Known Ground Truth

**Data.** We work with 10 synthetic univariate datasets selected by taking the mid-channel from the benchmark generated by [20].
The datasets are created using five processes: (a) a standard continuous autoregressive time series with Gaussian noise (CAR), (b) sequences of standard non-linear autoregressive moving average (NARMA) time series with Gaussian noise, (c) nonuniformly sampled from a harmonic function (Harmonic), (d) nonuniformly sampled from a pseudo period function with Gaussian noise (Pseudo Periodic), and (e) Gaussian with zero mean and unit variance (Gaussian Process). The important areas, either a Small Middle part (30% of time series length) or a very small (Rare Time) part (10% of time series length), are created by adding or subtracting a constant \(\mu\) (\(\mu\) = 1) for the positive and negative class. The number of time steps is \(T\) = 50. Each dataset comprises of 500 samples in training set and 100 samples in testing set. Figure 6 visualizes the two classes in the 10 datasets. Before presenting the experiment \begin{table} \begin{tabular}{l c c c c} \hline \hline Explanation & Type & Model & Explanation & Time-Series \\ Method & & Specific & \multicolumn{1}{c}{Scope} & Specific \\ \hline GradientSHAP & Post-hoc & Yes & Global & No \\ Integrated Gradient & Post-hoc & Yes & Global & No \\ MrSEQL-LIME & Post-hoc & No & Local & No \\ ROCKET-LIME & Post-hoc & No & Local & No \\ MrSEQL-SHAP & Post-hoc & No & Local & No \\ ROCKET-SHAP & Post-hoc & No & Local & No \\ MrSEQL-SM & Intrinsic & Yes & Global & Yes \\ RidgeCV-SM & Intrinsic & Yes & Global & No \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of Explanation Methods properties result of our evaluation on the synthetic datasets, we discuss the effect of the Data Perturbation strategy and Referees, followed by a sanity check for source quality using model-agnostic post-hoc explanation methods. Figure 6: Visualization of the Synthetic Datasets. The columns describe the process used to create the dataset, while the row specifies the specific salient areas. Figure 7: Impact of data perturbation strategy on explanation power for each explanation method. A smaller box-range (which comes from 4 perturbation methods) indicates a smaller change of explanation power with different masking strategies. For datasets that are ”easier” to classify by referees, this range is often smaller than that of ”harder”-to-classify datasets. Having various perturbation strategies is useful due to common agreement in explanation power among different masking methods. Impact of Data Perturbation Strategy.Figure 7 shows the boxplots of explanation power by different masking strategy. In datasets which are "easier" to classify (i.e. most classifiers get close to 100% accuracy) such as CAR and NARMA, explanation power does not change with perturbation strategy. On the other hand, we observe a larger change in explanation power when data is harder to classify (for example, in SmallMiddle_GaussianProcess dataset). Nevertheless, in extreme cases when explanation are Ground Truth (Oracle upper bound of explanation) and Random (lower bound of explanation), we generally observe that these explanations are the most and least informative informative methods, respectively. In the rest of the paper, we will leverage this advantage of having various types of perturbations and present the average explanation power by the four strategies presented in Section 3. Impact of Referees.Similar to the previous investigation on impact of masking strategy, we now inspect how explanation power changes with respect to referees and present the result in Figure 8. 
Here, we also notice a relatively consistent explanation power among different referee classifiers in datasets that are easier to classify (such as CAR- and NARMA-based datasets). In datasets that are harder to classify (for example, in Gaussian Process-based datasets), we observe larger ranges in the distribution of explanation power over the referee classifiers. Random and Oracle explanations both have their explanation power at the expected values for the evaluated datasets. Nevertheless, having a committee of referees that are highly accurate is desirable and helps reduce the bias that a single referee can introduce, leading to a more stable evaluation.

Sanity check for Impact of Base Model Quality. Model-agnostic post-hoc methods such as LIME and SHAP derive explanations based on a classifier of any type. Thus, these explanations are dependent on the performance of the base model. For example, the ROCKET-SHAP explanation is created by applying SHAP (explanation method) on ROCKET (source classifier). If a source classifier performs poorly (i.e., has low accuracy) on the sample dataset, the explanation based on that classifier cannot be expected to be as good as one based on a more accurate source. In our experiment, we get LIME and SHAP explanations from two sources: the MrSEQL classifier [24] and ROCKET [11]. We observe that ROCKET achieves higher accuracy than MrSEQL on the datasets created from Pseudo Periodic, Harmonic, and Gaussian Process (Table 6 in Supplementary Materials). We compare the two pairs of explanations (MrSEQL-based and ROCKET-based) from LIME and SHAP and do a sanity check. Our experiment confirms that in both cases, ROCKET-LIME and ROCKET-SHAP are considered better explanation methods compared to MrSEQL-LIME and MrSEQL-SHAP, respectively (Figures 7 and 8). This sanity check confirms our intuition that the quality of the source classifier is an important factor in model-agnostic, post-hoc explanation methods such as LIME and SHAP.

Results. Using a committee of 5 referees and 4 perturbation strategies, we evaluate 8 explanation methods together with the lower bound explanation (Random) and upper bound explanation (Oracle) using the AMEE framework. The resulting explanation power is presented in Table 3. We also compare the explanation methods with the ground truth explanation for each time point and calculate the F1-score (Table 4) to measure how good each method is at determining whether a time point is salient. Our results show a high agreement between AMEE's explanation power and the F1-score based on ground-truth time-point importance.

Figure 8: Impact of referees on explanation power for each explanation method. A smaller box range (which comes from 5 referee classifiers) signals a smaller change of explanation power across referees. In addition, for a specific dataset, the relative position of this range indicates the degree of disagreement among the referees in their votes. Nevertheless, having a committee of referees that are highly accurate is generally desirable.
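For reference, the scoring steps of Section 3 (Explanation AUC, followed by Steps 2-5 of Figure 5) can be sketched as follows; the array layout and the function names are assumptions made for illustration, and the released repository contains the actual implementation.

```python
import numpy as np

def explanation_auc(accuracies, thresholds):
    """Step 1: area under the accuracy-vs-threshold curve (trapezoidal rule).
    A lower value means the explanation-guided perturbation hurts accuracy more."""
    return np.trapz(accuracies, thresholds)

def explanation_power(eauc):
    """eauc: array of shape (n_referee_perturbation_pairs, n_explanations) with one
    Explanation AUC per (referee, perturbation) pair and explanation method."""
    lo = eauc.min(axis=1, keepdims=True)
    hi = eauc.max(axis=1, keepdims=True)
    scaled = (eauc - lo) / (hi - lo)                    # Step 2: rescale each row to [0, 1]
    avg = scaled.mean(axis=0)                           # Step 3: average scaled EAUC per method
    rank = (avg - avg.min()) / (avg.max() - avg.min())  # Step 4: average scaled rank (lower is better)
    return 1.0 - rank                                   # Step 5: explanation power (higher is better)
```

With the toy numbers of Figure 5, averaging the scaled EAUCs in the ROCKET-SHAP column reproduces the value of 0.47 from the worked example above.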
\begin{table} \begin{tabular}{|l||c c c c c c c c c c|} \hline & \multicolumn{3}{c|}{Grad} & Int & \multicolumn{3}{c|}{MRSEQL ROCKET} & \multicolumn{3}{c|}{MRSEQL} & \multicolumn{3}{c|}{RODGETCV} \\ Dataset & Random & SHAP & Gradient & -LIME & -LIME & -SHAP & -SHAP & -SM & -SM & Oracle \\ \hline SM & CAR & 0.00 & 0.73 & 0.73 & 0.39 & 0.58 & 0.72 & 0.76 & 0.31 & 0.79 & 1.00 \\ SM & NARMA & 0.00 & 0.61 & 0.62 & 0.33 & 0.45 & 0.64 & 0.71 & 0.30 & 0.78 & 1.00 \\ SM & Harmonic & 0.00 & 0.87 & 0.86 & 0.26 & 0.54 & 0.41 & 0.76 & 0.53 & 0.91 & 1.00 \\ SM & PseudoPeriodic & 0.00 & 0.83 & 0.84 & 0.28 & 0.53 & 0.40 & 0.77 & 0.21 & 0.85 & 1.00 \\ SM & GaussianProcess & 0.00 & 0.43 & 0.53 & 0.25 & 0.54 & 0.23 & 0.80 & 0.48 & 0.50 & 1.00 \\ \hline RT & CAR & 0.00 & 0.92 & 0.93 & 0.81 & 0.81 & 0.85 & 0.85 & 0.14 & 0.95 & 1.00 \\ RT & NARMA & 0.00 & 0.90 & 0.92 & 0.79 & 0.61 & 0.85 & 0.84 & 0.17 & 0.93 & 1.00 \\ RT & Harmonic & 0.00 & 0.93 & 0.94 & 0.32 & 0.65 & 0.55 & 0.84 & 0.50 & 1.00 & 0.94 \\ RT & PseudoPeriodic & 0.00 & 0.92 & 0.96 & 0.28 & 0.79 & 0.54 & 0.92 & 0.34 & 0.97 & 1.00 \\ RT & GaussianProcess & 0.00 & 0.73 & 0.67 & 0.09 & 0.81 & 0.39 & 0.98 & 0.65 & 0.85 & 1.00 \\ \hline \end{tabular} \end{table} Table 3: Synthetic Datasets: Explanation Power by Methods. ### Evaluation for real Time Series Data **Data.** We work with 17 datasets from the UCR Archive [9] that represent a variety of data sources. Oracle explanation is not available for these datasets. **Results.** We test explanations for these datasets with AMEE and report the result in Table 5. Since we do not have ground truth for majority of these datasets, we use this experiment to show how AMEE can apply to real datasets. It is noticeable that Random explanation sometimes outperforms a method-base explanation. This is an expected situation, as some explanation methods may not work well with certain datasets, resulting in _unreasonable_ explanations that misleadingly highlight non-discriminative parts as discriminative, or fail to identify any significant discriminative parts at all. In this situation, evaluation of random explanations can serve as a filter for _reasonable_ explanation methods, and any methods that have lower performance should be seriously red-flagged. ### Evaluation for Real Dataset with Expert Ground Truth Oracle explanation is the upperbound for any explanation method, however, it is only available in synthetic datasets. Real datasets ground truth is often available in an approximate level of precision, e.g., specifying the relative position of the shape and areas of importance. This approximate ground truth is widely used in other papers in evaluating explanation methods for images [23, 36, 42], however, this approximate ground truth is not readily available for time series data without opinions from data expert. Among the the datasets evaluated in Section 4.2, only CounterMovementJump (CMJ) has this information of the true important areas for each class of the dataset [24]. CMJ dataset records the counter movement jumps of participants of 3 classes: Normal (jump done correctly), Bend (jump with knee bend), and Stumble (stumble at landing). According to the sports expert who recorded this data [24], the critical area for the first two classes (normal and bend) is the middle part, while that of the final class (stumble) is in the end of the time series records. In class Normal, this region is completely flat and indifferent from neighboring time point. 
The same region shows only a small hump when the participant's knees are in a bending posture. In the Stumble class, the end of the time series differs from the previous two classes because of a very high, sharp peak caused by an incorrect landing position.

\begin{table}
\begin{tabular}{|l|l||c c c c c c c c c c|}
\hline
Type & Dataset & Random & Grad-SHAP & Int-Gradient & MrSEQL-LIME & ROCKET-LIME & MrSEQL-SHAP & ROCKET-SHAP & MrSEQL-SM & RidgeCV-SM & Oracle \\
\hline
SM & CAR & 0.36 & 0.81 & 0.83 & 0.59 & 0.70 & 0.83 & 0.83 & 0.50 & 0.86 & 1.00 \\
SM & NARMA & 0.36 & 0.75 & 0.77 & 0.61 & 0.66 & 0.83 & 0.83 & 0.43 & 0.84 & 1.00 \\
SM & Harmonic & 0.36 & 0.56 & 0.59 & 0.51 & 0.52 & 0.63 & 0.80 & 0.44 & 0.61 & 1.00 \\
SM & PseudoPeriodic & 0.36 & 0.54 & 0.56 & 0.45 & 0.58 & 0.58 & 0.74 & 0.55 & 0.56 & 1.00 \\
SM & GaussianProcess & 0.36 & 0.19 & 0.24 & 0.42 & 0.50 & 0.38 & 0.65 & 0.61 & 0.16 & 1.00 \\
\hline
RT & CAR & 0.21 & 0.84 & 0.87 & 0.59 & 0.64 & 0.74 & 0.72 & 0.16 & 0.92 & 1.00 \\
RT & NARMA & 0.21 & 0.78 & 0.85 & 0.60 & 0.50 & 0.72 & 0.74 & 0.16 & 0.91 & 1.00 \\
RT & Harmonic & 0.21 & 0.42 & 0.47 & 0.29 & 0.39 & 0.46 & 0.69 & 0.29 & 0.65 & 1.00 \\
RT & PseudoPeriodic & 0.21 & 0.51 & 0.55 & 0.25 & 0.47 & 0.37 & 0.63 & 0.29 & 0.68 & 1.00 \\
RT & GaussianProcess & 0.21 & 0.21 & 0.20 & 0.22 & 0.37 & 0.25 & 0.51 & 0.34 & 0.33 & 1.00 \\
\hline
\end{tabular}
\end{table}
Table 4: Synthetic Datasets: F1-score of important time points identified by Explanation Methods. _Abbreviation_: SM - Small Middle, RT - Rare Time.

The results of AMEE for the various explanation methods are also given in Table 5. The CMJ row shows that the top 3 explanations for this dataset are MrSEQL-SHAP (SHAP explanation based on the MrSEQL classifier), MrSEQL-LIME (LIME explanation based on the MrSEQL classifier), and MrSEQL-SM (the saliency map obtained directly from the MrSEQL classifier). We see a high agreement between these methods, as they all correctly highlight the discriminative areas indicated by the expert (Figure 9). In addition, the methods identified by AMEE as unreliable are also shown to highlight incorrect areas and do not agree with the opinion of the domain expert (e.g., the explanation provided by the Integrated Gradient method).

### Discussion

Our study, carried out on both synthetic and UCR datasets, shows that AMEE can be conveniently used by a non-expert to computationally evaluate and compare different explanation methods. We recommend using AMEE with full knowledge of the essential elements of the method. First, referees should be selected carefully, using classifiers of acceptable accuracy as determined by the application users. Using a committee of multiple accurate referee classifiers is recommended to reduce possible biases that a single referee could introduce, resulting in a more reliable evaluation. Second, having a variety of perturbation methods is helpful, especially in the case of hard-to-classify datasets.
In addition, adding a random explanation while carrying out the evaluation with AMEE is helpful in identifying unreliable explanation methods. A worse-than-random explanation fails to trigger a change in the referee classifiers even when compared to a random explanation, either because it does not identify the important areas or because it does not focus on any important areas at all. Finally, we recommend adding SHAP-based methods on top of any (reasonable) base classifier for testing and further evaluation, as our experiments show that SHAP-based explanations often outperform LIME-based explanations built on the same base classifiers.

\begin{table}
\begin{tabular}{|l|l||c c c c c c c c c|}
\hline
Data Type & Dataset & Random & Grad-SHAP & Int-Gradient & MrSEQL-LIME & ROCKET-LIME & MrSEQL-SHAP & ROCKET-SHAP & MrSEQL-SM & RidgeCV-SM \\
\hline \hline
SIMUL & CBF & 0.00 & 0.77 & 0.72 & 0.70 & 0.90 & **1.00** & 0.84 & 0.77 & 0.27 \\
 & TwoPatterns & 0.00 & 0.86 & 0.85 & 0.89 & 0.95 & 0.83 & **1.00** & 0.88 & 0.52 \\
\hline
ECG & ECG5000 & 0.00 & 0.13 & 0.17 & 0.67 & 0.57 & 0.84 & **1.00** & 0.46 & 0.45 \\
 & ECG5000 & 0.00 & **1.00** & 0.99 & 0.67 & 0.57 & 0.61 & 0.82 & 0.30 & 0.29 \\
 & ECGFiveDays & 0.54 & 0.77 & 0.71 & 0.39 & 0.54 & 0.91 & **1.00** & 0.00 & 0.27 \\
 & TwoLeadECG & 0.39 & 0.17 & 0.18 & 0.44 & 0.40 & 0.92 & **1.00** & 0.05 & 0.00 \\
\hline
MOTION & GunPoint & 0.81 & 0.74 & **1.00** & 0.77 & 0.00 & 0.92 & 0.84 & 0.89 & 0.56 \\
 & CMJ & 0.24 & 0.06 & 0.00 & 0.99 & 0.56 & **1.00** & 0.84 & 0.90 & 0.31 \\
\hline
DEVICE & PowerCons & 0.50 & 0.69 & 0.68 & 0.37 & 0.76 & 0.45 & **1.00** & 0.00 & 0.66 \\
\hline
SPECTRO & Coffee & 0.00 & 0.36 & 0.53 & 0.53 & 0.37 & 0.86 & **1.00** & 0.72 & 0.57 \\
 & Strawberry & 0.65 & 0.41 & 0.43 & 0.91 & 0.66 & **1.00** & 0.40 & 0.70 & 0.00 \\
\hline
SENSOR & Car & 0.45 & 0.15 & 0.00 & 0.42 & 0.36 & **1.00** & 0.74 & 0.33 & 0.44 \\
 & ItalyPower & 0.27 & **1.00** & 0.95 & 0.16 & 0.13 & 0.11 & 0.29 & 0.00 & 0.54 \\
 & Plane & 0.75 & 0.80 & 0.78 & 0.50 & 0.00 & 0.67 & **1.00** & 0.19 & 0.32 \\
 & Sony1 & 0.23 & 0.84 & 0.80 & 0.21 & 0.52 & 0.32 & **1.00** & 0.00 & 0.83 \\
 & Sony2 & 0.32 & 0.99 & **1.00** & 0.45 & 0.41 & 0.37 & 0.82 & 0.00 & 0.57 \\
 & Trace & 0.33 & 0.25 & 0.21 & 0.77 & 0.26 & **1.00** & 0.79 & 0.16 & 0.00 \\
\hline \hline
 & **Average** & 0.32 & 0.59 & 0.59 & 0.58 & 0.47 & 0.75 & **0.85** & 0.37 & 0.39 \\
\hline
 & **Count of 1.00** & 0/17 & 2/17 & 2/17 & 0/17 & 0/17 & 5/17 & **8/17** & 0/17 & 0/17 \\
\hline
\end{tabular}
\end{table}
Table 5: Explanation Power of selected UCR Datasets. The most informative method is highlighted in **bold**. The last row shows the number of times a method is selected as the most informative explanation according to AMEE.

## 5 Conclusion

In this work, we proposed AMEE, a Model-Agnostic Explanation Evaluation framework, for computationally assessing and ranking explanation methods for the time series classification task. We test the framework on both synthetic and selected UCR archive datasets to obtain explanation evaluations for a wide variety of common explanation methods for time series, covering different aspects of explanation including type, scope, and model dependency.
Our experiments show a high agreement of explanation power (measured by AMEE) on both the synthetic datasets with the Oracle explanation (ground truth at time-point precision) and the real dataset with the Expert explanation (ground truth provided by a domain expert). This assessment can be used to select appropriate explanation methods for application users or to shortlist candidate methods for more detailed and expensive user studies. In addition, AMEE could potentially pinpoint inherent problems, such as biases, that may exist in the training data. More importantly, this evaluation further empowers machine learning to discover new knowledge from the data. Future work includes devising a robust, AMEE-optimized explanation method, using domain experts to evaluate its validity, and demonstrating the potential for knowledge discovery with this framework in biomedical and healthcare-related tasks such as genetic data understanding and sports analytics.

Figure 9: Visualization of the worst explanation method (top row) and the top 3 best explanation methods (bottom 3 rows) identified by AMEE. Visualizations of other explanations are presented in Section 6.
2308.13817
Decomposable forms generated by linear recurrences
Consider $k\ge 2$ distinct, linearly independent, homogeneous linear recurrences of order $k$ satisfying the same recurrence relation. We prove that the recurrences are related to a decomposable form of degree $k$, and there is a very broad general identity with a suitable exponential expression depending on the recurrences. This identity is a common and wide generalization of several known identities. Further, if the recurrences are integer sequences, then the diophantine equation associated to the decomposable form and the exponential term has infinitely many integer solutions generated by the terms of the recurrences. We describe a method for the complete factorization of the decomposable form. Both the form and its decomposition are explicitly given if $k=2$, and we present a typical example for $k=3$. The basic tool we use is the matrix method.
Kalman Gyory, Attila Petho, Laszlo Szalay
2023-08-26T08:55:24Z
http://arxiv.org/abs/2308.13817v1
###### Abstract

Consider \(k\geq 2\) distinct, linearly independent, homogeneous linear recurrences of order \(k\) satisfying the same recurrence relation. We prove that the recurrences are related to a decomposable form of degree \(k\), and there is a very broad general identity with a suitable exponential expression depending on the recurrences. This identity is a common and wide generalization of several known identities. Further, if the recurrences are integer sequences, then the diophantine equation associated to the decomposable form and the exponential term has infinitely many integer solutions generated by the terms of the recurrences. We describe a method for the complete factorization of the decomposable form. Both the form and its decomposition are explicitly given if \(k=2\), and we present a typical example for \(k=3\). The basic tool we use is the matrix method.

**Decomposable forms generated by linear recurrences**

K. Gyory1, A. Petho2, L. Szalay3

Footnote 1: University of Debrecen (Hungary)

Footnote 2: University of Debrecen (Hungary)

Footnote 3: University of Sopron (Hungary), and J. Selye University (Slovakia)

2010 Mathematics Subject Classification: 11B37, 11D61, 11D72.

Keywords: linear recurrence, decomposable form, general identity, diophantine equation, matrix method.

## 1 Introduction

Let \(k\geq 2\) be an integer. A sequence \((G_{n})\) of complex numbers is called a _linear recurrence of order \(k\)_ if there exist \(\gamma_{0},\ldots,\gamma_{k-1}\in\mathbb{C}\) such that

\[G_{n+k}=\gamma_{k-1}G_{n+k-1}+\cdots+\gamma_{0}G_{n} \tag{1}\]

holds for all \(n\geq 0\), and moreover \(k\) is minimal with this property. The polynomial

\[P(X)=X^{k}-\gamma_{k-1}X^{k-1}-\cdots-\gamma_{0} \tag{2}\]

is the _characteristic polynomial_ of \((G_{n})\). Note that the definition of the order yields \(\gamma_{0}\neq 0\). We will return to this question later, in Section 2. Dividing (1) by \(\gamma_{0}\neq 0\) and rearranging the equality, we obtain

\[G_{n}=-\frac{\gamma_{1}}{\gamma_{0}}G_{n+1}-\cdots-\frac{\gamma_{k-1}}{\gamma_{0}}G_{n+k-1}+\frac{1}{\gamma_{0}}G_{n+k}.\]

Now, performing the substitution \(n\mapsto-n-k\) gives

\[G_{-(n+k)}=-\frac{\gamma_{1}}{\gamma_{0}}G_{-(n+k-1)}-\cdots-\frac{\gamma_{k-1}}{\gamma_{0}}G_{-(n+1)}+\frac{1}{\gamma_{0}}G_{-n},\]

which is a linear recursion for the negative branch of \((G_{n})\). The results of the present paper hold for this two-sided infinite sequence, which we can also regard as a number theoretic function \(G_{n}\,:\,\mathbb{Z}\to\mathbb{C}\).

The literature contains a huge number of relations involving linear recurrences, in particular for \(k=2\). Identities with Fibonacci \((F_{n})\) or Lucas \((L_{n})\) numbers play a distinguished role among them. The Fibonacci Quarterly is devoted mainly to such works. The sequences \((F_{n})\) and \((L_{n})\) have the initial terms \(F_{0}=0,L_{0}=2,F_{1}=L_{1}=1\), and both satisfy the recursion

\[x_{n}=Ax_{n-1}+Bx_{n-2} \tag{3}\]

with \(A=B=1\). From this rich variety of identities we quote three, namely

\[L_{n}^{2}-5F_{n}^{2} = (L_{n}-\sqrt{5}F_{n})(L_{n}+\sqrt{5}F_{n})=4(-1)^{n}, \tag{4}\]
\[F_{n+1}^{2}-F_{n}F_{n+1}-F_{n}^{2} = \left(F_{n+1}-\frac{1+\sqrt{5}}{2}F_{n}\right)\left(F_{n+1}-\frac{1-\sqrt{5}}{2}F_{n}\right)=(-1)^{n}, \tag{5}\]
\[F_{n-1}F_{n+1}-F_{n}^{2} = (-1)^{n}. \tag{6}\]

All three belong to the folklore and can easily be generalized to arbitrary second order recurrences. Indeed, assume that \((G_{n})\) and \((\widehat{G}_{n})\) satisfy the recursion (3).
If their initial terms are \(G_{0},G_{1},\widehat{G}_{0}=2G_{1}-AG_{0},\widehat{G}_{1}=AG_{1}+2BG_{0}\), then we say that \((\widehat{G}_{n})\) is _associated_ to \((G_{n})\). Here \(A,B,G_{0},G_{1}\) denote complex numbers with the conditions \(B\neq 0\) and \(|G_{0}|+|G_{1}|\neq 0\). Assume that \(\alpha\) and \(\beta\) are the (not necessarily distinct) roots of the characteristic polynomial \(P(X)=X^{2}-AX-B\). Put \(D=A^{2}+4B\), which is the discriminant of \(P\), and let \(C_{G}=G_{1}^{2}-AG_{0}G_{1}-BG_{0}^{2}\). Then the identities

\[\widehat{G}_{n}^{2}-DG_{n}^{2} = (\widehat{G}_{n}-\sqrt{D}G_{n})(\widehat{G}_{n}+\sqrt{D}G_{n})=4C_{G}(-B)^{n}, \tag{7}\]
\[G_{n+1}^{2}-AG_{n}G_{n+1}-BG_{n}^{2} = (G_{n+1}-\alpha G_{n})\left(G_{n+1}-\beta G_{n}\right)=C_{G}(-B)^{n}, \tag{8}\]
\[G_{n-1}G_{n+1}-G_{n}^{2} = -C_{G}(-B)^{n-1} \tag{9}\]

hold, and it is a simple exercise to see that they are direct generalizations of (4)-(6), respectively. Note that the second one is equivalent to the third one by \((G_{n+1}-AG_{n})G_{n+1}-BG_{n}^{2}=BG_{n-1}G_{n+1}-BG_{n}^{2}\), and this phenomenon is obviously true for (5) and (6), too. Assuming \(|B|=1\) and using (7), Petho [10] characterized the polynomials whose values appear infinitely many times in \((G_{n})\).

One can reverse identity (5): the only positive integer solutions to the equation

\[x^{2}-xy-y^{2}=\pm 1\]

come from consecutive Fibonacci numbers \((F_{n},F_{n+1})\). J. P. Jones [5] gave a good overview of this equation and its relation to the solution of Hilbert's tenth problem.

For recurrences of higher order only a few, and rather complicated, identities are available. In particular, we do not know of any generalization of (7). Recently, Craveiro et al. [2] explored a nice generalization of the so-called Cassini identity (6) for recurrences of arbitrary order \(k\). They also investigated analytical and combinatorial properties of their result. Corollary 1 of Theorem 1 of this paper provides a new contribution to this question; in fact, we consider the problem with an arbitrary set of initial values, apart from singular cases. Corollary 1, via Theorem 1, makes it possible to determine a Cassini-like identity if the basic recurrence of order \(k\) is given. However, such an identity is expected to be rather involved.

The main result of this article is a common and wide extension of (7), (8), and other identities to \(k\geq 2\) linear recurrences of order \(k\) satisfying the same recursive rule, including their connection with decomposable forms and diophantine equations.

## 2 New results

We shall prove some general results. Note that the basic field we work in is the set \(\mathbb{C}\) of complex numbers, but the machinery works over any field.

**Theorem 1**.: _Let \((G_{n}^{(j)}),j=1,\ldots,k\) be linear recursive sequences of complex numbers of order \(k\) with the same characteristic polynomial (2) having constant term \(-\gamma_{0}\neq 0\). If the sequences are \(\mathbb{C}\)-linearly independent, then there exists a homogeneous polynomial \(F\in\mathbb{C}[X_{1},X_{2},\ldots,X_{k}]\) of degree \(k\) such that_

\[F(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)})=((-1)^{k+1}\gamma_{0})^{n}\]

_holds for all \(n\in\mathbb{Z}\)._

A homogeneous polynomial \(Q\in\mathbb{C}[X_{1},X_{2},\ldots,X_{k}]\) is called _decomposable_ if it splits completely into linear factors. Hence a decomposable form is necessarily homogeneous, but for \(k\geq 3\) the converse is not true in general. The next theorem is a remarkable complement to Theorem 1.
**Theorem 2**.: _Under the assumptions of Theorem 1 the homogeneous polynomial \(F\) is decomposable._ Clearly, Theorem 2 is a continuation of Theorem 1, and we could have presented them together. However their proofs differ significantly, which explains their separation. We emphasize that the proofs are constructive, and following their arguments one can compute the polynomial \(F\) and its decomposition into linear factors, too. Unfortunately, \(F\) has no nice general form if \(k\geq 3\), so we will omit its explicit constitution. On the other hand, \(F\) is given precisely when \(k=2\) (see Theorem 5 and the description after). Furthermore, we will show some representative examples in Section 6. A simple consequence of the main theorems is the following generalization of (8). **Corollary 1**.: _Let \((G_{n})\) be a linear recursive sequence of complex numbers of order \(k\) satisfying (1). If the vectors \(\bar{g}_{j}=(G_{j},G_{j+1},\ldots,G_{j+k-1}),j=0,\ldots,k-1\) are \(\mathbb{C}\)-linearly independent, then there exists a decomposable form \(F\in\mathbb{C}[X_{1},X_{2},\ldots,X_{k}]\) of degree \(k\) such that_ \[F(G_{n},G_{n+1},\ldots,G_{n+k-1})=((-1)^{k+1}\gamma_{0})^{n}\] _holds for all \(n\in\mathbb{Z}\)._ The linear recursive sequences with initial values \(G_{0}=\cdots=G_{k-2}=0,G_{k-1}=1\) satisfy always the assumption of this corollary, hence the statement too. The \(k\)-generalized Fibonacci sequences \((F_{n}^{(k)})\) (or, in other words, \(k\)-step Fibonacci numbers), which are defined by the initial values \(F_{0}^{(k)}=\cdots=F_{k-2}^{(k)}=0,F_{k-1}^{(k)}=1\) and by the recursion \(F_{n+k}^{(k)}=F_{n+k-1}^{(k)}+\cdots+F_{n}^{(k)}\) are important examples. Here the upper index \((k)\) in \(F_{n}^{(k)}\) means traditionally the parameter \(k\), while in Theorem 1 the upper index \((j)\) denotes the \(j\)th from the \(k\) given sequences. Clearly, the case \(k=2\) is the Fibonacci sequence. In fact, sequence \((G_{n})\) is a complex valued number theoretical function. Another direct consequence of Theorem 1 is as follows. The conditions of Theorem 1 are valid here, too. **Corollary 2**.: _Let \((G_{n}^{(j)}),j=1,\ldots,k\) be linear recursive sequences of complex numbers of order \(k\) with the same characteristic polynomial (2). Then they and the sequence \((\gamma_{0}^{n})\) are algebraically dependent._ It is possible to extend \((G_{n})\) to a complex function \(G(z)\) such that \(G_{n}=G(n)\) for all \(n\in\mathbb{Z}\). Indeed, assume that \(P(X)\) in (2) has the factorization \[P(X)=(X-\alpha_{1})^{m_{1}}(X-\alpha_{2})^{m_{2}}\cdots(X-\alpha_{h})^{m_{h}},\] where \(\alpha_{1},\ldots,\alpha_{h}\) denote pairwise distinct complex numbers and \(m_{j}\), \(j=1,\ldots,h\) are positive integers. Then, see e.g. [12], there exist polynomials \(g_{j}\in\mathbb{C}[X]\) of degree at most \(m_{j}-1\) such that \[G_{n}=g_{1}(n)\alpha_{1}^{n}+g_{2}(n)\alpha_{2}^{n}+\cdots+g_{h}(n)\alpha_{h}^ {n}\] holds for all \(n\in\mathbb{Z}\). Hence \(G(z)=g_{1}(z)\alpha_{1}^{z}+g_{2}(z)\alpha_{2}^{z}+\cdots+g_{h}(z)\alpha_{h}^ {z}\) is a complex function with \(G(n)=G_{n}\) for all \(n\in\mathbb{Z}\). For example, the extensions of the Fibonacci and the Lucas sequences are the complex functions \[F(z)=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{z}-\left( \frac{1-\sqrt{5}}{2}\right)^{z}\right),\quad L(z)=\left(\frac{1+\sqrt{5}}{2} \right)^{z}+\left(\frac{1-\sqrt{5}}{2}\right)^{z},\] respectively. 
One can easily verify that identities (4)-(6) remain true if we replace \(n\), \(F_{n}\), \(L_{n}\), \((-1)^{n}\) by \(z\), \(F(z)\), \(L(z)\), \((-1)^{z}\), respectively. An interpretation of the generalized identities is the fact that the corresponding complex functions are algebraically dependent. It is well known, see e.g. [7], that a function \(f(z)\) satisfies the linear differential equation with characteristic polynomial \(P(X)\) if and only if it is identical with one of the above defined \(G(z)\). Our next theorem is a generalization of Corollary 2. **Theorem 3**.: _Let \(P(X)\in\mathbb{C}[X]\) be a polynomial of degree \(k\) with simple roots, and assume \(P(0)\neq 0\). Let \(k_{0}\geq k\) and denote \(G_{j}(z)\), \(1\leq j\leq k_{0}\) pairwise different solutions of a homogeneous linear differential equation with characteristic polynomial \(P(X)\). Then \(G_{j}(z)\), \(1\leq j\leq k_{0}\) and the function \(((-1)^{k}P(0))^{z}\) are algebraically dependent._ As we noted Theorem 2 and its constructive proof have a remarkable consequence in the theory of diophantine equations. This is presented as Theorem 4 in Section 3, where it is more suitable to formulate the precise statement. Indeed, if \((G_{n}^{(j)})\)'s are integer sequences, then it makes possible to have infinitely many integer solutions to the polynomial-exponential diophantine equation \[\tilde{F}(x_{1},x_{2},\ldots,x_{k})=pq^{n} \tag{10}\] in the integers \(x_{1},x_{2},\ldots,x_{k}\) and \(n\geq 0\), where \(\tilde{F}\) is a decomposable form of degree \(k\) depending on the parameters of the sequences, further \(p\), \(q\) are suitable integers (see (24) below). If \(\tilde{F}\) is irreducible over \(\mathbb{Q}[X_{1},\ldots,X_{k}]\), then we may assume that \(\tilde{F}\) is, up to a constant factor, a full norm form. It follows from a result of Borevich and Safarevich [1] that if the splitting field of \(\tilde{F}\) differs from \(\mathbb{Q}\) and from the imaginary quadratic number fields, then (10) has infinitely many solutions for \(q=1\) and for some integer \(p\). More generally, by a celebrated theorem of W. M. Schmidt [11] on norm form equations the set of solutions is a union of a finite set and finitely many families of solutions. For a further generalization for arbitrary decomposable form equations see Gyory [4]. The families mentioned are sums of product of powers of appropriate algebraic numbers. Fixing all but one exponent and let run the remaining exponents through non-negative integers we see that each family includes linear recursive sequences. Our result shows that (10) may have infinitely many solutions in the case when \(\tilde{F}\) is reducible over \(\mathbb{Q}[X_{1},\ldots,X_{k}]\), too. In our paper, we apply linear algebraic approach. We were also motivated to contribute to the development of so-called matrix method often used in the theory of linear recurrences. At the early 80's H. W. Gould [3] presented a survey on the \(Q\)-matrices, and he refers to R. Simson who first gave the formula (6). This identity, together with its generalization (9) is an easy consequence of our Theorem 5 in Section 6 with the notation \(H_{n}=G_{n+1}\) (for \(H_{n}\) see again Theorem 5). Theorem 5 is a common extension of identities (7)-(9), and provides the background for the case of \(k\geq 2\) linear recurrences of order \(k\) in general appearing in Theorem 1. We mention that recently Craviero et al. 
[2] have given a generalization of Cassini identity for the \(k\)-generalized Fibonacci numbers with specific initial vector set. The basic approach of [2] coincides with the idea of Lemma 2.1 of [9], and we exploit the advantages of this argument in the present paper. The paper is organized as follows. In Section 3, we give two proofs for Theorem 1. In the next section, we prove Theorem 2, while Section 5 is devoted to the proof of corollaries 1 and 2 and of Theorem 3. Section 6 specializes the results of Theorem 1 and 2 to the cases \(k=2\) and \(k=3\). In Subsection 6.1, we present explicit formula in full generality if \(k=2\), while in Subsection 6.2, we compute the appropriate formula only for three given third order recursive sequence. Other examples for larger \(k\) values might be easily presented by following the method of the proofs. ## 3 The homogeneous polynomials Before starting the first proof of Theorem 1 we introduce some notation and summarize well known facts. Recall that \(k\geq 2\) is an integer, furthermore \(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1}\) denote arbitrary complex numbers. Consider the set of recurrent sequences \[\Gamma^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}_{k}=\left\{(x_{n})_{n\in \mathbb{Z}}\in\mathbb{C}^{\infty}\mid x_{n}=\gamma_{k-1}x_{n-1}+\gamma_{k-2}x_{ n-2}+\cdots+\gamma_{0}x_{n-k}\right\}. \tag{11}\] Clearly, sequences (11) satisfy a common recurrence rule, but they differ in their initial values. They constitute a \(\mathbb{C}\)-vector space with respect to the coordinate-wise addition and multiplication by scalar if and only if \(\gamma_{0}\neq 0\), and in this case the dimension of the vector space is \(k\). To be able to apply linear algebraic tools we assume \(\gamma_{0}\neq 0\) in the sequel. Throughout this paper we deal with vector spaces over the field of complex numbers \(\mathbb{C}\), therefore we do not mention always the ground field. ### First proof of Theorem 1 Assume that the recurrences \[(G^{(1)}_{n}),(G^{(2)}_{n}),\ldots,(G^{(k)}_{n})\in\Gamma^{(\gamma_{0},\gamma_ {1},\ldots,\gamma_{k-1})}_{k} \tag{12}\] are \(\mathbb{C}\)-linearly independent. Then, equivalently, the vectors \[\bar{g}_{0}=\left[\begin{array}{c}G^{(1)}_{0}\\ \vdots\\ G^{(k)}_{0}\end{array}\right],\;\bar{g}_{1}=\left[\begin{array}{c}G^{(1)}_{1 }\\ \vdots\\ G^{(k)}_{1}\end{array}\right],\ldots,\;\bar{g}_{k-1}=\left[\begin{array}{c}G^ {(1)}_{k-1}\\ \vdots\\ G^{(k)}_{k-1}\end{array}\right] \tag{13}\] are also linearly independent. Define the matrices \({\bf G},{\bf G}^{\star}\in\mathbb{C}^{k\times k}\) as follows. The column vectors of \({\bf G}\) are \(\bar{g}_{0},\bar{g}_{1},\ldots,\bar{g}_{k-1}\), respectively, while \(\bar{g}_{1},\bar{g}_{2},\ldots,\bar{g}_{k}\) admit the column vectors of \({\bf G}^{\star}\) in this order. The \(k\) recurrences in (12), together with their initial values build up the vector recurrence \[\bar{g}_{n}=\gamma_{k-1}\bar{g}_{n-1}+\gamma_{k-2}\bar{g}_{n-2}+\cdots+\gamma_ {0}\bar{g}_{n-k},\quad n\geq k \tag{14}\] with initial vectors (13). In particular, (14) provides the last column vector \(\bar{g}_{k}\) of \({\bf G}^{\star}\). The linear independence of the column vectors of \({\bf G}\) guarantees that \[\Delta=\det({\bf G})\neq 0. \tag{15}\] Let \(\bar{r}_{i}\) denote the \(i\)th column vector of the transposition \({\bf G}^{\top}\) of \({\bf G}\) for each \(i=1,2,\ldots,k\). 
That is, \(\bar{r}_{i}=[G^{(i)}_{0},G^{(i)}_{1},\ldots,G^{(i)}_{k-1}]^{\top}\); the entries are exactly the initial values of the \(i\)th sequence we fixed in (12). Clearly, the vectors

\[\bar{r}_{1},\bar{r}_{2},\ldots,\bar{r}_{k} \tag{16}\]

form a basis of the vector space \(\mathbb{C}^{k}\). Put

\[{\bf M}={\bf G}^{\star}{\bf G}^{-1}=\frac{1}{\Delta}\cdot{\bf G}^{\star}\cdot\mbox{adj}({\bf G}).\]

The extension of Lemma 2.1 of [9] from \(\mathbb{R}\) to \(\mathbb{C}\) shows that the vector sequence \((\bar{g}_{n})\) generated by the initial vectors (13) and by the vector recurrence (14) satisfies

\[\bar{g}_{n+1}={\bf M}\bar{g}_{n},\qquad n\geq 0. \tag{17}\]

The proof of this lemma even shows that the characteristic polynomial of the matrix \({\bf M}\) is

\[k_{\bf M}(x)=\det(x{\bf I}-{\bf M})=x^{k}-\gamma_{k-1}x^{k-1}-\cdots-\gamma_{1}x-\gamma_{0}, \tag{18}\]

which is also the characteristic polynomial of any recurrence of \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\) (here \({\bf I}\) denotes the \(k\times k\) unit matrix). Thus \(k_{\bf M}(0)=\det(-{\bf M})=-\gamma_{0}\), and then \(\det({\bf M})=(-1)^{k}(-\gamma_{0})\), which is denoted in the sequel by \(\delta\). Hence

\[\det({\bf M}^{n})=\left(\det({\bf M})\right)^{n}=\delta^{n}. \tag{19}\]

This observation later gives the right-hand side of the principal identity of this article (with a suitable homogeneous polynomial of degree \(k\) on the left-hand side). Now we turn our attention to the matrices

\[{\bf M}^{n}=\left[m^{(n)}_{i,j}\right]_{i,j=1,2,\ldots,k},\qquad n\in\mathbb{N}.\]

Since \({\bf M}^{n}=\gamma_{k-1}{\bf M}^{n-1}+\gamma_{k-2}{\bf M}^{n-2}+\cdots+\gamma_{0}{\bf M}^{n-k}\) holds for \(n\geq k\), the same recurrence rule is valid for each entry-wise sequence \((m^{(n)}_{i,j})\) of the matrix sequence \(({\bf M}^{n})\). That is,

\[m^{(n)}_{i,j}=\gamma_{k-1}m^{(n-1)}_{i,j}+\gamma_{k-2}m^{(n-2)}_{i,j}+\cdots+\gamma_{0}m^{(n-k)}_{i,j}.\]

Such a sequence is determined by the initial values \(m^{(0)}_{i,j},m^{(1)}_{i,j},\ldots,m^{(k-1)}_{i,j}\). Collect them into the initial column vector \(\overline{m}_{i,j}\in\mathbb{C}^{k}\) (see (20)), which can be given as a linear combination of the basis vectors (16). More precisely, there exist uniquely determined coordinates \(c^{(u)}_{i,j}\in\mathbb{C}\), \(u=1,2,\ldots,k\), such that

\[\overline{m}_{i,j}=\left[\begin{array}{c}m^{(0)}_{i,j}\\ m^{(1)}_{i,j}\\ \vdots\\ m^{(k-1)}_{i,j}\end{array}\right]=\sum_{u=1}^{k}c^{(u)}_{i,j}\bar{r}_{u}. \tag{20}\]

This is a system of \(k\) linear equations in the \(k\) unknowns \(c^{(1)}_{i,j},c^{(2)}_{i,j},\ldots,c^{(k)}_{i,j}\) (for fixed \(i,j\)), and the system can be solved, for instance, by Cramer's rule. Assume that \({\bf G}_{u}\) is the matrix derived by replacing the \(u\)th column vector of \({\bf G}^{\top}\) by the vector \(\overline{m}_{i,j}\). Then, using \(\det({\bf G}^{\top})=\det({\bf G})=\Delta\), clearly

\[c^{(u)}_{i,j}=\frac{\det({\bf G}_{u})}{\Delta}. \tag{21}\]

Since the initial values determine a complete sequence of \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\), the coefficients (21) appearing in (20) carry over to the whole sequence \((m^{(n)}_{i,j})\).
Thus the main consequence of the previous arguments is the equality \[m^{(n)}_{i,j}=\sum_{u=1}^{k}c^{(u)}_{i,j}G^{(u)}_{n}.\] In this manner, we are able to represent any entry of matrix \(\mathbf{M}^{n}\) as a linear combination of the \(n\)th terms of the linear recurrences \((G_{n}^{(1)}),(G_{n}^{(2)}),\ldots,(G_{n}^{(k)})\in\Gamma_{k}^{(\gamma_{0}, \gamma_{1},\ldots,\gamma_{k-1})}\). Subsequently, the determinant of \(\mathbf{M}^{n}\) is a homogeneous polynomial of degree \(k\) in \(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)}\). Denote this form by \(F\). Combining it with (19) we obtain \[F(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)})=\delta^{n}, \tag{22}\] and the proof is complete. \(\Box\) Assume now that the coefficients \(\gamma_{i}\), and the initial terms of the sequences \(G_{n}^{(j)}\) are integers. Then \((G_{n}^{(j)})\) is a sequence of integers (\(j\) is fixed). Multiply both sides of (22) by \(\Delta^{k}\), because (21) is now a rational number with denominator dividing \(\Delta\). Then \(\tilde{F}=\Delta^{k}F\) has integer coefficients. Consequently, the polynomial-exponential diophantine equation \[\tilde{F}(x_{1},x_{2},\ldots,x_{k})=\Delta^{k}\delta^{n} \tag{23}\] possesses infinitely many integer solutions in integers \(x_{1},x_{2},\ldots,x_{k}\) and \(n\). Now we record the results in **Theorem 4**.: _Given the recursive sequences (12) of integers with initial values (13) such that (15) holds. Then for any \(n\geq 0\)_ \[(x_{1},x_{2},\ldots,x_{k})=(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)})\] _is the solution to the diophantine equation_ \[\tilde{F}(x_{1},x_{2},\ldots,x_{k})=\Delta^{k}\delta^{n}, \tag{24}\] _where the coefficients of the homogeneous form \(\tilde{F}\) depend on the parameters of the sequences._ Proof.: The proof is given before the enunciation of the theorem. Note that the proof is constructive in the sense that following the method we can compute \(\tilde{F}\), \(\Delta\), and \(\delta\) if the initial values and the recurrence rule is fixed. This will be illustrated later, in Section 6 for binary recurrences in general, and for a triple of ternary recurrences. ### Alternative proof of Theorem 1 We provide here an alternative proof for the existence of the form \(F\) on the left-hand side of (22). The notation is taken from the preceding parts of Section 3. Observe first that \[\mathbf{G}^{*}=\mathbf{GT}\] with the matrix \[\mathbf{T}=\left[\begin{array}{ccccc}0&0&\cdots&0&\gamma_{0}\\ 1&0&\cdots&0&\gamma_{1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&\gamma_{k-2}\\ 0&0&\cdots&1&\gamma_{k-1}\end{array}\right].\] Thus we have \[\mathbf{M}=\mathbf{G}^{*}\mathbf{G}^{-1}=\mathbf{GTG}^{-1}.\] This matrix equality immediately implies * \(\det({\bf M})=\det({\bf T})=(-1)^{k-1}\gamma_{0}\), and * \({\bf M}^{n}={\bf GT}^{n}{\bf G}^{-1}\) for \(n\geq 0\). Here \({\bf GT}^{n}\) is a matrix with the column vectors \(\bar{g}_{n},\bar{g}_{n+1},\ldots,\bar{g}_{n+k-1}\). On the other hand, from (17) we know \(\bar{g}_{n+j}={\bf M}^{j}\bar{g}_{n}\). Hence all entries of \({\bf GT}^{n}\) are linear combinations of the terms \(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)}\), where the coefficients are the elements of \({\bf M}^{j},j=0,\ldots,k-1\), and they are independent from \(n\). Subsequently, the determinant of \({\bf GT}^{n}\) is a signed sum of the products of \(k\) linear forms in \(G_{n}^{(1)},G_{n}^{(2)},\ldots,G_{n}^{(k)}\) i.e. a homogeneous form of degree \(k\). 
Multiplying this by \(\det({\bf G}^{-1})\), the determinant of a constant matrix, we obtain that \(\det({\bf M}^{n})\) is also a homogeneous form of degree \(k\). \(\Box\)

## 4 Decomposability of F

This section is devoted to studying and proving the decomposability of the homogeneous polynomial \(F\) provided in Section 3. Recall that \(F(x_{1},x_{2},\ldots,x_{k})=\widetilde{F}(x_{1},x_{2},\ldots,x_{k})/\Delta^{k}\) (see (22) and (23)).

_Proof of Theorem 2._ The proof consists of two main parts. In the first step, we show that a specific homogeneous polynomial, say \(F_{S}\), is decomposable if the initial \(k\) sequences from \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\) are chosen advantageously. Then, in the second part, we find a connection between these favorable sequences and an arbitrary collection of \(k\) sequences.

**Part 1.** Recall that the characteristic polynomial \(k_{\bf M}(x)\) of the family \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\) of homogeneous linear recurrences (11) we investigate is (18). Assume that it has the decomposition

\[k_{\bf M}(x)=(x-\alpha_{1})^{m_{1}}(x-\alpha_{2})^{m_{2}}\cdots(x-\alpha_{h})^{m_{h}} \tag{25}\]

into linear factors, where \(\alpha_{i}\in\mathbb{C}\) (\(i=1,2,\ldots,h\)) are distinct, and \(m_{1}+m_{2}+\cdots+m_{h}=k\). For each pair \((i,j)\), where \(i\in\{1,2,\ldots,h\}\) and \(j\in\{0,1,\ldots,m_{i}-1\}\), consider the sequences

\[S_{n}^{(i,j)}=n^{j}\alpha_{i}^{n},\quad(n\geq 0). \tag{26}\]

Note that the range for \(j\) depends on the value of \(i\), but we do not indicate this fact in the notation. Sequences (26) belong to \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\); this fact is based on the following two observations, or one may simply refer to the theory of difference equations, see e.g. the classical book of Milne-Thomson [8, Chapter XIII]. Firstly, if the multiplicity of an \(\alpha_{i}\) is \(m_{i}\), then \(\alpha_{i}\) is also a root of the derivatives \(k_{\bf M}^{(j)}(x)\) for \(j=0,1,\ldots,m_{i}-1\). Secondly, from the theory of Stirling numbers of the second kind we know that \(n^{j}=\sum_{v=0}^{j}\left\{{j\atop v}\right\}(n)_{v}\), where \(\left\{{j\atop v}\right\}\) denotes a Stirling number of the second kind, and \((n)_{v}=n(n-1)\cdots(n-v+1)\) is a falling factorial. Another important argument is that the sequences in (26) are \(\mathbb{C}\)-linearly independent, consequently they form a basis of \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\). This follows again from [8, Chapter XIII]. (A similar result is given in [6].)

Analogously to \(\mathbf{G}\), we define the matrix \(\mathbf{G}_{S}\) by the initial values of the sequences \(S_{n}^{(i,j)}\) such that the first row contains the initial values of \(S_{n}^{(1,0)}\), etc. The matrix \(\mathbf{G}_{S}\) can be split into horizontal stripes such that stripe \(i\in\{1,2,\ldots,h\}\) contains \(m_{i}\) rows. The enumeration of stripes begins at the top of the matrix, and the \((j+1)\)th row of stripe \(i\) contains the initial values \(S_{0}^{(i,j)},S_{1}^{(i,j)},\ldots,S_{k-1}^{(i,j)}\); that is, its \(\ell\)th entry is \((\ell-1)^{j}\alpha_{i}^{\ell-1}\). Analogously to \(\mathbf{G}^{\star}\), let \(\mathbf{G}_{S}^{\star}\) be the matrix whose rows contain the shifted values \(S_{1}^{(i,j)},S_{2}^{(i,j)},\ldots,S_{k}^{(i,j)}\). We claim that \(\mathbf{G}_{S}^{\star}=\mathbf{B}\mathbf{G}_{S}\) holds with a block diagonal matrix \(\mathbf{B}\) whose block belonging to \(\alpha_{i}\) is lower triangular with entry \(\binom{u}{v}\alpha_{i}\) in the row of index \(u\) and the column of index \(v\) (\(0\leq v\leq u\leq m_{i}-1\)). To check this, consider the dot product of the row of \(\mathbf{B}\) indexed by the pair \((i,u)\) and the \(\ell\)th column of \(\mathbf{G}_{S}\), where \(\ell=1,2,\ldots,k\).
This is given in details by \[\left[\begin{array}{ccccccccc}0&\cdots&0&\binom{u}{0}\alpha_{i}&\binom{u}{1} \alpha_{i}&\cdots&\binom{u}{u}\alpha_{i}&0&\cdots&0\end{array}\right]\cdot\left[ \begin{array}{ccccccccc}\vdots\\ \alpha_{i}^{\ell-1}\\ (\ell-1)\alpha_{i}^{\ell-1}\\ (\ell-1)^{u}\alpha_{i}^{\ell-1}\\ \vdots\\ (\ell-1)^{m_{i}-1}\alpha_{i}^{\ell-1}\\ \vdots\end{array}\right].\] Hence the dot product equals \[\binom{u}{0}\alpha_{i}^{\ell}+\binom{u}{1}(\ell-1)\alpha_{i}^{\ell}+\cdots+ \binom{u}{u}(\ell-1)^{u}\alpha_{i}^{\ell}=((l-1)+1)^{u}\,\alpha_{i}^{\ell}= \ell^{u}\alpha_{i}^{\ell}=S_{\ell}^{(i,u)}.\] Recall that \(S_{\ell}^{(i,u)}\) is the general term of the matrix \(\mathbf{G}_{S}^{*}\) such that it is the element of the \((u+1)\)th row of stripe \(i\) in the column \(\ell\)\((\ell=1,2,\ldots,k)\). Subsequently, we have \[\mathbf{M}_{S}=\mathbf{G}_{S}^{*}\mathbf{G}_{S}^{-1}=\left(\mathbf{B}\mathbf{G }_{S}\right)\mathbf{G}_{S}^{-1}=\mathbf{B},\] and then \(\mathbf{M}_{S}^{n}=\mathbf{B}^{n}\) follows. We know that \(\mathbf{B}\) is a lower diagonal matrix. Thus \(\det(\mathbf{B})\) is the product of the elements lying in the main diagonal, i.e. \[\det(\mathbf{B})=\alpha_{1}^{m_{1}}\alpha_{2}^{m_{2}}\cdots\alpha_{h}^{m_{h}}.\] Finally, \[\det\left(\mathbf{M}_{S}^{n}\right)=\left(\det(\mathbf{B})\right)^{n}=\alpha_ {1}^{m_{1}n}\alpha_{2}^{m_{2}n}\cdots\alpha_{h}^{m_{h}n}=\left(S_{n}^{(1,0)} \right)^{m_{1}}\left(S_{n}^{(2,0)}\right)^{m_{2}}\cdots\left(S_{n}^{(h,0)} \right)^{m_{h}}.\] So we have found that \(\det\left(\mathbf{M}_{S}^{n}\right)\) is a corresponding value of the decomposable form \[F_{S}(y_{1},y_{2},\ldots,y_{h})=y_{1}^{m_{1}}y_{2}^{m_{2}}\cdots y_{h}^{m_{h}}\] at the point \(\left((S_{n}^{(1,0)}),(S_{n}^{(2,0)}),\cdots,(S_{n}^{(h,0)})\right)\). Thus the assertion of the theorem holds for this particular choice of the recurrences. **Part 2.** Recall that sequences (12) form a basis in \(\Gamma_{k}^{(\gamma_{0},\gamma_{1},\ldots,\gamma_{k-1})}\) because \(\Delta\neq 0\). Hence there uniquely exists a matrix \(\mathbf{A}=[a_{i,j}]\in\mathbb{C}^{h\times k}\) such that \[\left[\begin{array}{ccccc}S_{n}^{(1,0)}&S_{n}^{(2,0)}&\cdots&S_{n}^{(h,0)} \end{array}\right]^{\top}=\mathbf{A}\left[\begin{array}{ccccc}G_{n}^{(1)}&G _{n}^{(2)}&\cdots&G_{n}^{(k)}\end{array}\right]^{\top}. \tag{27}\] Consider now the substitution \(\bar{y}=[y_{1},y_{2},\ldots,y_{h}]^{\top}=\mathbf{A}[x_{1},x_{2},\ldots,x_{k}] ^{T}=\mathbf{A}\bar{x}\). Denoting by \(\bar{a}_{i}\) the \(i\)th row vector of \(\mathbf{A}\) we compute the dot product \[y_{i}=\bar{a}_{i}[x_{1},x_{2},\ldots,x_{k}]^{\top}\] for every \(i=1,2,\ldots,h\). Now we can define \(F(x_{1},\ldots,x_{k})=F(\bar{x})\) as follows: \[F_{S}(\bar{y})=y_{1}^{m_{1}}y_{2}^{m_{2}}\cdots y_{h}^{m_{h}}=\prod_{i=1}^{h} \left(\bar{a}_{i}[x_{1},x_{2}\ldots,x_{k}]^{\top}\right)^{m_{i}}=F(\bar{x}).\] Obviously, \(F\) is a decomposable form of degree \(k\) with the property \[F(G_{n}^{(1)},\ldots,G_{n}^{(k)})=((-1)^{k}k_{\bf M}(0))^{n}=\delta^{n},\quad(n =0,1,\ldots).\] Thus the proof is complete. \(\Box\) **Remark 1**.: _If every zero of the characteristic polynomial \(k_{\bf M}(x)\) is simple (i.e. \(h=k\)), then the matrix \({\bf G}_{S}\) is simplified to_ \[{\bf G}_{S}=\left[\begin{array}{cccc}1&\alpha_{1}&\cdots&\alpha_{1}^{k-1}\\ 1&\alpha_{2}&\cdots&\alpha_{2}^{k-1}\\ \vdots&\vdots&\ddots&\vdots\\ 1&\alpha_{k}&\cdots&\alpha_{k}^{k-1}\end{array}\right]={\bf AG}\] _(the second equality is implied by (27)). 
Consequently, \({\bf A}={\bf G}_{S}{\bf G}^{-1}\), and then_

\[\bar{y}={\bf G}_{S}{\bf G}^{-1}\bar{x}.\]

_Finally, we obtain \(F_{S}(\bar{y})=F_{S}({\bf G}_{S}{\bf G}^{-1}\bar{x})=F(\bar{x})\)._

At the end of this section, we show a general example to demonstrate the power of our results.

**Example**.: Consider the sequences in (12) such that the characteristic polynomial has the factorization (25). Assume that we have the specific initial vectors

\[\bar{g}_{0}=\left[\begin{array}{c}1\\ 0\\ \vdots\\ 0\end{array}\right],\;\bar{g}_{1}=\left[\begin{array}{c}0\\ 1\\ \vdots\\ 0\end{array}\right],\ldots,\;\bar{g}_{k-1}=\left[\begin{array}{c}0\\ 0\\ \vdots\\ 1\end{array}\right]. \tag{28}\]

Using the advantageous properties of this orthonormal basis, we can easily show that

\[S_{n}^{(j,0)}=1\cdot G_{n}^{(1)}+\alpha_{j}\cdot G_{n}^{(2)}+\cdots+\alpha_{j}^{k-1}\cdot G_{n}^{(k)},\quad(j=1,2,\ldots,h).\]

Consequently,

\[y_{j}=x_{1}+\alpha_{j}x_{2}+\cdots+\alpha_{j}^{k-1}x_{k},\quad(j=1,2,\ldots,h),\]

and then

\[F(x_{1},x_{2},\ldots,x_{k})=\prod_{j=1}^{h}(x_{1}+\alpha_{j}x_{2}+\cdots+\alpha_{j}^{k-1}x_{k})^{m_{j}}\]

follows. Hence the complete factorization of the form \(F\) is determined. Note that the simplicity depends on the features of the orthonormal basis (28).

## 5 Proof of the corollaries and of Theorem 3

_Proof of Corollary 1._ The entries of the vectors \(\bar{g}_{j}\), \(j=0,\ldots,k-1\) are the initial values of the sequences \((G_{n+j})\), \(j=0,\ldots,k-1\). At the beginning of the proof of Theorem 1 we pointed out that the linear independence of the vectors \(\bar{g}_{j}\) and the linear independence of the sequences \((G_{n+j})\) are equivalent. Thus the sequences \((G_{n+j})\), \(j=0,\ldots,k-1\) are linearly independent, and Theorem 1 implies the existence of a homogeneous form \(F\), while Theorem 2 justifies the decomposability of \(F\). \(\Box\)

_Proof of Corollary 2._ If the sequences \((G_{n}^{(j)})\), \(j=1,\ldots,k\) are linearly dependent, then after enlarging their set with the sequence \((\gamma_{0}^{n})\) they remain linearly dependent, and this is a very special algebraic dependence. Otherwise, if \((G_{n}^{(j)}),j=1,\ldots,k\) are linearly independent, then Theorem 1, more precisely equation (22), verifies the algebraic dependence. \(\Box\)

_Proof of Theorem 3._ Assume that \(P(X)=z_{k}X^{k}+z_{k-1}X^{k-1}+\ldots+z_{0}\), where \(z_{0}\neq 0\). The differential equation with characteristic polynomial \(P(X)\) has the form

\[z_{k}f(z)^{(k)}+z_{k-1}f(z)^{(k-1)}+\ldots+z_{0}f(z)=0. \tag{29}\]

Here \(f^{(\ell)}\) denotes the \(\ell\)th derivative of \(f\). The pairwise distinct zeros of \(P(X)\) are denoted by \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\). The function \(f(z)\) is a solution of (29) if and only if it has the form

\[f(z)=f_{1}\alpha_{1}^{z}+f_{2}\alpha_{2}^{z}+\ldots+f_{k}\alpha_{k}^{z},\]

with complex numbers \(f_{1},f_{2},\ldots,f_{k}\). The set of solutions of (29) forms a \(\mathbb{C}\)-vector space with respect to the addition of functions and multiplication by complex numbers. We denote it by \(S(P)\). The functions \(\alpha_{1}^{z},\alpha_{2}^{z},\ldots,\alpha_{k}^{z}\) are linearly independent, thus the dimension of \(S(P)\) is \(k\). Assume that \(G_{j}(z),j\leq k_{0}\) are pairwise different solutions of (29). If they are \(\mathbb{C}\)-linearly dependent, which always holds when \(k_{0}>k\), then the same is true if we enlarge their set with the function \(((-1)^{k}P(0))^{z}\).
It remains to examine the case when \(k_{0}=k\) and the functions \(G_{j}(z),j\leq k_{0}\) are linearly independent. Then they form a basis of \(S(P)\). The functions \(\alpha_{1}^{z},\alpha_{2}^{z},\ldots,\alpha_{k}^{z}\) belong to \(S(P)\), thus there exist complex numbers \(a_{ij},1\leq i,j\leq k\) such that

\[\alpha_{i}^{z}=\sum_{j=1}^{k}a_{ij}G_{j}(z). \tag{30}\]

On the other hand,

\[\alpha_{1}^{z}\alpha_{2}^{z}\cdots\alpha_{k}^{z}=(\alpha_{1}\alpha_{2}\cdots\alpha_{k})^{z}=((-1)^{k}P(0))^{z}.\]

Inserting the linear relations (30) here, we obtain the statement. \(\Box\)

## 6 Binary and ternary recurrences

### Binary recurrences, a general identity

This subsection is devoted to presenting the general identity (33) below for two binary recurrences satisfying the same recurrence relation. This is a special case of (22). On the other hand, it provides a common generalization of (7)-(9). At the end of this subsection a few examples illustrate (33). In the computations, we follow the arguments of the previous sections.

Assume \(k=2\). For simplicity suppose that the two recurrent sequences are \((G_{n})\) and \((H_{n})\), their initial values are \(G_{0},G_{1}\) and \(H_{0},H_{1}\), respectively, and put \(\gamma_{1}=A\), \(\gamma_{0}=B\). Then the two sequences above belong to

\[\Gamma_{2}^{(B,A)}=\left\{(x_{n})_{n\in\mathbb{Z}}\in\mathbb{C}^{\infty}\mid x_{n}=Ax_{n-1}+Bx_{n-2},\ n\geq 2\right\}.\]

Now

\[{\bf G}=\left[\begin{array}{cc}G_{0}&G_{1}\\ H_{0}&H_{1}\end{array}\right],\qquad{\bf G}^{\star}=\left[\begin{array}{cc}G_{1}&AG_{1}+BG_{0}\\ H_{1}&AH_{1}+BH_{0}\end{array}\right],\]

and assume that \(\Delta=\det({\bf G})=G_{0}H_{1}-G_{1}H_{0}\neq 0\). Then

\[{\bf G}^{-1}=\frac{1}{\Delta}\left[\begin{array}{cc}H_{1}&-G_{1}\\ -H_{0}&G_{0}\end{array}\right].\]

Introduce the notation \(C_{G}=G_{1}^{2}-AG_{0}G_{1}-BG_{0}^{2}\) (see also before the equations (7)-(9)), and analogously \(C_{H}=H_{1}^{2}-AH_{0}H_{1}-BH_{0}^{2}\). We also define \(E_{10}=G_{1}H_{1}-AG_{1}H_{0}-BG_{0}H_{0}\) and \(E_{01}=G_{1}H_{1}-AG_{0}H_{1}-BG_{0}H_{0}\). One can easily see that

\[{\bf M}={\bf G}^{\star}{\bf G}^{-1}=\frac{1}{\Delta}\left[\begin{array}{cc}E_{10}&-C_{G}\\ C_{H}&-E_{01}\end{array}\right]. \tag{31}\]

Clearly, \(k_{\bf M}(x)=x^{2}-Ax-B=(x-\alpha)(x-\beta)\) and \(\det({\bf M})=-B\). Obviously, \(\det({\bf M}^{n})=(-B)^{n}\), \(A=\alpha+\beta\) and \(B=-\alpha\beta\). (Note that \(\alpha\) and \(\beta\) are not necessarily distinct.) The entry sequences of the powers of the matrix \({\bf M}\) satisfy

\[m_{i,j}^{(n)}=Am_{i,j}^{(n-1)}+Bm_{i,j}^{(n-2)},\qquad(n\geq 2;\ 1\leq i,j\leq 2);\]

the initial values are clear from \({\bf M}^{0}={\bf I}\) and from \({\bf M}\) (here \({\bf I}\) is the \(2\times 2\) unit matrix). Now

\[\left[\begin{array}{c}m_{i,j}^{(0)}\\ m_{i,j}^{(1)}\end{array}\right]=c_{i,j}^{(1)}\left[\begin{array}{c}G_{0}\\ G_{1}\end{array}\right]+c_{i,j}^{(2)}\left[\begin{array}{c}H_{0}\\ H_{1}\end{array}\right],\qquad(1\leq i,j\leq 2). \tag{32}\]

Once we have the solutions to (32) in \(c_{i,j}^{(1)}\) and \(c_{i,j}^{(2)}\), then \(m_{i,j}^{(n)}=c_{i,j}^{(1)}G_{n}+c_{i,j}^{(2)}H_{n}\) holds.
A straightforward calculation shows that \[{\bf M}^{n}=\frac{1}{\Delta}\left[\begin{array}{cc}(-H_{0}m_{1,1}^{(1)}+H_ {1})G_{n}+(G_{0}m_{1,1}^{(1)}-G_{1})H_{n}&-H_{0}m_{1,2}^{(1)}G_{n}+G_{0}m_{1,2}^ {(1)}H_{n}\\ -H_{0}m_{2,1}^{(1)}G_{n}+G_{0}m_{2,1}^{(1)}H_{n}&(-H_{0}m_{2,2}^{(1)}+H_{1})G_ {n}+(G_{0}m_{2,2}^{(1)}-G_{1})H_{n}\end{array}\right],\] where we used the values \(m_{1,1}^{(0)}=m_{2,2}^{(0)}=1\), \(m_{1,2}^{(0)}=m_{2,1}^{(0)}=0\). In fact, to determine \(\det({\bf M}^{n})\) we do not need the exact values \(m_{i,j}^{(1)}\) given in (31), only the identities \(m_{1,1}^{(1)}m_{2,2}^{(1)}-m_{1,2}^{(1)}m_{2,1}^{(1)}=\det({\bf M})=-B\) and \(m_{1,1}^{(1)}+m_{2,2}^{(1)}=\mbox{tr}({\bf M})=A\). Indeed, if we figure out simply the determinant of the matrix \({\bf M}^{n}\), and collect the coefficient of the terms \(G_{n}^{2}\), \(G_{n}H_{n}\), and \(H_{n}^{2}\), respectively, then the key moment of the simplification is the application of these identities. Finally, the computations lead to \[(-B)^{n}=\det\left({\bf M}^{n}\right)=\frac{C_{H}}{\Delta^{2}}G_{n}^{2}+\frac{ C_{GH}}{\Delta^{2}}G_{n}H_{n}+\frac{C_{G}}{\Delta^{2}}H_{n}^{2},\] where \(C_{GH}=-(E_{10}+E_{01})\). Moreover one can show that \(C_{GH}\) can be given by using the corresponding associate sequences: \(C_{GH}=G_{0}\widehat{H}_{1}-G_{1}\widehat{H}_{0}=H_{0}\widehat{G}_{1}-H_{1} \widehat{G}_{0}\). Then we have the desired equality given in **Theorem 5**.: _The terms of the recurrences above satisfy_ \[C_{H}G_{n}^{2}+C_{GH}G_{n}H_{n}+C_{G}H_{n}^{2}=(-B)^{n}\Delta^{2}. \tag{33}\] This is a nice common generalization of (7)-(9). Indeed, observe that Theorem 5 leads to (7) whenever \((H)\) is the associate sequence of \((G)\). Really, it is easy to show that \(C_{\widehat{G}}=-DC_{G}\), \(C_{G\widehat{G}}=0\) (i.e. \(C_{GH}\) vanishes if \((H)=(\widehat{G})\)), and in this particular case \(\Delta=-2C_{G}\) holds. Insert these values into (33) to get immediately (7). For (9) let \(H_{n}=G_{n+1}\), the details here are omitted. The binary quadratic form on left-hand side of (33) is trivially decomposable. The decomposition is formulated by \[((H_{1}-\alpha H_{0})G_{n}-(G_{1}-\alpha G_{0})H_{n}))\cdot((H_{1}-\beta H_{0 })G_{n}-(G_{1}-\beta G_{0})H_{n})=(-B)^{n}\Delta^{2}.\] We now illustrate Theorem 5 by five examples given in Table 1. One or two coefficients from \(\{C_{G},C_{H},C_{GH}\}\) may vanish in (33), which provides a large variety of identities. 
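For readers who wish to check identity (33) numerically before looking at Table 1, the following short Python sketch computes \(C_{G}\), \(C_{H}\), \(C_{GH}\) and \(\Delta\) from the parameters \(A,B\) and the initial values, and verifies the identity for the first few indices. The function name `check_identity` and the chosen test values are only illustrative.

```python
def check_identity(A, B, G0, G1, H0, H1, n_max=10):
    """Verify C_H*G_n^2 + C_GH*G_n*H_n + C_G*H_n^2 = (-B)^n * Delta^2 for n = 0..n_max."""
    CG = G1**2 - A*G0*G1 - B*G0**2
    CH = H1**2 - A*H0*H1 - B*H0**2
    E10 = G1*H1 - A*G1*H0 - B*G0*H0
    E01 = G1*H1 - A*G0*H1 - B*G0*H0
    CGH = -(E10 + E01)
    Delta = G0*H1 - G1*H0
    # Generate the two sequences by the common recursion x_n = A*x_{n-1} + B*x_{n-2}.
    G, H = [G0, G1], [H0, H1]
    for _ in range(2, n_max + 1):
        G.append(A*G[-1] + B*G[-2])
        H.append(A*H[-1] + B*H[-2])
    for n in range(n_max + 1):
        lhs = CH*G[n]**2 + CGH*G[n]*H[n] + CG*H[n]**2
        rhs = (-B)**n * Delta**2
        assert lhs == rhs, (n, lhs, rhs)
    return True

# Second example of Table 1: A=2, B=-1, G_0=2, G_1=3, H_0=4, H_1=5.
print(check_identity(2, -1, 2, 3, 4, 5))  # -> True
```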
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\((A,B)\) & \({\bf G}\) & \({\bf M}\) & \((-B)^{n}\Delta^{2}=C_{H}G_{n}^{2}+C_{GH}G_{n}H_{n}+C_{G}H_{n}^{2}\) \\
\hline \hline
\((0,4)\) & \(\left[\begin{array}{cc}1&2\\ 2&3\end{array}\right]\) & \(\left[\begin{array}{cc}2&0\\ 7&-2\end{array}\right]\) & \((-4)^{n}(-1)^{2}=-7G_{n}^{2}+4G_{n}H_{n}=G_{n}(4H_{n}-7G_{n})\) \\
\hline
\((2,-1)\) & \(\left[\begin{array}{cc}2&3\\ 4&5\end{array}\right]\) & \(\left[\begin{array}{cc}1/2&1/2\\ -1/2&3/2\end{array}\right]\) & \(1^{n}(-2)^{2}=G_{n}^{2}-2G_{n}H_{n}+H_{n}^{2}=(G_{n}-H_{n})^{2}\) \\
\hline
\((7,-10)\) & \(\left[\begin{array}{cc}0&1\\ 2&7\end{array}\right]\) & \(\left[\begin{array}{cc}7/2&1/2\\ 9/2&7/2\end{array}\right]\) & \(10^{n}(-2)^{2}=-9G_{n}^{2}+H_{n}^{2}\) \\
\hline
\((7,-10)\) & \(\left[\begin{array}{cc}1&2\\ 1&5\end{array}\right]\) & \(\left[\begin{array}{cc}2&0\\ 0&5\end{array}\right]\) & \(10^{n}3^{2}=9G_{n}H_{n}\) \\
\hline
\((4,-1)\) & \(\left[\begin{array}{cc}1&2\\ 3&4\end{array}\right]\) & \(\left[\begin{array}{cc}13/2&-3/2\\ 23/2&-5/2\end{array}\right]\) & \(1^{n}(-2)^{2}=-23G_{n}^{2}+18G_{n}H_{n}-3H_{n}^{2}\) \\
\hline
\end{tabular}
\end{table}
Table 1: Binary recurrence examples.

### Ternary recurrences

Instead of writing out the complicated general formula, it is better to present an example which is typical of the ternary forms we are studying. Examples for higher order linear recurrences can be produced analogously. There are only a few papers in the literature which work with three different ternary recurrences satisfying the same recurrence rule. An example is [13], which studies the Narayana sequence \((A_{n})\), the Narayana-Lucas sequence \((B_{n})\), and the Narayana-Perrin sequence \((C_{n})\). These recurrences satisfy

\[x_{n}=x_{n-1}+x_{n-3}\]

with initial values given by

\[A_{0}=0,A_{1}=1,A_{2}=1;\quad B_{0}=3,B_{1}=1,B_{2}=1;\quad C_{0}=3,C_{1}=0,C_{2}=2,\]

respectively. Let \(\alpha_{i}\) \((i=1,2,3)\) denote the simple zeros of the characteristic polynomial \(k(x)=x^{3}-x^{2}-1\) such that \(\alpha_{1}\in\mathbb{R}\) and \(\alpha_{3}\) is the complex conjugate of \(\alpha_{2}\). Following the method we described in detail above, we arrive at the diophantine equation

\[-187x^{3}+159x^{2}y-45x^{2}z-189xy^{2}+306xyz-117xz^{2}+y^{3}-45y^{2}z+63yz^{2}-27z^{3}=-216 \tag{34}\]

possessing infinitely many integer solutions \(x=A_{n}\), \(y=B_{n}\), \(z=C_{n}\). In this case, \(\Delta=-6\), and the base of the exponential term is \(1\). Note that all \(10\) coefficients in the corresponding \(\tilde{F}\) are non-zero. The ternary form of (34) can be decomposed in the algebraic number field \(\mathbb{Q}[\alpha_{1},\alpha_{2}]\) as

\[-\prod_{i=1}^{3}\left((3\alpha_{i}^{2}+3\alpha_{i}-2)x+(-3\alpha_{i}^{2}+3\alpha_{i}+2)y+(3\alpha_{i}^{2}-3\alpha_{i})z\right).\]

### Acknowledgments

We thank Robert Tichy for his help in the formulation of Theorem 3. K. Gyory and L. Szalay were supported by the Hungarian National Foundation for Scientific Research Grants No. 128088 and No. 130909. For L. Szalay this research was supported by the Slovak Scientific Grant Agency VEGA 1/0776/21.
2304.14275
What's in a Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files
Semantic knowledge of part-part and part-whole relationships in assemblies is useful for a variety of tasks from searching design repositories to the construction of engineering knowledge bases. In this work we propose that the natural language names designers use in Computer Aided Design (CAD) software are a valuable source of such knowledge, and that Large Language Models (LLMs) contain useful domain-specific information for working with this data as well as other CAD and engineering-related tasks. In particular we extract and clean a large corpus of natural language part, feature and document names and use this to quantitatively demonstrate that a pre-trained language model can outperform numerous benchmarks on three self-supervised tasks, without ever having seen this data before. Moreover, we show that fine-tuning on the text data corpus further boosts the performance on all tasks, thus demonstrating the value of the text data which until now has been largely ignored. We also identify key limitations to using LLMs with text data alone, and our findings provide a strong motivation for further work into multi-modal text-geometry models. To aid and encourage further work in this area we make all our data and code publicly available.
Peter Meltzer, Joseph G. Lambourne, Daniele Grandi
2023-04-25T12:30:01Z
http://arxiv.org/abs/2304.14275v1
# What's in a Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files

###### Abstract

Semantic knowledge of part-part and part-whole relationships in assemblies is useful for a variety of tasks from searching design repositories to the construction of engineering knowledge bases. In this work we propose that the natural language names designers use in Computer Aided Design (CAD) software are a valuable source of such knowledge, and that Large Language Models (LLMs) contain useful domain-specific information for working with this data as well as other CAD and engineering-related tasks. In particular we extract and clean a large corpus of natural language part, feature and document names and use this to quantitatively demonstrate that a pre-trained language model can outperform numerous benchmarks on three self-supervised tasks, without ever having seen this data before. Moreover, we show that fine-tuning on the text data corpus further boosts the performance on all tasks, thus demonstrating the value of the text data which until now has been largely ignored. We also identify key limitations to using LLMs with text data alone, and our findings provide a strong motivation for further work into multi-modal text-geometry models. To aid and encourage further work in this area we make all our data and code publicly available.

**Keywords**: Artificial intelligence, Big data and analytics, Computer aided design, Data driven engineering, Machine learning for engineering applications

## 1 Introduction

In the mechanical engineering domain, natural language is used by designers and engineers throughout the design process: to express design requirements, document design intent, and communicate ideas and solutions to others in the complex network of people working together to create a single product. Computer-Aided Design (CAD) software is often used to document and communicate design decisions, where designers create assemblies of parts, and use natural language to name each part and the assembly itself for documentation and collaboration purposes. This text often contains important semantic information about the individual parts and/or the roles they play in an assembly. Additionally, by grouping the text associated with the parts in an assembly together, there is the potential for whole-part relationships to be recovered. While the geometry of parts and their relationships in assemblies have been extensively studied [1, 2, 3, 4, 5], and various data mining and machine learning techniques have been used to build knowledge bases relevant to design [6, 7, 8], the natural language inside CAD models has been largely ignored [9]. Recently, large language models (LLMs) have revolutionized the field of natural language processing (NLP), becoming the standard tool used for machine translation [10] and showing impressive performance on standard tasks like text summarization [11] and question answering in a few-shot setting [12]. As these models are trained on a broad range of human knowledge [13, 14, 15], they have some exposure to the domain-specific vocabulary and mechanical engineering concepts useful for understanding mechanical CAD data. In this work, we evaluate their effectiveness at solving three problems that designers currently face when querying CAD content libraries and design repositories or managing the designs in product lifecycle management (PLM) systems.
The use of CAD content libraries greatly speeds up the design process, allowing engineers and designers to import models of standard parts and automatically add the part numbers to a bill of materials. However searching these libraries for parts is often a tedious process. One way the process can be accelerated is for the content library to provide recommendations for additional parts which may be suitable to add to the same assembly akin to a "recently downloaded" or "frequently purchased together" part. To facilitate this, we investigate the ability of large language models to predict whether two parts are commonly found in the same assembly. CAD content libraries may also benefit from having a better understanding of the model a designer is currently working on, allowing more relevant recommendations and better targeted search functionality. Utilizing the text in CAD models to help provide this contextual information is an appealing solution, as the data can be efficiently extracted from the CAD software without computationally expensive interrogation of the model geometry and rapidly transmitted to a content provision server. To validate the ability of large language models to utilize contextual information from CAD part names to improve recommendations, we analyze the accuracy with which they can predict the name of a randomly selected part when the other non-default part names in the assembly are provided. In addition to content recommendations, this kind of prediction could be used to identify opportunities for the re-use of parts from a PLM system during the modeling process. A final problem, which large language models may help address, is to be able to identify the type of object modelled by a given assembly from a list of its part names. The ability to cluster and group assemblies together without the need for expensive geometric analysis would allow automatic categorization of assemblies in a PLM system. Additionally, CAD software could use this kind of information to identify the kinds of parts being modelled and show more relevant options on quick access toolbars and provide content-aware help and documentation. As there are no large datasets of CAD assemblies classified by object category, we instead investigate the ability of large language models to predict the user-defined names of OnShape documents. As around 20% of OnShape document names consist of a few words describing the assembly as a whole (e.g. "Coffee Mug", "Mechanical Pencil"), the ability to predict assembly names indicates the model could be used to group similar assemblies and classify them using few-shot learning techniques. By evaluating the performance of a pre-trained language model at these three tasks, we also gain an understanding of the amount of mechanical engineering information which is inherited from its initial training and can study the amount by which performance can be increased by fine-tuning. Our key contributions are: 1. We extract and clean a natural language corpus built from all the non-default text strings in the ABC dataset [16]. This includes the names of parts and modeling features along with the name of the document which contained them. 2. We quantitatively evaluate a pre-trained DistilBERT language model on three self-supervised tasks. We demonstrate that, without fine-tuning, the language model has superior performance over numerous benchmarks including TechNet [7], which was specifically designed to extract mechanical engineering knowledge from patent data. 3. 
We show that fine-tuning the language model allows further improvements in performance. The model is able to learn new relationships between part numbers and other domain-specific vocabulary, providing a pathway to better understand part assembly relationships by training on larger datasets. The source code and data for this project are available at [https://github.com/AutodeskAILab/WhatsInAName](https://github.com/AutodeskAILab/WhatsInAName). ## 2 Related work ### Transformers Since their introduction by Vaswani _et al._[17], transformer models have become the predominant network architecture used for a wide range of NLP tasks such as machine translation [10], question answering [18] and text summarization [11]. Given an array of tokens, derived from words in the input text, they are trained either to predict the next token in the sequence [19] or the values of certain masked tokens [20]. The key innovation in the transformer architecture is the self-attention mechanism, which allows the model to learn to focus on certain combinations of tokens. Scaling up both the size of the transformers and the corpus they are trained on, has allowed good performance on many NLP tasks with minimal fine-tuning [12]. The DistilBERT model [21] is especially designed for transfer learning and fine-tuning on new tasks, and is initially trained to mimic the outputs of the full-size version of BERT [20], a LLM that has been successfully leveraged in both academia and industry [22], while requiring much less memory and being faster to compute. The data for pre-training includes English Wikipedia [13] and BookCorpus [14]. As the parts in CAD models are not canonically ordered, it is useful to work with their names as an unordered set. The Set Transformer [23] has the advantage of permutation invariance with respect to the input tokens. It also avoids the quadratic complexity which is usually present in the self-attention mechanism, allowing long lists of data to be consumed. Beyond conventional NLP settings alone, advances in language models have also enabled great progress in multi-modal tasks such as zero-shot image classification [24] and text-to-image generation [25]. As of yet, there has been no work done on the analogous text-CAD applications. ### CAD search, retrieval and clustering One area where the text from CAD models has been utilized is for search and retrieval, however most published work in this area focuses on geometric similarity [9; 26]. Of the 50 papers analysed by Schinko _et al._[9], only 6 of them utilize metadata of any kind. Fisher _et al._[27] incorporate a keyword search into a 3D model finding algorithm. The text is derived from the CAD filename and scene graph nodes, and matching is done based on the intersection over union of the sets of words in the query. Funkhouser _et al._[28] crawled the web looking for VRML models and derived text annotations from the filenames and nearby text on the websites. The text is cleaned by removing common words using the SMART document retrieval system [29] and then matches were made using the TF-IDF/Rocchio method [30]. Yi _et al._[31] utilized part names in the ShapeNet scene graph [32] to learn shape hierarchies using an Expectation-Maximization (EM) algorithm, however they manually pre-process the noisy user-provided text into a tag dictionary rather than feeding it directly to a machine learning model. 
None of these works evaluate the utility of the raw text data for understanding parts that commonly occur in the same assembly, or for discovering part-whole relationships. ### Multi-modal models Another area where text has been used is in the creation of multi-modal models. These models embed both text and geometry information into a joint embedding space. Chen _et al._[33] collected natural language descriptions for chairs and tables in the ShapeNet dataset. The text is encoded using a character level CNN and RNN (GRU) and the geometry encoded with a 3D CNN. The text and geometry embeddings were built into a joint embedding space which could be used for search and generation using a Wasserstein GAN framework. Han _et al._[34] used the same text data with sequences of multi-view images of the 3D models. An RNN encoder is used to encode the data for each modality, and two RNN decoders jointly reconstruct the modality itself and predict the data for the counterpart modality. More recently a number of works have used pre-trained CLIP models [24] for text-to-image generation [25; 35; 36], text-to-3D generation [37; 38; 39] and text based search [40]. ### Engineering focused knowledge bases Engineering knowledge bases, or design repositories, are databases that contain product design knowledge useful to design engineers [41; 42; 43]. Over the years, various schemas and taxonomies have been proposed to organize design knowledge [44; 45; 46; 47; 48]; however, despite efforts to streamline the process of adding data to design repositories [49], adoption in research and industry has been limited, due to the resource commitment necessary to expand these knowledge bases, and the costly manual human input required. Recent efforts have facilitated the creation of engineering knowledge bases via unsupervised knowledge-mining methods. Shi [6] constructed a semantic network for design and engineering knowledge by text mining close to a million mechanical engineering papers from ScienceDirect [50]; resulting in an ontology with a higher retrieval rate for engineering concepts than generic knowledge bases like WordNet [51], ConceptNet [52], and NeLL [53]. Sarica _et al._ created the TechNet [7] semantic network using the titles and abstracts of 5 million patents from the USPTO patent database, and used a word2vec model [54] to create vector embeddings for 4 million entities. Technet embeddings have been used to encode the semantic names of parts in CAD models [55]. Our work differs from previous research in that we leverage the knowledge of parts and their relationships available in a pre-trained LLM, rather than seeking to extract and represent it explicitly using specialized corpora or manually constructed knowledge bases. ## 3 Data A key contribution of this work is the extraction and publication of a cleaned natural language corpus of text strings from the ABC dataset, which we primarily use for evaluation purposes. We now describe the methodology used and provide a snapshot of the types of strings included. ### Data Extraction The text data used in this work was extracted from the ABC dataset [16], which contains CAD models created by users of the OnShape software. OnShape models are organized into "documents", which can contain multiple "tabs" for parts, assemblies, and drawings. The dataset contains metadata for each document, including a user-defined document name and any additional text the user has supplied as a description. 
In this work we make the assumption that all the solid bodies inside a given document are related in some way. Designers can choose to model multiple parts in a single tab, or to design each part in a separate tab and then combine them into an assembly. While not all documents represent assemblies, we consider most documents to contain a similar combination of parts to those found in an assembly. We extract the part names from the MANIFOLD_SOLID_BREP entities of the STEP files and aggregate these for all STEP files in a document. If the same string appears more than once in a single STEP file, then we maintain the largest multiplicity this string has for any STEP file derived from the document. This preserves the multiplicity of parts that commonly appear more than once in a design (4 wheels on a car) while avoiding duplication due to multiple design revisions of parts, which are sometimes kept in separate tabs. In addition, we extract the names of all modeling features from the ABC metadata files. Many of the part names and feature names are the defaults automatically generated by the OnShape software. For example, solids are often named "Part \(n\)", where \(n\) is some small integer. The default feature names include the feature type and an integer, i.e. "Extrude \(n\)" or "Revolve \(n\)". These default names are removed. While only 47% of the solids in the entire ABC dataset contain non-default names, we find that in documents which contain at least one non-default part name, 77% of solids' names are non-default. The ABC dataset contains many duplicate documents. To remove duplicates we take all the strings extracted from a given document and sort them. If this sorted list of strings is identical to any other in the dataset then the two documents are considered as a duplicate and one copy is removed. We also remove the prefix "Copy of" and suffix "- Copy" from document names as these affixes appear to have been added automatically by OnShape when documents are cloned. The full ABC dataset contains files from 456,811 OnShape documents. Of these, 80,282 documents have at least one non-default part name or feature name after deduplication. Inside these documents we have a total of 950,335 non-default part names and 562,339 non-default feature names. 39,613 documents contain two or more non-default part names. The dataset is divided into a 70%/15%/15% train, validation, test split. In all our experiments, we further pre-process the dataset by converting all strings to lowercase and replacing all underscores with whitespace to remove potential opportunities for the models to cheat by spotting common naming conventions between parts and names from the same documents. However, we preserve the original cases in the version which we share in order to enable their use in further research if required. ### Data Snapshot We provide a small snapshot of the extracted text data here to illustrate some of the challenges it presents. In Table 1 we show a sample of 20 document names organised according to the semantic relationship between the name and the parts contained in the document. The "Clean Semantic" names present the most ideal case, as we have a strong indication of the contents of the document. For "Noisy Semantic" it is also clear, but the names contain noise which may affect automated processing or use in language models. The "Ambiguous" names present a worst-case scenario since, with respect to mechanical engineering or assembly design, they tell us nothing about the contained parts. 
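For concreteness, the extraction and deduplication steps described in subsection 3.1 can be summarized in a short sketch (a minimal illustration in Python; the regular expressions, affix handling, and helper names are ours and are not the released pipeline):

```python
import re
from collections import Counter

# Illustrative patterns (not the released pipeline): solid names in STEP files
# appear as MANIFOLD_SOLID_BREP('Name', ...); default OnShape names look like "Part 3".
BREP_NAME = re.compile(r"MANIFOLD_SOLID_BREP\s*\(\s*'([^']*)'")
DEFAULT_PART = re.compile(r"^Part \d+$")

def step_part_names(step_text: str) -> Counter:
    """Non-default solid names in one STEP file, with their multiplicities."""
    names = [n for n in BREP_NAME.findall(step_text) if not DEFAULT_PART.match(n)]
    return Counter(names)

def document_part_names(step_texts: list[str]) -> Counter:
    """Keep, for each name, the largest multiplicity seen in any STEP file of the document."""
    merged = Counter()
    for text in step_texts:
        merged |= step_part_names(text)  # Counter union = element-wise max of counts
    return merged

def clean_document_name(name: str) -> str:
    """Strip the affixes OnShape adds when documents are cloned (exact affix handling may differ)."""
    return name.removeprefix("Copy of ").removesuffix(" - Copy").strip()

def dedup_key(doc_strings: list[str]) -> tuple:
    """Documents whose sorted lists of extracted strings match are treated as duplicates."""
    return tuple(sorted(doc_strings))
```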
To get a sense of the distribution of these categories in the rest of the dataset, we manually classified 250 randomly selected document names. In cases where domain-specific vocabulary or product names were used, we searched the internet to establish if the object could be identified from the document name without ambiguity. The percentage of document names in each category is shown in Table 1.

\begin{table} \begin{tabular}{l l l l} \hline \hline Clean Semantic & 20.8\% & Coffee Mug & Mechanical Pencil \\ & & OS kinematics & OS Chess \\ & & Torch Light For Bike & Yoke \\ & & Bottle & Concept Vehicle \\ & & Mounting Arm & \\ \hline Noisy Semantic & 34.8\% & Lava Lamp 2 & Sample - Headphones \\ & & Dave’s Handsome Mug & Sample - Bicycle Helmet \\ \hline Ambiguous & 44.4\% & Assem Test & Sample Document 2 \\ & & Left.x\_t & Part Performance test \\ & & Part Per. Test 2 & Fasteners\_onshape\_Support \\ & & Mark’s First Document & \\ \hline \hline \end{tabular} \end{table} Table 1: Sample document names grouped according to how much the names tell us about the parts included inside. Percentage figures show estimates of the proportions of each string type based on a manually labeled random sample of 250 document names.

Table 2 shows the parts from a sample of 11 documents. Again there is a varied level of semantics present. In some cases it is obvious what the assembly or collection of parts would be, i.e. a chess set or a house. There are also many cases of materials, i.e. Oriented-Strand-Board (OSB), and commercial part codes. However, sometimes it is unclear whether or not we have part codes or custom naming conventions, i.e. an internet search for 47065t604 does not show this to be a part code in common usage. Other noise such as dimensions may also weaken the signal in the semantics of a part, while the inclusion of many non-English languages could also present a challenge, as we do not have enough strings from each language to enable multi-lingual generalization.

Fasteners are one of the most common types of component in mechanical assemblies. Using the database of fasteners which Autodesk Inventor provides as its content library, we are able to automatically detect fasteners in the dataset with types 'BS', 'KS', 'DIN EN', 'DIN', 'ISO', 'AS', 'UNI', 'IS', 'ANSI' and 'DIN EN ISO'. Fasteners are considered to be identified without ambiguity if the fastener type and dimensions are matched in a part name. For example, an "ISO 4762 Cap screw" would need to match the substrings "ISO" and "4762", with optional whitespace or hyphens, and dimensions of the form "M\(n_{1}\)x\(n_{2}\)" where \(n_{1}\) and \(n_{2}\) are integer values which are valid dimensions for that fastener type. We find that 471 documents contain at least one fastener and 5008 fasteners are identified in total without ambiguity.

To provide an estimate of the quantities of different kinds of part name strings in the dataset, we randomly select 250 part names and classify them into the categories in Table 2.

\begin{table} \begin{tabular}{l c l} \hline \hline Part Codes or Materials? & 1.6\% & OSB-O-S-4, OSB-I-S-3, OSB-O-J-3, ram-krokev-j, OSB-I-J-4, OSB-O-S-1, OSB-O-J-2, ram-pozed, OSB-I-J-5, OSB-I-J-2, hotovo, OSB-I-S-4, OSB-I-S-1, ram-pozed-vym, OSB-O-J-5, deska-pozed, OSB-I-J-1, OSB-I-J-3, vrchol-O, OSB-O-0, ram-krokev-s, OSB-O-J-1, OSB-O-S-2, OSB-O-S-3, OSB-I-S-5, OSB-O-S-5, OSB-I-S-2, OSB-O-J- \\ Part Codes or & 15.6\% & 47065t604, 47065t217, 47065t258, 47065t601, 47065t261, 47065t254, 47065t845, i,id6\_x\_t \\ Clean Natural Language & 23.6\% & Bishop, Knight, Queen, Pawn, Castle, King \\ & & room, door, window \\ & & upright, cylinder, cylinder top, shaft, baseplate, crank, bearing, flywheel, con-rod, socket, wheel pin, handle, back plate, rod pin, handle shaft \\ Natural Language with Ordinal Numbers & 14.8\% & Bathtub, House Frame, 1st Floor, Toilet, Walls, Utility Room, Walls, Basement Bath, Plumbing, Sink, 1st Floor Ceiling, 2nd Floor, Basement Ceiling, 2nd Floor Stairs, Basement Stairs, Walls, Basement \\ & & Upright 2, Big Gear Rod, Pie Holder of Doom, Hangar 1, Randomizer 1, Cross Support 2, Throwing Arm 1, Randomizer 2, Little Gear, Big Gear, Throwing Arm Rod, Randomizer 3, CAM, Randomized 4, Propeller Rod, Side 2, Little Gear Rod, Upright Support 1, Hangar 2, Upright Support 2, Upright 1, Propeller, Side 1, Throwing Arm 2, Cross Support 1, Upright Support 3 \\ Natural Language with Dimensions & 8.4\% & 18x24x76 Cabinet, 6x24x76 Cabinet, 23x24 Shelf \\ Natural Language with Abbreviations & 9.6\% & flat knob, REVERSE MBOX, domed knob, d.knob moldbox \\ Fasteners & 2.8\% & \_Slotted head cap screw ISO 4762 - M6x16, \_Hexagon socket set screw with flat point DIN 913 - M5x5\_2, \_Slotted head cap screw ISO 4762 - M6x16\_3, \_Hexagon socket set screw with flat point DIN 913 - M5x5\_3, \_Hexagon socket set screw with flat point DIN 913 - M5x5\_3, \_Hexagon socket set screw with flat point DIN 913 - M5x5, \_Slotted head cap screw ISO 4762 - M6x16\_1, \_Slotted head cap screw ISO 4762 - M6x16\_2 \\ Non-English with Special Characters & & \\ \hline \hline \end{tabular} \end{table} Table 2: Parts from a sample of 10 documents grouped according to the level and interpretability of the semantics included. The percentages indicate the estimated fraction of all part names in the dataset with strings of a similar type. A further 20.4% of the strings were ambiguous.

The middle column of Table 2 shows the estimated fraction of all unique strings in the dataset which fall into each category. To further quantify the fraction of the text with some semantic content, we count the number of unique strings containing words which match nouns which ConceptNet has tagged as "artifacts" [52]. We find that 51% of the part names contain at least one word matching this criterion.
This is approximately consistent with the fraction of strings in the "Clean natural language", "Natural language with ordinal numbers", "Natural language with dimensions", "Natural language with abbreviations" and "Fasteners" categories.

## 4 Method

### Part and Document Name Representations
To encode the strings in the part names we seek to leverage the prior knowledge in a pre-trained version of DistilBERT [21]. Specifically, we use distilbert-base-uncased from Hugging Face [56]. Since each part string can be formed from multiple words, and even each word can be formed from multiple tokens, we pool the outputs of the language model to form a single vector for each part/document name string. Our initial experiments indicated that taking the mean of the output tokens in the second-to-last layer was most effective.

### Document Representations
We have two key considerations to take into account when representing a whole document. First, there can be an arbitrary number of parts, and second, the parts could appear in any order. Thus, we opt for a Set Transformer [23] to encode the set of embeddings for each part, since the set transformer is invariant to the order of parts and can handle arbitrarily sized input sets.

### Self-Supervised Evaluation
We design three self-supervised tasks to quantitatively evaluate the ability of LLMs to address the search and recommendation problems described in the introduction and our hypothesis that they contain useful knowledge relevant to mechanical engineering. The performance at these tasks can then be compared against numerous baselines. The tasks are intended to evaluate each model's understanding of which parts commonly occur together in assemblies, and how easily the nature of the entire assembly can be understood from a list of its component parts.

Figure 1: Architecture for training and evaluation of “Two Parts” experiment. **A**: The parts from documents are embedded by the language model which is pre-trained and has all its weights fixed. **B**: Parts are sampled either from the same assembly to produce positive pairs, or from different assemblies to produce negative pairs. **C**: The labels generated by the sampling process, along with the positive/negative pairs, are used to train and evaluate a small MLP.

First, we consider whether the language model contains useful signals with which to tell if two parts are from the same document or not. Next, we consider how well a randomly selected part from a document can be identified given the remaining parts, and finally, we consider how well the document name can be identified from the document's parts. The design of these tasks thus enables quantitative comparisons between different language models and baselines despite the lack of ground truth labels in the dataset's original form. We aim to evaluate how much information the DistilBERT language model knows about mechanical design, based only on its pre-training, and to measure how much fine-tuning will further improve performance. To achieve this, the three self-supervised tasks are used to evaluate both the base and fine-tuned DistilBERT models and the results are reported in section 5. In all experiments, fine-tuning of the language model is done using the training set only, using a standard LLM fine-tuning task (see subsection 4.4) rather than fine-tuning directly on each task. The training set is also used to train any baseline methods that do not have pre-trained embeddings available.
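As a concrete illustration of the name-embedding step described above, the following minimal sketch (assuming the Hugging Face `transformers` API; this is not the released implementation, and the exact pooling details may differ) mean-pools the second-to-last layer of distilbert-base-uncased to produce one vector per part or document name:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased", output_hidden_states=True)
model.eval()

def embed_name(name: str) -> torch.Tensor:
    """Mean-pool the second-to-last hidden layer into a single 768-d vector."""
    # Lowercasing and replacing underscores mirrors the pre-processing in section 3.
    inputs = tokenizer(name.lower().replace("_", " "), return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states  # tuple: embedding layer + 6 blocks
    return hidden_states[-2].mean(dim=1).squeeze(0)    # shape (768,)
```

Frozen embeddings of this kind are what the downstream MLP and Set Transformer consume in the tasks below.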
#### 4.3.1 Two Parts The ability to identify whether pairs of parts commonly co-occur in the same assembly is useful for CAD content recommender systems. We frame this task as a binary classification problem, in which a small Multi-Layer Perceptron (MLP) must predict whether or not two parts are taken from the same document. The input to the MLP is the concatenated LLM embeddings of either a positive or negative pair of parts, where positive pairs belong to the same document and negative pairs are from different documents, and the MLP is trained using a Binary Cross Entropy (BCE) loss. A high-level view of the architecture is shown in Figure 1. This experiment can also be used to assess the level of information about mechanical design present in a pre-trained LLM, as well as our baseline methods. As such, we opt for a small downstream model (the MLP), which contains only a single layer of 100 hidden units - this limited capacity ensures that the downstream network must rely most heavily on the signals already present in the language model embeddings to solve the task. All weights in the LLM are fixed during training of the MLP. The labels are generated in a contrastive approach by sampling positive and negative pairs from the dataset. Positive pairs are formed of two parts taken from the same document, while negative pairs are sampled from different documents. To select the positive pairs, we consider only parts where the sets of tokens are distinct. This excludes pairs with the same part names or pairs containing the same sub-words. Here we use NLTK Wordpunkt tokens [57] for each part. We then shuffle the pairwise correspondence to produce the negative pairs, and any resulting negative pairs which actually occur together somewhere in the dataset are removed. Since this results in fewer negative than positive pairs, we discard any additional positive pairs at random to balance the class distribution. Thus our result is a balanced binary classification dataset, where a random classifier would be expected to score 50% accuracy. Due to the small size of the evaluation model (the MLP), we further split the held-out test set into a subset for train, validation, and test with the same ratios as before. This gives us approximately 16.5K/3.5K/3.5K pairs for train, validation, and test respectively. This means the MLP is trained and evaluated on data that none of the language models have ever seen before, allowing us to compare against language models fine-tuned using the original train data, and provides a good means to evaluate generalization. #### 4.3.2 Missing Part Next, we wish to evaluate our approach at predicting a missing part from a document, which is directly applicable to recommending a commercially available off-the-shelf part that a designer might want to insert into an assembly. For the documents from which text was extracted to create our dataset, 23% of the part names were left as default and 77% had user-provided names, thus by excluding the default part names we have a snapshot of a partially completed assembly. This type of recommendation could be useful both in the conceptual design stage and for part re-use recommendations during the modelling stage. To again enable quantitative evaluation we also frame this experiment as a classification problem. In particular, we embed a whole batch of 512 part strings, and for each document we remove one part at random and try to correctly identify that removed part from the rest of the batch of parts. 
We use only a batch of 512 candidates rather than the entire set of possible parts since the number of possible parts is very large. In fact it is an order of magnitude greater than the typical vocabulary size of an LLM. We also wish to reduce the likelihood of including multiple very similar parts. While this simplifies the problem, all methods are tested in the same way, thus our comparisons hold. For training (see Figure 2(a)) we use the LLM embedding of the removed part directly as an auto-regressive target for a set encoder, which is given the remaining parts from the document as input. A Mean Squared Error (MSE) loss is used to minimize the distance between the predicted part embedding and the embedding of the target. For inference and evaluation (see Figure 2(b)) we mix the removed part string embedding into the embeddings for the rest of the part strings in the batch to form the target candidates. Then we use the cosine similarity to find the closest candidate to the predicted part embedding produced by the set encoder. A random classifier would be expected to achieve approximately 0.2% accuracy on this task. Due to the increased model size in this experiment, we train the set transformer on the full training set; however, the evaluation test set remains completely unseen by any of the language models.

Fig. 2: Architecture for “Missing Part” experiment.

#### 4.3.3 Document Name
As a final experiment, we consider the task of predicting a document's name. This gives an indication of the extent to which the language model can effectively predict part-whole relationships. Recall that 'documents' are loosely related to assemblies, and as such in many cases it is feasible and realistic to attempt to identify the document's name from the parts contained within it. The ability to correctly classify a collection of parts can be applied directly to the automatic categorization of assemblies in a PLM system. Moreover, with an understanding of an assembly as a whole, CAD software can tailor quick access tools and documentation to suit the class of object being modelled and suggest relevant design standards to follow. We expect this task to be the hardest of the three, in that to solve it effectively on the unstructured and diverse data included in the ABC dataset would require significant 'world knowledge'. We frame this experiment as per the "Missing Part" task, except that we provide all non-default document parts as input and use the document names as the targets. Again using a batch size of 512, the expected performance of a random classifier would be 0.2% accuracy.

### Fine-Tuning
We note that we do not fine-tune the LLM separately for each task; instead, we fine-tune only once on a corpus generated from the ABC text data. To generate the corpus we use the following template string: "An assembly with name {DOCUMENT_NAME} contains the following parts: {PART_1}, ..., {PART_N}.\n". The pre-trained language model is then fine-tuned line-by-line for 3 epochs using all default settings with a script provided by Hugging Face [56]. This corpus also serves as a complete training corpus for our baseline models which require training (see subsection 4.5).

### Baselines
For the simplest baselines we compare against a bag-of-words approach (BOW) using both frequency-based word vectors and term frequency-inverse document frequency (TF-IDF) word vectors. We also compare against the pre-trained TechNet word embeddings [7], which are mean pooled to provide embeddings for each part or document name.
Finally, for a learning-based baseline trained entirely from scratch on the ABC text data, we compare against FastText [58], which is trained on the fine-tuning corpus generated from the full train set (see subsection 4.4). Training FastText from scratch allows us full control over the hyperparameters and thus we are able to sweep the embedding size over a range comparable to DistilBERT rather than adopting the pre-trained common crawl embeddings with a fixed embedding size of only 300 dimensions.

### Hyper-Parameters & Implementation
All implementations are in pytorch with pytorch-lightning unless otherwise stated. The MLP used in all "Two Parts" experiments has 100 hidden units, and is optimized using Adam with a learning rate of 0.001. For FastText we use the gensim implementation and grid search the embedding size in {128, 256, 512, 768}, window size in {6, 8, 10} and skipgram in {TRUE, FALSE}. See subsection 4.4 for details on fine-tuning DistilBERT. For the SetTransformer used in the "Missing Part" and "Document Name" experiments we grid search hidden dimensions in {512, 768}, number of inducing points in {32, 64, 128} and number of attention heads in {4, 8}. In all cases we found greater stability in training without layer norm, so it is not used in any of our models. We train for a maximum of 200 epochs with early stopping on the validation loss with a patience of 40. Again we use the Adam optimizer with a learning rate of 0.001. Any hyper-parameters for any method not detailed above are set to defaults.

## 5 Results and Discussion
All results show the mean test accuracy with standard deviations after 5 trials on the held-out test set unless otherwise stated. In all cases 'Random' refers to the expected value of a random classifier, 'DistilBERT' refers to the pre-trained version, and 'DistilBERT-FT' refers to the same pre-trained model with additional fine-tuning (see subsection 4.4) on the cleaned version of the ABC corpus which we contribute with this work.

### Two Parts
Table 3 shows the mean test accuracies of the "Two Parts" experiment. We observe that the traditional BOW approaches both score no better than random, indicating that simple statistics of repeated words are not sufficient to solve this task with any level of success. We also note that DistilBERT (without any fine-tuning or task-specific training) outperforms both the pre-trained TechNet embeddings (trained on mechanical and engineering design patents) and the FastText embeddings which are trained specifically on the ABC data with a subword tokenizer. This supports our hypothesis that general-purpose pre-trained LLMs contain valuable domain-specific mechanical engineering knowledge that can be leveraged in downstream tasks. Finally, we see that fine-tuning the language model on the ABC string data further improves the performance. While the gain is small, it has statistical significance (with a \(p\)-value of less than 1%), and as such it shows that the string data contains additional information which the language model can learn to exploit. This may include domain-specific vocabulary, brand names, and part numbers, not present in the original Wikipedia and BookCorpus training set. More training data of this kind might enable further improvement in results, leading to a better understanding of parts that co-occur in mechanical assemblies. Note, due to the close proximity of the top scores, we increased the number of trials to 100 in order to collect enough evidence to interpret any differences.
For the other models with already much larger gaps, this was not necessary. Also, due to the complete failure of the BOW approaches, we do not consider these in further experiments. ### Missing Part Table 4 shows the comparison of the mean test classification accuracies for each language model at identifying the correct missing part from a batch of 512 candidates. Here TechNet performs very poorly, which is likely due to its limitation of whole-word tokens. Whereas in the "Two Parts" task, the presence of out-of-vocab (OOV) tokens may have provided a clue to whether or not two parts came from the same assembly, in the more difficult setting here the semantics alone captured by TechNet word embeddings are not enough to predict the missing part with useful accuracy. FastText (fully trained on the ABC text train set) performs much better, however the pre-trained LLM DistilBERT achieves better performance despite never having seen the ABC data before. Again this confirms our hypothesis that general-purpose LLMs contain knowledge useful for engineering and CAD-related tasks. In this particular experiment we note that fine-tuning has a more substantial improvement in the predictions than in the "Two Parts" case. Recall that the fine-tuning was not done directly on this task, but through a masked language modelling task on the generated corpus (see subsection 4.4). Thus, here we demonstrate that fine-tuning on the corpus provided with this work allows language models to generalize to new tasks in the domain of CAD and mechanical engineering. \begin{table} \begin{tabular}{l r} \hline \hline Language Model & Test Accuracy (\%) \\ \hline Random & 0.2 \\ TechNet & \(1.8\pm 0.5\) \\ FastText & \(20.8\pm 1.3\) \\ DistilBERT & \(27.2\pm 0.6\) \\ DistilBERT-FT & \(\mathbf{32.1\pm 0.4}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Mean test accuracy for predicting a missing part from the remaining parts in a document. \begin{table} \begin{tabular}{l r} \hline \hline Language Model & Test Accuracy (\%) \\ \hline Random & 50 \\ TF-IDF BOW & \(50.1\pm 2.0\) \\ Frequency BOW & \(49.8\pm 1.8\) \\ TechNet & \(61.6\pm 4.1\) \\ FastText & \(66.6\pm 1.9\) \\ DistilBERT & \(68.0\pm 1.9\) \\ DistilBERT-FT & \(\mathbf{68.9\pm 2.2}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Mean test accuracy at predicting whether or not two parts are from the same document. ### Document Name Table 5 shows the mean test accuracies for identifying the correct document name from a batch of 512 candidates given the document's parts as input. With the exception of TechNet, for all language models this task presented the greatest challenge. This is largely expected, since the downstream model must go beyond recognizing common co-occurring parts to reasoning about part-whole relationships. For TechNet, the improvement seen here (compared to the "Missing Part" experiment) is likely due to its OOV limitations in that we expect to find fewer OOV terms in the document names than the part names. Therefore, given a set of parts with only some OOV parts, predicting a missing part that is also OOV would not be possible, however predicting the name could be more achievable. Again the pre-trained DistilBERT outperforms both TechNet and FastText by a large margin, thus providing further support for our hypothesis that the "world-knowledge" captured by such LLMs can be effectively leveraged for downstream CAD and mechanical engineering-related tasks. 
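For reference, the accuracies in Tables 4 and 5 come from nearest-candidate retrieval in embedding space, as described in subsection 4.3.2: the set encoder's predicted embedding is matched against the 512 candidate embeddings by cosine similarity. A minimal sketch of this evaluation step (assuming PyTorch; the function and variable names are illustrative and not taken from the released code):

```python
import torch
import torch.nn.functional as F

def retrieve_candidate(predicted: torch.Tensor, candidates: torch.Tensor) -> int:
    """Return the index of the candidate embedding closest to the prediction.

    predicted:  (d,)   set-encoder output for one document.
    candidates: (B, d) embeddings of the B (here 512) candidate name strings.
    """
    sims = F.cosine_similarity(predicted.unsqueeze(0), candidates, dim=1)  # (B,)
    return int(sims.argmax())

# A prediction counts as correct when the retrieved index matches the held-out
# target, so a uniformly random classifier scores roughly 1/512, i.e. about 0.2%.
```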
### Qualitative Analysis
Considering the "Missing Part" predictions made by each of the models, we observe many textual correlations in targets and input parts. It is very common in the dataset for parts from the same document to share words or sub-words which could give the models a way to cheat at identifying the correct candidate. We believe FastText (trained purely on the ABC data with a sub-word tokenizer) provides a baseline for this, from which we can compare the language models and quantify any gains. To illustrate this, Table 6 shows examples that both FastText and DistilBERT-FT were able to correctly predict. Here we see many common words or sub-words or a similar pattern of numbers. Such predictions could still be useful - i.e. knowing we likely need a "front right wheel" given a "front left wheel", etc. has value and the text provides a much simpler indicator of this than geometry alone. However, it is not possible to determine here whether the models really understand the relationships between the parts or are simply looking for common words/sub-words, which would not generalize well to a setting in which we have a much greater number of candidate parts. Table 7 shows examples that DistilBERT-FT was able to correctly predict where FastText failed. In contrast, here we see examples where the semantic understanding of part-part relations is necessary in order to make the correct prediction, and confirm that the DistilBERT-FT model is able to make predictions based on more than just text matching.

\begin{table} \begin{tabular}{l l} \hline \hline Missing Part & Input Parts \\ \hline 20.1134 30 r 30 & 20.1125 30 r 90, 20.1136 30 r 60 \\ motor plate & sensor plate, motor plate, slew bearing plate \\ joist 9 & concrete, house, roof, patio roof, joist 14, joist 13, joist 12, joist 11, joist 10, joist 8, joist 7, joist 6, joist 5, joist 4, joist 3, joist 2, joist 1, cross member, post 5, post 4, post 3, post 2, post 1 \\ front right wheel & back left wheel, front left wheel, back right wheel, back axel, front axel, large base, back wheels, front wheels, base \\ support leg - upper & support leg - lower, central leg \\ pin-3 & base part, pin-2, pin-4, pin-5, link-1, pin-1, shaft \\ starboard outer & port outer, port inner, base, starboard inner, horn, frame \\ lower swivel bracket & upper swivel bracket, board mount plate \\ gear 2 & gear 4, gear 3, gear 1, construction \\ syringe piston & syringe body \\ \hline \hline \end{tabular} \end{table} Table 6: Missing part (part-part) predictions DistilBERT-FT and FastText both made correctly.

\begin{table} \begin{tabular}{l r} \hline \hline Language Model & Test Accuracy (\%) \\ \hline Random & 0.2 \\ TechNet & \(3.3\pm 0.5\) \\ FastText & \(11.3\pm 0.6\) \\ DistilBERT & \(18.0\pm 0.7\) \\ DistilBERT-FT & \(\mathbf{19.3\pm 1.9}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Mean test accuracy for predicting a document name given the set of part names.
\begin{table} \begin{tabular}{l l} \hline \hline Missing Part & Input Parts \\ \hline bearing spacer & gt2 pulley 16, right motor, left motor, m5 washer, f695, nema17 \\ plate & switch, bar, bezel, base, mount \\ arm 1 & bent finger 3, bent finger 2, bent finger 1, finger 1, finger bent 1, body, shoulder 1, finger flat \\ handle & fork, wheel, clamp, grip, brake, deck \\ bearing plug & 52 mm wheel, mold base \\ rook & queen, castle, pawn, knight \\ tail & frontsc wings, t 25 aelius \\ base holder & clamp, umbrella pole \\ shelf & turntable, wall \\ lid & case \\ cover & holder, shell \\ base & dowel, gear \\ \hline \hline \end{tabular} \end{table} Table 7: Missing part (part-part) predictions DistilBERT-FT made correctly where FastText failed. \begin{table} \begin{tabular}{l l} \hline \hline Document Name & Input Parts \\ \hline miniatures game box & miniatures game tray, cover, miniatures game box \\ t-25 helios base delta tail & tail, delta wings, t 25 helios base \\ ds spinner ball 15mm & skull pin 8.1mm, spinner top, spinner bottom, spinner roundcurved top, spinner roundcurved bottom, spinner ninja duo top, spinner curved top, spinner curved bottom, spinner dragon top, spinner dragon bottom, unicorn pin 8.1 r, ninjago spinner bottom, minjago spinner top, spinner wing top, spinner wing bottom, spinner wing duo top, spinner wing duo bottom, ninjago pin, unicorn pin 8.1 l \\ t-25r sorin base ogoc wings & ogoc wings, left thruster, right thruster, sorin base \\ bluetooth speaker & speaker small, speaker large \\ ctc e3d titan mount & pcb mount, sensor mount, bearing mount, e3d blower, carriage, titan mount \\ din 912 m3 screw & din 912 m3x14, din 912 m3x12, din 912 m3x10, din 912 m3x8, din 912 m3x6, din 912 m3x4 \\ chair tm8 blocks2.step & chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2, chair tm8 blocks2 \\ 2020 clip lock mount for led lamp & lock, mount \\ t-25 aelius base starfighter tail fixing & tail, starfighter wings, t 25 aelius \\ \hline \hline \end{tabular} \end{table} Table 8: Document name predictions that both DistilBERT-FT and FastText made correctly which include many non-semantic clues in the text. We see a similar situation looking at the "Document Name" predictions. Looking at examples both FastText and DistilBERT-FT were able to correctly predict (Table 8), we again see many common words in the document names compared with the input parts. However, when considering only examples that DistilBERT-FT predicted correctly where FastText failed (Table 9) we again see examples that could not be correctly predicted without part-whole semantic knowledge. ### Limitations The performance of any LLM (or in fact any model) operating on text alone will be limited by how well this text describes the geometry or function of the part(s) it is associated with. As shown in Table 1 and 2, the ABC dataset has many ambiguous strings that bear little or no relation to the geometry or function of the parts they describe, and we expect that in industrial engineering data we may find a similar situation. 
While we leave multi-modal extension to further work, we hypothesize that the inclusion of geometry will enable models to overcome this inherent limitation of LLMs and draw more value than from each modality alone, thus offering a means to better represent and reason with part affordances. We therefore consider multi-modal investigation with geometry to be highly justified. Additionally, we note our finding that the text data contained in CAD document parts is often highly correlated in ways that make it easy for language-based models to find trivial solutions, i.e. predicting a missing "pin 3" when presented with "pin 1", "pin 2" and "pin 4". As discussed in subsection 5.4 we believe using a FastText baseline enabled us to better understand any semantic gains over these trivial solutions and we note the importance of such considerations for any future work using CAD text data. Given our desire to make quantitative evaluations, we opted for a classification style setup when predicting a missing part or document name. While this makes it easier to compare different methods and draw meaningful conclusions, it limits our approach to settings where a set of plausible candidate targets can be provided. For many use cases this may be \begin{table} \begin{tabular}{l l} \hline \hline Document Name & Input Parts \\ \hline suspension & tire 2, wheel 2, upright 2, tire 1, wheel 1, lower a-arm 2, upper a-arm 2, upper a-arm 1, upright 1, lower a-arm mount, lower a-arm 1 \\ service counter & top surface left b, bottom drawer unit, side skirt left, front skirt full, side skirt right, work table surface b, upper storage b-right, 4 inch top writing surface, top surface right b, middle deck left panel 2, upper storage b-left, upper side skirt right, top trim, top front skirt b \\ chess-game & bishop, knight, base, queen, pawn-top, pawn-base, king, rook \\ si15024 org2 radio-controlled helicopter & tail propeller screw, propeller screw, tail rotor gear base axis, tail rotor gear base gear, base, skid 2, tail foil screw, tail fin, tail propeller 2, tail propeller 1, tail rotor base, tail rotor axis, tail rotor gear, tail rotor gear t, tail motor base gear t, tail motor base p, tail motor axis, motor, motor gear, propeller pole gear, motor pole, propeller2, propeller1, propeller base, canopy, pole \\ gimbal & right servo, right gear, left servo, center t, top gear, left gear, frame \\ bullet feeder & nose guide spacer, collator, nose guide, flip ramp, collator side wall, tube attachment, base mounting bracket, motor \\ ball bearing & inner ring, outer ring \\ brushless rc car starter document & tg-3 receiver, m3x14 bolt, m3x12 bolt, 625 bearing, shaft, upper face, main \\ & casing, lower face, motor mount, plastic input, heatsink, base pcb, plastic \\ & base, m3x25 bolt, servo mount screw, pla, m3 nut, m5x40 bolt standard head, \\ & m3x8 bolt, m3x20 bolt, m3x35 bolt, m3x16 bolt, base plate, 9g micro servo, \\ & m5 nut, rear drive gear, rear axel, 500mah 3s lipo, m3x10 bolt, m3x30 bolt \\ robot arm & pot gear, upper clevis, lower clevis, rod knuckle, elbow idler, tube socket, lifter \\ & arm rotor, base tube rotor, riser base, fore clevis, rear clevis, base support r, base support l, can, gripper frame, actuator arm, servo shaft, servo, right \\ & gripper, left gripper, right frame, base bottom, left frame, main gear \\ robot & hbridge holder, arduino holder, shaft, fr wheel, wheel holder, arduino uno, \\ & h-bridge, battery, sensor-holder, pin, sensor, base, bracket, motor, wheel, pin \\ \hline \hline 
\end{tabular} \end{table} Table 9: Semantically-based document name predictions DistilBERT-FT got correct that FastText did not. appropriate, e.g. recommendations of fastener or other commercially available parts, however, extensions of this work may seek to provide generated open-ended natural language outputs for the predictions. ## 6 Conclusion In this work we demonstrate that pre-trained LLMs contain substantial mechanical engineering knowledge, outperforming numerous benchmarks on a number of self-supervised tasks by a significant margin. We show that traditional statistical word-count approaches are insufficient to handle the noisy natural language contained in CAD files, and surprisingly, we observe that a pre-trained language model performs better on these tasks than TechNet, which is specifically trained on mechanical engineering patent data, and in some cases improves by an order of magnitude. We also demonstrate that fine-tuning the model on CAD data, using a standard masked token predication language modelling task, leads to improvement in performance on all of the downstream self-supervised tasks tested. This confirms our hypothesis that unstructured natural language text information inside CAD files can be utilized to improve language models understanding of mechanical engineering concepts. We publicly release the cleaned text data corpus extracted from the ABC dataset, the first CAD text corpus of its kind, along with all of our code in the hope that this will inspire future research and can be used for bench-marking future tasks. In addition to the contributions discussed above, this work also identifies key limitations to using LLMs with CAD text data alone. For example, the large number of trivial solutions enabled by highly correlated text in part names within documents, for which we propose that FastText with a subword tokenizer can provide a baseline. We also provide an insight into the proportion of part and document names within the dataset that actually provide useful semantic or geometric information creating a strong motivation for multi-modal approaches including geometry. In future work, we plan to investigate how embeddings from multi-modal text-CAD training can allow better semantic clustering of geometries into groups with similar affordance. Moreover, the findings of our work could also be leveraged to improve and support generative workflows in CAD software. ## Acknowledgements Thanks go to the authors of TechNet, in particular Serhad Sarica and Jianxi Luo, for their support with the TechNet pre-processing pipeline, and to Pradeep Kumar Jayaraman for his advice and guidance on transformers.
2310.19816
Benchmarking GPUs on SVBRDF Extractor Model
With the maturity of deep learning, its use is emerging in every field. Also, as different types of GPUs are becoming more available in the markets, it creates a difficult decision for users. How can users select GPUs to achieve optimal performance for a specific task? Analysis of GPU architecture is well studied, but existing works that benchmark GPUs do not study tasks for networks with significantly larger input. In this work, we tried to differentiate the performance of different GPUs on neural network models that operate on bigger input images (256x256).
Narayan Kandel, Melanie Lambert
2023-10-19T17:09:06Z
http://arxiv.org/abs/2310.19816v1
# Benchmarking GPUs on SVBRDF Extractor Model

###### Abstract
With the maturity of deep learning, its use is emerging in every field. Also, as different types of GPUs are becoming more available in the markets, it creates a difficult decision for users. How can users select GPUs to achieve optimal performance for a specific task? Analysis of GPU architecture is well studied, but existing works that benchmark GPUs do not study tasks for networks with significantly larger input. In this work, we tried to differentiate the performance of different GPUs on neural network models that operate on bigger input images (256x256).

**Keywords**: GPUs, Benchmark, Neural Network, Rendering

## 1 Introduction
The growth of research in deep learning is causing it to be relevant in numerous different fields. Deep learning requires computationally-intensive training and utilizes high-performance computing (HPC) to handle large amounts of data. Graphics processing units (GPUs) are perfect for HPC because of their abundance of cores and potential for memory. Leading manufacturers in the GPU industry include NVIDIA and AMD, whose GPUs are used in top-performing supercomputers. As of 2021, in the Top500 list [5], the world's fastest supercomputer is Fugaku, with 7,630,848 cores, using 27,000 NVIDIA Tesla GPUs linked to 9,000 IBM Power9 CPUs. NVIDIA GPUs are used in all the top three supercomputers. The fifth highest is Perlmutter, which uses AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Fugaku achieves 442 Pflop/s, and Perlmutter achieves 64.6 Pflop/s. We will be focusing on NVIDIA GPUs in our experiments because of their availability. As varying GPU architectures become more available, it can be a challenge to get optimal performance out of a GPU. In order to approach the problem of getting optimal value, we present a benchmarking report. We compare the performance of an extensive network with sizable input/output features on a CPU and different GPUs. In this work, we do a comprehensive analysis regarding the performance of a neural network with significantly large input/output features. This gives a better understanding of resource usage and presents techniques for optimal performance in the visual computing field. Additionally, this comparative report on power usage, memory usage, and time required between GPU machines provides valuable information when making decisions regarding hardware selection and optimum utilization. We organize this paper as follows. The literature review is discussed in section 2, and in section 3, we outline our methodology. In section 4, we discuss and analyze results, comparing different features of each GPU and their impact on performance. Finally, we draw our conclusion in section 5.

## 2 Literature Review
Analysis of GPU architecture is a well-studied problem, and some existing works analyze the architectural differences between GPUs and their impact on performance (Zhang et al. [2], Wang et al. [3]). These approaches generally test on tasks such as matrix multiplication and clustering jobs. However, there is no known work benchmarking for networks with bigger images such as 256x256 in our case. Zhang et al. [2] provide a comparison of older AMD and NVIDIA GPU architectures. Specifically, they provide an in-depth study to distinguish specific differences in NVIDIA's Fermi and ATI's Cypress architecture and how they impact performance. They also compare energy efficiencies, an area of great interest in high-performance computing.
They find that Cypress is more energy-efficient if the program is a perfect fit for the processors; otherwise, the Fermi is the more energy-efficient choice. Additionally, their conclusion shows that both architectures work better for different tasks, as they have differing amounts of cores and memory subsystems. While this study is on older architectures, it provides an in-depth analysis and comparison of two GPU architectures. A more recent work by Wang et al. [3] performs an extensive benchmarking analysis of a deep neural network on TPU, GPU, and CPU architectures. They identify a memory bandwidth bottleneck of the TPU architecture and conclude that improving memory access for memory-bound operations is necessary for performance improvement. They also compare the hardware and software of the varying platforms. While existing research studies different architectures, their efficiency, and their influence on performance, there is no known work benchmarking for networks with bigger images such as 256x256 in our case.

## 3 Methodology
In order to do GPU benchmarking, we followed six steps:
1. Selecting the problem
2. Selecting machines for benchmarking
3. Selecting benchmarking tools
4. Performing neural network training
5. Collecting and analyzing data
6. Drawing conclusions

The first step is selecting the problem. In this work, our objective is benchmarking the training performance of the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) Extractor Model. This model was proposed by V. Deschaintre et al. [1]. The basic block diagram of the model is shown in figure 2. This model has a total of 80,505,488 parameters and takes 256x256 RGB images as an input. One of the input samples is shown in figure 3. The output of the network is of size 256x256x9 and is partitioned into four parts: Surface Normal, Roughness, Specular Albedo, and Diffuse Albedo. One sample of output is shown in figure 4. We used a total of 75 GB of training data, consisting of approximately 200k images.

The second step is selecting machines for benchmarking. Based on the availability and usability of machines on the Palmetto Cluster, we selected single-GPU machines of type K40, P100, V100, and A100, and a single-node 24-core CPU machine. The basic architectural differences between the selected GPUs are shown in Table 1.

For the third step, we selected benchmarking tools. We utilized the Weights & Biases API [4] to record performance-related data during our training. Our selection is based on two reasons: firstly, the API has a simple configuration, and secondly, all data can be easily visualized using the provided web platform.

The fourth step is training. We train a neural network model for one epoch on each machine. During the training process, we collected GPU utilization, GPU memory usage, power usage, total completion time, images processed per second, etc., and stored those data on a Weights & Biases server. In the fifth and sixth steps, we analyzed those data and used them for benchmarking. We present some of the analysis in the results section. We used the TensorFlow framework to build the neural network and the Palmetto cluster to train the model. During the neural network training process, we used a learning rate of 0.00002 and 84 test images for validating the model.

## 4 Evaluation and Results
In order to do a comparison between different configurations, we ran a number of experiments and recorded performance-related data.
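The performance data referred to here were recorded with the Weights & Biases API described in step three above. A minimal sketch of how such a run can be instrumented (assuming the `wandb` Python package and its Keras callback; `build_svbrdf_extractor()` and `train_ds` are hypothetical placeholders for the actual model and data pipeline, and the loss choice is an assumption, not the authors' setup):

```python
import tensorflow as tf
import wandb
from wandb.keras import WandbCallback

wandb.init(project="svbrdf-gpu-benchmark",
           config={"batch_size": 5, "learning_rate": 2e-5})

model = build_svbrdf_extractor()  # hypothetical builder: 256x256x3 input -> 256x256x9 output
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5), loss="mse")

# The Keras callback logs training metrics each epoch, while the wandb background
# process samples system metrics such as GPU utilization, GPU memory and power draw.
model.fit(train_ds, epochs=1, callbacks=[WandbCallback()])
```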
We break our experiment into four parts. In the first part, we compared the performance of the K40 GPU with two different batch sizes. In the second part, we trained the network on different GPUs with batch size 5 and compared their performance. In the third part, we repeated the analysis done in part two with batch size 25. And in the fourth part, we trained the network on a CPU machine and compared its performance with the GPU (K40) machine. The network's output is evaluated by comparing it with the output the authors have provided.

\begin{table} \begin{tabular}{|l||l|l|l|l|} \hline & K40 & P100 & V100 & A100 \\ \hline \hline Architecture & Kepler & Pascal & Volta & Ampere \\ \hline CUDA Cores & 2880 & 3584 & 5120 & 6912 \\ \hline ROPs & 48 & 96 & 128 & - \\ \hline Base Clock (MHz) & 745 & 1189 (1324) & 1246 (1380) & 765 (1410) \\ \hline Power (watt) & 245 & 250 & 250 & 400 \\ \hline Memory Size (MB) & 12288 & 12288 & 16384 & 40960 \\ \hline Memory Clock (MHz) & 1502 & 715 & 879 & 1215 \\ \hline Memory Bandwidth (GB/s) & & & & \\ \hline Price (\$) & 500 & 5900 & 6900 & 10000 \\ \hline Release & 2013 & 2016 & 2017 & 2020 \\ \hline \end{tabular} \end{table} Table 1: GPU Internal Comparison. ([6], [7])

Figure 1: Block Diagram
Figure 3: Input Image
Figure 4: Output as an Image

Figure 5 shows the total time taken to train the neural network for one epoch. In the figure, the longest time corresponds to the CPU machine with batch size 5 and the shortest time corresponds to the A100 GPU machine with batch size 25. The time trend seen in figure 5 matches the computing power of the machines. Among different batch sizes on the same machine, it is seen that training with a higher batch size reduces the total time requirement by a small margin.

### Using different batch sizes on K40 GPU
In the first part, we train networks with batch sizes 5 and 25 on the K40 GPU. The GPU memory allocation, process GPU time spent accessing memory, and change in GPU power usage are shown in figures 8, 6, and 7, respectively. From these figures, it is observed that processing time spent accessing memory increased with increasing batch size, though GPU power usage remained constant. In addition, total GPU memory allocation increased with an increase in batch size. In figure 8, we can see that GPU memory allocation reached near 100% with batch size 25. So, based on the three plots, we can say that we are limited from utilizing full GPU power by GPU memory.

### Performance Comparison between GPUs on batch size 25
In the third part, we trained the network with batch size 25 for one epoch. The pattern in the number of images processed per second, shown in figure 14, is similar to the pattern for batch size 5. The machine with the A100 GPU has the highest image rate and the machine with the K40 GPU has the lowest image rate among the four GPU machines. The process memory allocation increased with batch size and reached near 100% for batch size 25. The GPU utilization plot shown in figure 14 shows that the GPUs aren't fully utilized and there is a possibility of higher performance with an increase in data size. But, as GPU memory already reached near 100%, we are unable to increase the batch size and hence it acts as one of the walls of the system. Figure 16 shows the memory access time pattern among the four GPUs. As expected, the machine with the A100 GPU has lower memory access time and the machine with the K40 GPU has higher memory access time.
As an additional observation, Figure 14 shows that the A100 and V100 GPUs have a much higher images-per-second rate than the K40 GPU. This is expected because the A100 and V100 have considerably more render output units (ROPs) than the K40: the A100 has 160 ROPs and the V100 has 128, whereas the K40 only has 48 ROPs. This allows the A100 and V100 to process more images per second, increasing performance. As shown in Figure 5, the K40 also takes longer than the other GPUs because it has the fewest cores of all the GPUs. In addition, the CPU has limited parallelization capability in comparison to the GPUs. As a result, image throughput is better on the GPUs, and because the CPU cannot exploit as much parallelism, it produces fewer images per second. Therefore the CPU does not provide an efficient solution for training neural networks on larger images.

## Acknowledgments

The authors would like to thank Professor Dr. Rong Ge for her constant feedback regarding the project. In addition, the authors would like to thank the Palmetto support staff for their guidance with the cluster setup and the Palmetto Cluster for providing the computing resources used in this project.
2307.16132
Lengths of modules over short Artin local rings
Let $(A,\mathfrak{m})$ be a short Artin local ring (i.e., $\mathfrak{m}^3 = 0$ and $\mathfrak{m}^2 \neq 0$). Assume $A$ is not a hypersurface ring. We show there exists $c_A \geq 2$ such that if $M$ is any finitely generated module with bounded betti-numbers then $c_A $ divides $\ell(M)$, the length of $M$. If $A$ is not a complete intersection then there exists $b_A \geq 2$ such that if $M$ is any module with $curv(M) < \ curv(k)$ then $b_A$ divides $\ell(\Omega^i_A(M))$ for all $i \geq 1$ (here $\Omega^i_A(M)$ denotes the $i^{th}$-syzygy of $M$).
Tony J. Puthenpurakal
2023-07-30T04:57:42Z
http://arxiv.org/abs/2307.16132v1
# Lengths of modules over short Artin local rings ###### Abstract. Let \((A,\mathfrak{m})\) be a short Artin local ring (i.e., \(\mathfrak{m}^{3}=0\) and \(\mathfrak{m}^{2}\neq 0\)). Assume \(A\) is not a hypersurface ring. We show there exists \(c_{A}\geq 2\) such that if \(M\) is any finitely generated module with bounded betti-numbers then \(c_{A}\) divides \(\ell(M)\), the length of \(M\). If \(A\) is not a complete intersection then there exists \(b_{A}\geq 2\) such that if \(M\) is any module with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) then \(b_{A}\) divides \(\ell(\Omega^{i}_{A}(M))\) for all \(i\geq 1\) (here \(\Omega^{i}_{A}(M)\) denotes the \(i^{th}\)-syzygy of \(M\)). Key words and phrases: Grothendieck group, Betti-numbers, minimal resolutions, multiplicity, stable category, derived category 2020 Mathematics Subject Classification: Primary 13D02, 13D09; Secondary 13D15, 13A30 ## 1. introduction Let \((A,\mathfrak{m})\) be an Artin local commutative ring with residue field \(k\). All modules under discussion are finitely generated. Let \(M\) be an \(A\)-module. Let \(\beta_{n}(M)=\ell(\operatorname{Tor}^{A}_{n}(M,k))\) be the \(n^{th}\)-betti number of \(M\) (here \(\ell(-)\) denotes length). In general the sequence \(\{\beta_{n}(M)\}_{n\geq 0}\) might be unbounded. It is thus interesting to study non-free modules with bounded betti numbers. Let \(P_{M}(z)=\sum_{n\geq 0}\beta_{n}(M)z^{n}\) be the Poincare series of \(M\). If \(M=k\) then we by abuse of notation call it the Poincare series of \(A\). If \(A\) is a hypersurface ring i.e., \(A=Q/(f)\) where \((Q,\mathfrak{n})\) is regular and \(f\in\mathfrak{n}^{2}\), then it is well-known that every module \(M\) has bounded betti-numbers (in fact it is eventually periodic), see [12, 6.1]. So we will assume that \(A\) is not a hypersurface ring. If \(\mathfrak{m}^{2}=0\) and \(\ell(\mathfrak{m})\geq 2\) then every non-free module \(M\) has the property that its first syzygy is \(\cong k^{r}\) for some \(r\geq 1\). Thus \(M\) has unbounded betti-numbers, for instance see 10.1. Thus the simplest non-trivial case is when \(A\) is short, i.e., \(\mathfrak{m}^{3}=0\) and \(\mathfrak{m}^{2}\neq 0\). Note that such rings can exhibit sufficiently general behaviour. For instance the Poincare series of any local ring \(R\) is rationally related to the Poincare series of a short Artin local ring, see [2, Theorem 2]. It was conjectured by Eisenbud, [12], that if \(M\) has bounded betti numbers then \(M\) is eventually periodic (with period \(\leq 2\)). This conjecture was disproved by Gasharov and Peeva [13]. In fact they construct short Artin local rings disproving Eisenbud conjecture. They give examples of short Artin rings having periodic module of period any integer \(n\geq 2\). They also gave an example of a short Artin ring having a non-free module \(M\) with bounded betti-numbers such that \(M\) is not eventually periodic. See [13, 3.4] for these examples. Furthermore short Artin rings have been source of many examples/counterexamples in local algebra, see [4], [20], [15] and [11]. **1.1**.: Our first result is **Theorem 1.2**.: _Let \((A,\mathfrak{m})\) be a short Artin local ring. Assume \(A\) is not a hypersurface ring. Then there exists \(c_{A}\geq 2\) such that if \(M\) is any finitely generated module with bounded betti-numbers then \(c_{A}\) divides \(\ell(M)\)._ The proof of Theorem 1.2 splits into two cases. The first case is when \(A\) is a complete intersection of length four. 
For complete intersections there is a notion of support varieties, see [5]. We note that if \(k=A/\mathfrak{m}\) is algebraically closed then for each point \(\{a\}\) in \(\mathbb{P}^{1}_{k}\) there exists a module \(M\) with bounded betti numbers and \(Var(M)=\{a\}\). Furthermore if \(A\) is equicharacteristic then for each \(a\in\mathbb{P}^{1}_{k}\) there exist indecomposable modules \(\{M_{n}\}_{n\geq 1}\) with \(Var(M_{n})=\{a\}\) and \(\ell(M_{n})\geq n\), see [19, 1.6] (the proof of [19, 1.6] also essentially works in mixed characteristic (for the Artinian case); one has to use [3, VI.1.4] and arguments similar to the proof of [19, 1.6]). The second case is when \(A\) is not a complete intersection. In this case if \(A\) has a non-free module with bounded betti numbers then by [16, Theorem B] we have \(\ell(\mathfrak{m}/\mathfrak{m}^{2})=h+1\), \(\ell(\mathfrak{m}^{2})=h\geq 2\), the socle of \(A\) is \(\mathfrak{m}^{2}\) and its Poincare series is \[P_{A}(z)=\frac{1}{1-(h+1)z+hz^{2}}=\frac{1}{(1-z)(1-hz)}.\] We note that the rings in the examples in [13], [15] and [11] are of this form. **1.3**.: There is no general method to construct modules with bounded betti-numbers. If \(A=R/(f)\) where \((R,\mathfrak{n})\) is local of dimension one and \(f\in\mathfrak{n}^{2}\) is regular then let \(\mathcal{P}_{R}\) be the full subcategory of \(A\)-modules \(M\) with \(\operatorname{projdim}_{R}M\) finite. Such modules, if not free, have a periodic resolution (over \(A\)) with period \(\leq 2\). There is essentially a unique method to construct non-free modules over \(A\) having finite projective dimension over \(R\). This is essentially due to Buchweitz et al, see [9, 2.3]. Also see the paper [8, 1.2] by Brennan et al. **1.4**.: We say an Artin local ring \(R\) has property \(\mathcal{B}\) if there exists \(c\geq 2\) such that \(c\) divides \(\ell(M)\) for every \(R\)-module \(M\) with bounded betti-numbers. By Theorem 1.2, short Artin local rings satisfy \(\mathcal{B}\). The next result shows that property \(\mathcal{B}\) is preserved under certain flat extensions. **Proposition 1.5**.: _Let \((R,\mathfrak{m})\to(S,\mathfrak{n})\) be an extension of Artin local rings such that \(S\) is a finite free \(R\)-module. Assume the induced extension of residue fields \(R/\mathfrak{m}\to S/\mathfrak{n}\) is an isomorphism. Then if \(R\) has property \(\mathcal{B}\) then so does \(S\)._ Proposition 1.5 produces bountiful examples of Artin local rings which satisfy \(\mathcal{B}\). See section 7. The following is a consequence of Theorem 1.2 and Proposition 1.5: **Corollary 1.6**.: _Let \(A=k[X_{1},\ldots,X_{d}]/(X_{1}^{a_{1}},\ldots,X_{d}^{a_{d}})\) where \(d\geq 2\) and \(a_{i}\geq 2\). If two among the \(a_{i}\) are even then if \(M\) is any \(A\)-module with bounded betti-numbers then its length is even._ **1.7**.: For our next result let us recall the notion of the curvature of a module: \[\operatorname{curv}(M)=\limsup_{n\to\infty}\sqrt[n]{\beta_{n}(M)}.\] It is known that \(\operatorname{curv}(M)\leq\operatorname{curv}(k)<\infty\) (see [4, 4.2.4]). Furthermore if \(A\) is not a complete intersection then \(\operatorname{curv}(k)>1\), see [4, 8.2.2]. Our next result is **Theorem 1.8**.: _Let \((A,\mathfrak{m})\) be a short Artin local ring. Assume \(A\) is not a complete intersection ring.
There exists \(b_{A}\geq 2\) such that if \(M\) is any module with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) then \(b_{A}\) divides \(\ell(\Omega^{i}_{A}(M))\) for all \(i\geq 1\)._ By [16, Theorem B] if a short ring has a module \(M\) with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) then its Poincare series is \[P_{A}(z)=\frac{1}{1-dz+az^{2}}=\frac{1}{(1-r_{1}z)(1-r_{2}z)}.\] where \(d=\ell(\mathfrak{m}/\mathfrak{m}^{2})\), \(a=\ell(\mathfrak{m}^{2})\), socle of \(A\) is \(\mathfrak{m}^{2}\), \(r_{1}\) and \(r_{2}\) are positive integers with \(r_{1}<r_{2}\) and \(\operatorname{curv}(M)=r_{1}\) and \(\operatorname{curv}(k)=r_{2}\). Furthermore \(a\geq d-1\). **1.9**.: _A result for a higher dimensional ring._ Let \((R,\mathfrak{m})\) be a Cohen-Macaulay local ring of dimension \(d\geq 1\). For convenience we will assume that the residue field \(k=A/\mathfrak{m}\) of \(A\) is infinite. Set \(h=\operatorname{embdim}(R)-d\) and let \(e_{0}(R)\) be its multiplicity. Let \(\operatorname{gr}R=\bigoplus_{n\geq 0}\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) be the associated graded ring of \(R\). Recall \(R\) is said to have minimal multiplicity if \(e_{0}(R)=h+1\). We assume \(R\) is not regular. It is well-known that if \(R\) has minimal multiplicity then \(\operatorname{gr}R\) is Cohen-Macaulay, see [22]. Let \(g\in\mathfrak{m}^{2}\setminus\mathfrak{m}^{3}\) be such that the initial for \(g^{*}\) of \(g\) in \(\operatorname{gr}R\) is a non-zero divisor. Let \[\mathcal{B}_{g}=\{M\mid M\ \text{ is a MCM }R/(g)\text{-module such that }\operatorname{projdim}_{R}M<\infty\}.\] Here MCM stands for maximal Cohen-Macaulay. Also set \[\mathcal{B}^{(2)}=\bigcup_{\deg g^{*}=2\atop g^{*}\operatorname{nzd}\text{ in }\operatorname{gr}R}\mathcal{B}_{g}.\] We prove **Theorem 1.10**.: _(with hypotheses as above). If \(M\in\mathcal{B}^{(2)}\) then \(h+1\) divides \(e_{0}(M)\)._ **1.11**.: _Technique used to prove our results:_ The techniques to prove our results to the best of our knowledge have not been used earlier in commutative algebra. We first discuss proof of Theorem 1.2 when \(A\) is a Artin complete intersection of length \(4\). Let \(\underline{\operatorname{CM}}(A)\) denote the stable category of \(A\)-modules and let \(\underline{\operatorname{CM}}^{\leq 1}(A)\) denote the category of modules with bounded betti numbers. Then \(\underline{\operatorname{CM}}^{\leq 1}(A)\) is a thick subcategory of \(\underline{\operatorname{CM}}(A)\) and let \(\mathcal{T}\) denote the Verdier quotient \(\underline{\operatorname{CM}}(A)/\underline{\operatorname{CM}}^{\leq 1}(A)\). We have an exact sequence of Grothendieck groups \[G_{0}(\underline{\operatorname{CM}}(A))\xrightarrow{\xi}G_{0}(A)\to G_{0}( \mathcal{T})\to 0.\] It suffices to show \(\xi\) is NOT surjective. Equivalently we have to prove \(G_{0}(\mathcal{T})\neq 0\). We recall a result due to Thomason [23]. Let \(\mathcal{C}\) be a skeletally small triangulated category. Recall a subcategory \(\mathcal{D}\) is dense in \(\mathcal{C}\) if the smallest thick subcategory of \(\mathcal{C}\) containing \(\mathcal{D}\) is \(\mathcal{C}\) itself. In [23] a one-to one correspondence between dense subcategories of \(\mathcal{C}\) and subgroups of the Grothendieck group \(G_{0}(\mathcal{C})\) is given. In particular if \(G_{0}(\mathcal{C})=0\) then any dense subcategory of \(\mathcal{C}\) is \(\mathcal{C}\) itself. Now \(G_{0}(\underline{\operatorname{CM}}(A))=\mathbb{Z}/4\mathbb{Z}\). 
Let \(\mathcal{A}\) be the dense subcategory of \(\underline{\operatorname{CM}}(A)\) corresponding to the subgroup \(\{0\}\) of \(G_{0}(\underline{\operatorname{CM}}(A))\). By a general construction in 3.1 we construct a dense subcategory \(\mathcal{A}^{*}\) in \(\mathcal{T}\) which is closely related to \(\mathcal{A}\). So if \(G_{0}(\mathcal{T})=0\) then \(\mathcal{A}^{*}=\mathcal{T}\). In particular \(k\in\mathcal{A}^{*}\). We use this fact to get a contradiction, see 4.1. The technique to prove Theorem 1.2 (when \(A\) is not a complete intersection) and Theorem 1.8 is similar. We have to work with \(D^{b}(A)\), the bounded derived category of \(A\), and construct an appropriate thick subcategory \(\mathcal{C}\) of \(D^{b}(A)\). The proofs consist of analyzing \(G_{0}(D^{b}(A)/\mathcal{C})\) and showing that either it is non-zero or, if it is zero, then it still yields some information on \(\ell(M)\) (for suitable \(M\)). To prove Theorem 1.10 we use the techniques used in the proof of Theorem 1.2. We now describe in brief the contents of this paper. In section two we discuss some preliminaries on triangulated categories, Grothendieck groups and Hilbert functions that we need. In section three we describe a dense subcategory of a Verdier quotient of \(\mathcal{C}\) corresponding to a dense subcategory of \(\mathcal{C}\) (here \(\mathcal{C}\) is a triangulated category). In section four we prove Theorem 1.2 when \(A\) is an Artin complete intersection of multiplicity four. In section five we describe a construction which is useful to us. In section six we prove Theorem 1.2. In section seven we prove Proposition 1.5 and give bountiful examples of rings satisfying property \(\mathcal{B}\). In the next section we prove Theorem 1.8. In section nine we prove Theorem 1.10. In section ten we prove some results on ratios of betti-numbers that we need. The results of this section are essentially known. We provide details due to the lack of a suitable reference. In the appendix we calculate a limit which is crucial for us. ## 2. Preliminaries Throughout this paper all rings are Noetherian and all modules considered are finitely generated. ### Triangulated categories: We use [17] for notation on triangulated categories. However we will assume that if \(\mathcal{C}\) is a triangulated category then \(\operatorname{Hom}_{\mathcal{C}}(X,Y)\) is a set for any objects \(X,Y\) of \(\mathcal{C}\). **2.2**.: Let \(\mathcal{C}\) be a triangulated category with shift functor \(\sum\). A full subcategory \(\mathcal{D}\) of \(\mathcal{C}\) is called a _triangulated subcategory_ of \(\mathcal{C}\) if 1. If \(X\in\mathcal{D}\) then \(\sum X,\sum^{-1}X\in\mathcal{D}\). 2. If \(X\to Y\to Z\to\sum X\) is a triangle in \(\mathcal{C}\) and \(X,Y\in\mathcal{D}\) then \(Z\in\mathcal{D}\). 3. If \(X\cong Y\) in \(\mathcal{C}\) and \(Y\in\mathcal{D}\) then \(X\in\mathcal{D}\). **Remark 2.3**.: In some sources a triangulated subcategory is defined to have only properties (1) and (2). Furthermore in this notation, if a triangulated subcategory satisfies (3) then it is called a strict triangulated subcategory. However for us triangulated subcategories are as in 2.2. This is also as in [17]. **2.4**.: A triangulated subcategory \(\mathcal{D}\) of \(\mathcal{C}\) is said to be _thick_ if whenever \(U\oplus V\in\mathcal{D}\) then \(U,V\in\mathcal{D}\). A triangulated subcategory \(\mathcal{D}\) of \(\mathcal{C}\) is _dense_ if for any \(U\in\mathcal{C}\) there exists \(V\in\mathcal{C}\) such that \(U\oplus V\in\mathcal{D}\).
**2.5**.: Let \(\mathcal{C}\) be a skeletally small triangulated category. The _Grothendieck group_\(G_{0}(\mathcal{C})\) is the quotient group of the free abelian group on the set of isomorphism classes of objects of \(\mathcal{C}\) by the Euler relations: \([Y]=[X]+[Z]\) whenever \(X\to Y\to Z\to\sum X\) is a triangle in \(\mathcal{C}\). We always have a triangle \(X\to 0\to\sum X\xrightarrow{1}\sum X\). So we have \([X]+[\sum X]=[0]=0\) in \(G_{0}(\mathcal{C})\). Therefore \([\sum X]=-[X]\) in \(G_{0}(\mathcal{C})\). It follows that _every_ element of \(G_{0}(\mathcal{C})\) is of the form \([X]\) for some \(X\in\mathcal{C}\). **2.6**.: Let \(\mathcal{C}\) be skeletally small and let \(\mathcal{D}\) be a thick subcategory of \(\mathcal{C}\). Set \(\mathcal{T}=\mathcal{C}/\mathcal{D}\) be the Verdier quotient. Then there exists an exact sequence (see [14, VIII, 3.1], also see [24, II.6.4]) \[G_{0}(\mathcal{D})\to G_{0}(\mathcal{C})\to G_{0}(\mathcal{T})\to 0.\] **2.7**.: (with setup as in 2.5). Thomason [23, 2.1] constructs a one-to-one correspondence between dense subcategories of \(\mathcal{C}\) and subgroups of \(G_{0}(\mathcal{C})\) as follows: To \(\mathcal{D}\) a dense subcategory of \(\mathcal{C}\) corresponds the subgroup which is the image of \(G_{0}(\mathcal{D})\) in \(G_{0}(\mathcal{C})\). To \(H\) a subgroup of \(G_{0}(\mathcal{C})\) corresponds the full subcategory \(\mathcal{D}_{H}\) whose objects are those \(X\) in \(\mathcal{C}\) such that \([X]\in H\). **2.8**.: Let \(D^{b}(A)\) be the bounded derived category of a ring \(A\). We index complexes cohomologically. The obvious functor \(\operatorname{mod}(A)\to D^{b}(A)\) yields a group homomorphism \(\phi\colon G_{0}(A)\to G_{0}(D^{b}(A))\). The map \(\phi\) is an isomorphism with inverse \(\psi\colon G_{0}(D^{b}(A))\to G_{0}(A)\) defined by \(\psi(\mathbf{X}_{\bullet})=\sum_{i\in\mathbb{Z}}(-1)^{i}[H^{i}(\mathbf{X}_{ \bullet})]\). **2.9**.: Let \((A,\mathfrak{m})\) be an Artin local ring. Then we have a group isomorphism \(f\colon G_{0}(A)\to\mathbb{Z}\) defined by \(f([M])=\ell(M)\). **2.10**.: Let \((A,\mathfrak{m})\) be Artinian Gorenstein local ring. Let \(\underline{\operatorname{CM}}(A)\) be the stable category of \(A\)-modules. Then \(G_{0}(\underline{\operatorname{CM}}(A))=\mathbb{Z}/\ell(A)\mathbb{Z}\), see [10, 4.4.9] **2.11**.: _Associated graded rings, modules, Hilbert functions, superficial elements and multiplicity._ Let \((A,\mathfrak{m})\) be local. Let \(\operatorname{gr}A=\bigoplus_{n\geq 0}\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) be the _associated graded ring_ of \(A\). We note that \(\operatorname{gr}A\) is a graded Noetherian \(k=A/\mathfrak{m}\)-algebra. If \(a\in A\) is non-zero then \(a\in\mathfrak{m}^{i}\setminus\mathfrak{m}^{i+1}\) for some \(i\). Then \(a^{*}=\) image of \(a\) in \(\mathfrak{m}^{i}/\mathfrak{m}^{i+1}\) (considered as a subset of \(\operatorname{gr}A\)) is called the initial form of \(a\). Let \(M\) be an \(A\)-module. Let \(\operatorname{gr}M=\bigoplus_{n\geq 0}\mathfrak{m}^{n}M/\mathfrak{m}^{n+1}M\) be the _associated grade module_ of \(M\). Note \(\operatorname{gr}M\) is finitely generated as a \(\operatorname{gr}A\)-module. **2.12**.: The function \(H(M,n)=\ell(\mathfrak{m}^{n}M/\mathfrak{m}^{n+1}M)\) for \(n\geq 0\) is called the Hilbert function of \(M\). We assemble it \(H_{M}(z)=\sum_{n\geq 0}H(M,n)z^{n}\), the Hilbert series of \(M\). 
It is well-known that \[H_{M}(z)=\frac{h_{M}(z)}{(1-z)^{\dim M}},\quad\text{where $h_{M}(z)\in\mathbb{Z}[z]$ and $h_{M}(1)\neq 0$.}\] The element \(h_{M}(1)=e_{0}(M)\) is called the multiplicity of \(M\). **2.13**.: If \(a\in\mathfrak{m}^{r}\setminus\mathfrak{m}^{r+1}\) (here \(r\geq 1\)) is such that \(a^{*}\) is \(\operatorname{gr}M\)-regular then \(a\) is \(M\)-regular. Set \(N=M/aM\). Then \(e_{0}(N)=re_{0}(M)\). Furthermore \[h_{N}(z)=h_{M}(z)(1+z+\cdots+z^{r-1}).\] **2.14**.: An element \(x\in\mathfrak{m}\) is said to be \(M\)-superficial if there exist \(c\) and \(n_{0}\) such that for all \(n\geq n_{0}\) we have \((\mathfrak{m}^{n+1}M\colon x)\cap\mathfrak{m}^{c}M=\mathfrak{m}^{n}M\). Superficial elements exist if \(k\) is infinite. A sequence \(x_{1},\ldots,x_{s}\) is called an \(M\)-superficial sequence if \(x_{i}\) is \(M/(x_{1},\ldots,x_{i-1})M\)-superficial for \(i=1,\ldots,s\). **2.15**.: If \(\operatorname{depth}M>0\) then every \(M\)-superficial element \(x\) is \(M\)-regular. Furthermore in this case we have \((\mathfrak{m}^{n+1}M\colon x)=\mathfrak{m}^{n}M\) for \(n\gg 0\). **2.16**.: Let \(x_{1},\ldots,x_{r}\) be an \(M\)-superficial sequence. Then \(\operatorname{depth}\operatorname{gr}M\geq r\) if and only if \(x_{1}^{*},\ldots,x_{r}^{*}\) is \(\operatorname{gr}M\)-regular, see [18, Theorem 8]. **2.17**.: Suppose \(\operatorname{depth}M\geq r\) and \(x_{1},\ldots,x_{r}\) is an \(M\)-superficial sequence. Set \(N=M/(x_{1},\ldots,x_{r})M\). Then \(e_{0}(N)=e_{0}(M)\), see [18, Corollary 11]. ## 3. A subcategory associated to a Verdier quotient In this section \(\mathcal{C}\) is a triangulated category with shift functor [1], \(\mathcal{T}\) is a thick subcategory of \(\mathcal{C}\) and \(\mathcal{D}=\mathcal{C}/\mathcal{T}\) is the Verdier quotient. Let \(\mathcal{A}\) be a dense triangulated subcategory of \(\mathcal{C}\). Consider the full subcategory \(\mathcal{A}^{*}\) of \(\mathcal{D}\) whose objects are \[\{X\mid X\cong Y\text{ in }\mathcal{D}\text{ for some }Y\text{ in }\mathcal{A}\}.\] In this section we prove **Theorem 3.1**.: _(with hypotheses as above) \(\mathcal{A}^{*}\) is a dense triangulated subcategory of \(\mathcal{D}\)._ Proof.: It is clear that if \(X\in\mathcal{A}^{*}\) then \(X[1],X[-1]\in\mathcal{A}^{*}\). It is clear that \(\mathcal{A}^{*}\) is dense. It remains to show \(\mathcal{A}^{*}\) is triangulated. Let \(t^{\prime}\colon X^{\prime}\to Y^{\prime}\to Z^{\prime}\to X^{\prime}[1]\) be a triangle in \(\mathcal{D}\) with \(X^{\prime},Y^{\prime}\in\mathcal{A}^{*}\). Then \(t^{\prime}\) is the image in \(\mathcal{D}\) of a triangle \[t\colon X\xrightarrow{\eta}Y\to Z\to X[1]\quad\text{ in }\mathcal{C}.\] We note \(\phi\colon X\cong\widehat{X}\) in \(\mathcal{D}\) with \(\widehat{X}\in\mathcal{A}\). We have a left fraction where \(\operatorname{cone}(u)\in\mathcal{T}\). As \(\phi\) is an isomorphism we also have \(\operatorname{cone}(f)\in\mathcal{T}\). We have a morphism of triangles. Note that in \(\mathcal{D}\) the above morphism of triangles is an isomorphism. Set \(W=\operatorname{cone}(\eta\circ u)\). Consider the morphism of triangles. Here \(\widetilde{s}\) is a rotation of \(s\). Note \(r\) is isomorphic to \(\widetilde{s}\) in \(\mathcal{D}\). Set \(L=\operatorname{cone}(f\circ v)\). Rotating \(r\) we get a triangle \[\widetilde{r}\colon\widehat{X}\xrightarrow{\delta}L\to W\to\widehat{X}[1]\] with \(\widetilde{r}\cong t\) in \(\mathcal{D}\). However note that \(\widehat{X}\in\mathcal{A}\).
We have \(\psi\colon L\cong\widehat{L}\) in \(\mathcal{D}\) where \(\widehat{L}\in\mathcal{A}\). Consider the right fraction where \(\operatorname{cone}(w)\in\mathcal{T}\). As \(\psi\) is an isomorphism we also have \(\operatorname{cone}(g)\in\mathcal{T}\). We have a morphism of triangles We note that \(t\cong\widetilde{r}\cong l\) in \(\mathcal{D}\). We have a triangle \[\widehat{L}\xrightarrow{w}T\to C\to\widehat{L}[1]\quad\text{ where }C= \operatorname{cone}(w)\in\mathcal{T}.\] As \(\mathcal{A}\) is dense in \(\mathcal{C}\) we get that \(C\oplus C[1]\in\mathcal{A}\), see [17, 4.5.12]. Taking the direct sum of the above triangle with the triangle \(h\colon 0\to C[1]\xrightarrow{1}C[1]\to 0\) we obtain that \(T\oplus C[1]\in\mathcal{A}\). We take the direct sum of \(h\) and \(l\) to obtain the triangle \[\widetilde{l}\colon\widehat{X}\to T\oplus C[1]\to\operatorname{cone}(g\circ \delta)\oplus C[1]\to\widehat{X}[1].\] Note \(\operatorname{cone}(g\circ\delta)\oplus C[1]\in\mathcal{A}\). As \(h=0\) in \(\mathcal{D}\) it follows that \(\widetilde{l}\cong l\cong t\) in \(\mathcal{D}\). The result follows. ## 4. Artin Complete intersections of multiplicity four In this section \((A,\mathfrak{m})\) is a Artin local complete intersection of multiplicity (= length) four. It is readily seen that in this case \(A=Q/(f,g)\) where \((Q,\mathfrak{n})\) is regular local and \(f,g\in\mathfrak{n}^{2}\setminus\mathfrak{n}^{3}\) is a regular sequence. Let \(\underline{\operatorname{CM}}(A)\) denote the stable category of finitely generated \(A\)-modules (these are maximal Cohen-Macaulay as \(A\) is Artinian). Note \(\underline{\operatorname{CM}}(A)\) is a triangulated category, see [10, 4.7]. Let \(\underline{\operatorname{CM}}^{\leq 1}(A)\) be the full subcategory of \(A\)-modules with complexity \(\leq 1\) (equivalently the category of all \(A\)-modules with bounded betti numbers). It is readily seen that \(\underline{\operatorname{CM}}^{\leq 1}(A)\) is a thick subcategory of \(\underline{\operatorname{CM}}(A)\). Let \(\mathcal{T}=\underline{\operatorname{CM}}(A)/\,\underline{\operatorname{CM}}^ {\leq 1}(A)\) be the Verdier quotient. We note the map \[\eta\colon G_{0}(\underline{\operatorname{CM}}(A)) \to\mathbb{Z}/4\mathbb{Z},\] \[M \mapsto\ell(M)+4\mathbb{Z}.\] is an isomorphism. We have an exact sequence \[G_{0}(\underline{\operatorname{CM}}^{\leq 1}(A))\xrightarrow{\xi_{A}}G_{0}( \underline{\operatorname{CM}}(A))\to G_{0}(\mathcal{T})\to 0.\] We show **Theorem 4.1**.: _(with hypotheses as above.) Then the map \(\xi_{A}\) is NOT surjective. Equivalently \(G_{0}(\mathcal{T})\neq 0\). In particular if \(M\) is any module of complexity \(\leq 1\) then \(\ell(M)\) is even._ Proof.: Fisrst assume that the residue field \(k=A/\mathfrak{m}\) is algebraically closed. Suppose if possible \(G_{0}(\mathcal{T})=0\). Let \(\mathcal{A}\) be Thomason dense subcategory corresponding to the zero subgroup of \(G_{0}(\underline{\operatorname{CM}}(A))\). Consider the full subcategory \(\mathcal{A}^{*}\) of \(\mathcal{T}\) whose objects are \[\{X\mid X\cong Y\text{ in }\mathcal{T}\text{ for some }Y\text{ in }\mathcal{A}\}.\] Then by Theorem 3.1, \(\mathcal{A}^{*}\) is a dense triangulated subcategory of \(\mathcal{T}\). But \(G_{0}(\mathcal{T})=0\). It follows that \(\mathcal{A}^{*}=\mathcal{T}\). So there exists \(X\in\mathcal{A}\) such that \(X\cong k\) in \(\mathcal{T}\). As \(k\neq 0\) in we get that \(X\) has complexity two. 
So there exists \(n_{0}\) such that \(\beta_{n}(X)<\beta_{n+1}(X)\) for all \(n\geq n_{0}\), see [4, 9.2.1]. Note \(\Omega_{A}^{j}X\cong\Omega_{A}^{j}k\) in \(\mathcal{T}\) for all \(j\geq 0\). Fix \(n\geq n_{0}\) and let \(i\in\{n,n+1\}\). Let \(\phi_{i}\colon\Omega_{A}^{i}(X)\cong\Omega_{A}^{i}(k)\) be an isomorphism in \(\mathcal{T}\). We have left fraction where \(\operatorname{cone}(u_{i})\in\underline{\operatorname{CM}}^{\leq 1}(A)\). As \(\phi_{i}\) is an isomorphism we also get \(\operatorname{cone}(f_{i})\in\underline{\operatorname{CM}}^{\leq 1}(A)\). Notice \[W=\bigcup_{i=n,n+1}Var(\operatorname{cone}(u_{i})\cup Var(\operatorname{cone} (f_{i}))\quad\text{is a finite set of points in $\mathbb{P}_{k}^{1}$}.\] Choose a point \(a\in\mathbb{P}_{k}^{1}\setminus W\). By [6, 2.3] there exists a module \(H\) with \(Var(H)=\{a\}\). It follows that \(H\) has complexity one. Furthermore by [5, 6.1, 4.9] we get for \(j\geq 1\) we have \[\operatorname{Tor}_{j}^{A}(H,\operatorname{cone}(u_{i}))=\operatorname{Tor}_{ j}^{A}(H,\operatorname{cone}(f_{i}))=0\quad\text{for $i=n,n+1$}.\] We may assume that \(H\) has no free summands. We have a short exact sequence of \(A\)-modules \[0\to L_{i}\to F_{i}\oplus\Omega_{A}^{i}(k)\to\operatorname{cone}(f_{i})\to 0,\quad\text{where $F_{i}$ is a free $A$-module}.\] It follows that \[\operatorname{Tor}_{1}^{A}(L_{i},H)\cong\operatorname{Tor}_{1}^{A}(\Omega_{A} ^{i}(k),H).\] Similarly we obtain \[\operatorname{Tor}_{1}^{A}(L_{i},H)\cong\operatorname{Tor}_{1}^{A}(\Omega_{A} ^{i}(X),H).\] Let \(c=\mu(H)\). As \(\operatorname{Tor}_{1}^{A}(\Omega_{A}^{i}(k),H)=k^{c}\) we get \(\operatorname{Tor}_{1}^{A}(\Omega_{A}^{i}(X),H)=k^{c}\). We have an exact sequence \[0\to\Omega_{A}^{1}(H)\to A^{c}\to H\to 0.\] So we have an exact sequence \[0\to\operatorname{Tor}_{1}^{A}(\Omega_{A}^{i}(X),H)\to\Omega_{A}^{1}(H)\otimes \Omega_{A}^{i}(X)\to\Omega_{A}^{i}(X)^{c}\to H\otimes\Omega_{A}^{i}(X)\to 0.\] As \(\ell(U\otimes V)\geq\mu(U)\mu(V)\) we obtain \[c+c\ell(\Omega_{A}^{i}(X))\geq 2c\beta_{i}(X).\] Thus \(\ell(\Omega_{A}^{i}(X))\geq 2\beta_{i}(X)-1\). But \(\Omega_{A}^{i}(X)\in\mathcal{A}\). So its length is divisible by four. In particular it is even. So \(\ell(\Omega_{A}^{i}(X))\geq 2\beta_{i}(X)\) for \(i=n,n+1\). We have a short exact sequence \[0\to\Omega_{A}^{n+1}(X)\to A^{\beta_{n}(X)}\to\Omega_{A}^{n}(X)\to 0.\] So we get \[4\beta_{n}(X) =\ell(A)\beta_{n}(X),\] \[=\ell(\Omega_{A}^{n}(X))+\ell(\Omega_{A}^{n+1}(X)),\] \[\geq 2\beta_{n}(X)+2\beta_{n+1}(X),\] \[>4\beta_{n}(X).\] So we get \(1>1\), a contradiction. Thus \(G_{0}(\mathcal{T})\neq 0\). Now assume \(k\) is not algebraically closed. Suppose if possible \(\xi_{A}\) is surjective. Then there exists module \(M\) of complexity one with \(\ell(M)\cong 1\mod(4).\) By [7, App. Theoreme 1, Corollaire], there exists a flat local extension \(A\subseteq\widetilde{A}\) such that \(\widetilde{\mathfrak{m}}=\mathfrak{m}\widetilde{A}\) is the maximal ideal of \(\widetilde{A}\) and the residue field \(\widetilde{k}\) of \(\widetilde{A}\) is an algebraically closed extension of \(k\). The module \(\widetilde{M}=M\otimes_{A}\widetilde{A}\) has complexity one and its length as an \(\widetilde{A}\)-module is equal to length of \(M\) as an \(A\)-module. It follows that \(\xi_{\widetilde{A}}\) is surjective, a contradiction. Thus \(\xi_{A}\) is not surjective. Note \(\operatorname{image}\xi=(2)\mathbb{Z}/4\mathbb{Z}\) or is zero. 
Thus if \(M\) is any module of complexity \(\leq 1\) then either \(2\) divides \(\ell(M)\) or \(4\) does. Either case \(2\) divides \(\ell(M)\). ## 5. A construction In this section we make a construction which is useful to us. **5.1**.: Let \((A,\mathfrak{m})\) be a Artin local ring. We assume \[\lim_{n\to\infty}\beta_{n+1}^{A}(k)/\beta_{n}^{A}(k)=\alpha>1.\] In particular \(A\) is not a complete intersection. Let \(D^{b}(A)\) be the bounded derived category of \(A\). We often identify \(D^{b}(A)\) with \(K^{-,b}(\operatorname{proj}A)\). In this section we make a construction which is very useful for us. **5.2**.: Let \(\mathbf{X_{\bullet}}\in D^{b}(A)\). We assume \(\mathbf{X_{\bullet}}\in K^{b,-}(\operatorname{proj}A)\). Let \[\beta_{n}(\mathbf{X_{\bullet}})=\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{X_ {\bullet}},k[n])).\] Note if \(\mathbf{X_{\bullet}}\) is a minimal complex then \(\beta_{n}(\mathbf{X_{\bullet}})\) is the rank of the free \(A\)-module \(\mathbf{X_{\bullet}^{-n}}\). Set \[\operatorname{curv}(\mathbf{X_{\bullet}})=\limsup_{n\to\infty}\sqrt[n]{\beta_ {n}(\mathbf{X_{\bullet}})}.\] It is not difficult to prove \(\operatorname{curv}(\mathbf{X_{\bullet}})\leq\alpha\). **5.3**.: Let \(1<\beta\leq\alpha\). Let \(\mathcal{C}_{\beta}=\{\mathbf{X_{\bullet}}\mid\operatorname{curv}(\mathbf{X_ {\bullet}})<\beta\}\). It is clear that \(\mathcal{C}_{\beta}\) is a full subcategory of \(D^{b}(A)\) closed under [1]. We show **Lemma 5.4**.: \(\mathcal{C}_{\beta}\) _is a thick triangulated sub-category of \(D^{b}(A)\)._ Proof.: If \(\mathbf{X_{\bullet}}\in\mathcal{C}_{\beta}\) then clearly \(\mathbf{X_{\bullet}}[1],\mathbf{X_{\bullet}}[-1]\in\mathcal{C}_{\beta}\). Let \(\mathbf{X_{\bullet}}\to\mathbf{Y_{\bullet}}\to\mathbf{Z_{\bullet}}\to\mathbf{ X_{\bullet}}[1]\) be a triangle with \(\mathbf{X_{\bullet}},\mathbf{Y_{\bullet}}\in\mathcal{C}_{\beta}\).We can choose \(\gamma<\beta\) such that \(\operatorname{curv}(\mathbf{X_{\bullet}}[1]),\operatorname{curv}(\mathbf{Y_{ \bullet}})<\gamma\). Choose \(\epsilon>0\) with \(\gamma+\epsilon<\beta\). So we have for \(n\gg 0\), \[\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{X_{\bullet}}[1],k[n]))<(\gamma+ \epsilon)^{n}\quad\text{and}\quad\operatorname{Hom}_{D^{b}(A)}(\mathbf{Y_{ \bullet}},k[n]))<(\gamma+\epsilon)^{n}.\] It follows that \[\limsup_{n\to\infty}\sqrt[n]{\operatorname{Hom}_{D^{b}(A)}(\mathbf{Z_{\bullet }},k[n]))}\leq(\limsup_{n\to\infty}2^{1/n})(\gamma+\epsilon)=\gamma+\epsilon<\beta.\] So \(\mathcal{C}_{\beta}\) is a triangulated subcategory of \(D^{b}(A)\). It is elementary to see that it is thick. **5.5**.: Let \[\mathcal{C}_{b}=\{X\in D^{b}(A)\mid\sup_{i\in\mathbb{Z}}\big{(}\ell( \operatorname{Hom}_{D^{b}(A)}(X,k[i])\big{)}<\infty\}.\] It can be easily verified that \(\mathcal{C}_{b}\) is a thick subcategory of \(D^{b}(A)\). We need the following elementary fact. We give a proof for the convenience of the reader. **Proposition 5.6**.: _Let \(\{\theta_{n}\}_{n\geq i_{0}}\) be positive real numbers such that \(\lim_{n\to\infty}\theta_{n+1}/\theta_{n}=\alpha>1\). Let \(\{r_{n}\}_{n\geq i_{0}}\) be a sequence of non-negative real numbers such that \(\limsup_{n\to\infty}\sqrt[n]{r_{n}}<\alpha\). Then_ \[\lim_{n\to\infty}\,\frac{r_{n}}{\theta_{n}}=0.\] Proof.: Set \(c=\limsup_{n\to\infty}\sqrt[n]{r_{n}}\). By [21, 3.36] we have \(\lim_{n\to\infty}\sqrt[n]{\theta_{n}}=\alpha\). Let \(\epsilon>0\) be such that \(\alpha-\epsilon>c+\epsilon\). So for \(n\gg 0\) we have \(\theta_{n}\geq(\alpha-\epsilon)^{n}\) and \(r_{n}\leq(c+\epsilon)^{n}\). 
It follows that \[\limsup\sqrt[n]{r_{n}/\theta_{n}}\leq\frac{c+\epsilon}{\alpha-\epsilon}<1.\] Therefore the series \(\sum_{n\geq i_{0}}r_{n}/\theta_{n}\) is convergent. The result follows. **5.7**.: Let \(1<\beta\leq\alpha\) Let \(\operatorname{mod}_{\beta}(A)\) be the class of \(A\)-modules with curvature \(<\beta\). By an argument similar to 5.4 we get that \(\operatorname{mod}_{\beta}(A)\) is an extension closed subcategory of \(\operatorname{mod}(A)\). Let \(\operatorname{mod}_{b}(A)\) be the class of \(A\)-modules with bounded betti-numbers. We note that \(\operatorname{mod}_{b}(A)\) is an extension closed subcategory of \(\operatorname{mod}(A)\). Let \(\mathcal{B}\) be \(\operatorname{mod}_{\beta}(A)\) or \(\operatorname{mod}_{b}(A)\). We have a natural map \[G_{0}(\mathcal{B})\xrightarrow{\xi}G_{0}(A).\] It is difficult to analyze \(\xi\) directly. **5.8**.: Let \(\mathcal{C}=\mathcal{C}_{\beta}\) if \(\mathcal{B}=\operatorname{mod}_{\beta}(A)\) and let \(\mathcal{C}=\mathcal{C}_{b}\) if \(\mathcal{B}=\operatorname{mod}_{b}(A)\). Let \(\mathcal{T}\) be the Verdier quotient \(D^{b}(A)/\mathcal{C}\). We have obvious functors \(\mathcal{B}\to\mathcal{C}\) and \(\operatorname{mod}(A)\to D^{b}(A)\) by sending a module \(M\) to its stalk complex. So we have a commutative diagram It is well-known that \(\theta\) is an isomorphism. So if \(\eta\) is not surjective then neither is \(\xi\). **5.9**.: We have a group isomorphism \(f\colon G_{0}(\operatorname{mod}(A))\to\mathbb{Z}\) defined by sending \(M\) to \(\ell(M)\). We have a group homomorphism \(\delta\colon G_{0}(D^{b}(A))\to\mathbb{Z}\) defined by \(\delta(\mathbf{Y_{\bullet}})=\sum_{i\in\mathbb{Z}}(-1)^{i}\ell(H^{i}(\mathbf{ Y_{\bullet}}))\). Note in fact \(\delta=f\circ\theta^{-1}\). So \(\delta\) is an isomorphism. Let \(\mathbf{Y_{\bullet}}\) be a bounded above complex of finitely generated \(A\)-modules. We do NOT assume \(H^{i}(\mathbf{Y_{\bullet}})=0\) for \(i\ll 0\). For \(m\in\mathbb{Z}\) set \[\chi_{m}(\mathbf{Y_{\bullet}})=\sum_{j\geq 0}(-1)^{j}\ell(H^{m+j}(\mathbf{Y_{ \bullet}})).\] Next we show **Lemma 5.10**.: _(with hypotheses as in 5.8). Suppose if possible \(\eta\) is surjective. Then there exists a minimal bounded above complex \(\mathbf{X_{\bullet}}\in K^{-,b}(\operatorname{proj}A)\) with_ 1. \(\chi(\mathbf{X_{\bullet}})=\sum_{n\in\mathbb{Z}}(-1)^{i}\ell(H^{i}(\mathbf{ X_{\bullet}}))=0\)_._ 2. _Set_ \(\theta_{n}=\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{X_{\bullet}},k[n]))\)_. Then_ \(\lim_{n\to\infty}\theta_{n+1}/\theta_{n}=\alpha\)_._ 3. _Let_ \(D\in\operatorname{mod}_{b}(A)\)_. Then_ 1. _The set_ \(\{|\chi_{m}(D\otimes\mathbf{X_{\bullet}})|\mid m\in\mathbb{Z}\}\) _is bounded._ 2. _The set_ \(\{\ell(H^{m}(D\otimes\mathbf{X_{\bullet}}))|m\in\mathbb{Z}\}\) _is bounded._ 4. _Let_ \(D\in\operatorname{mod}_{\beta}(A)\)_. Then_ 1. \[\lim_{n\to\infty}\frac{|\chi_{-n}(D\otimes\mathbf{X}_{\bullet})|}{\theta_{n}}=0.\] 2. \[\lim_{n\to\infty}\frac{\ell(H^{-n}(D\otimes\mathbf{X}_{\bullet}))}{\theta_{n}}=0.\] Proof.: Let \(\mathcal{T}=D^{b}(A)/\mathcal{C}\). We have \(G_{0}(\mathcal{T})=0\). Let \[\mathcal{A}=\{\mathbf{Z}_{\bullet}\mid\sum_{i\in\mathbb{Z}}(-1)^{i}\ell(H^{i} (\mathbf{Z}_{\bullet}))=0\}.\] Then \(\mathcal{A}\) is a dense subcategory of \(D^{b}(A)\) corresponding to the zero subgroup of \(G_{0}(D^{b}(A))\). 
Consider the full subcategory \(\mathcal{A}^{*}\) of \(\mathcal{T}\) whose objects are \[\{\mathbf{Z}_{\bullet}\mid\mathbf{Z}_{\bullet}\cong\mathbf{Y}_{\bullet}\text{ in }\mathcal{T}\text{ for some }\mathbf{Y}_{\bullet}\text{ in }\mathcal{A}\}.\] Then by Theorem 3.1, \(\mathcal{A}^{*}\) is a dense triangulated subcategory of \(\mathcal{T}\). But \(G_{0}(\mathcal{T})=0\). It follows that \(\mathcal{A}^{*}=\mathcal{T}\). So there exists \(\mathbf{X}_{\bullet}\in\mathcal{A}\) such that we have an isomorphism \(\phi\colon\mathbf{X}_{\bullet}\cong\mathbf{P}_{\bullet}\) in \(\mathcal{T}\) where \(\mathbf{P}_{\bullet}\) is the minimal free resolution of \(k\). We assume \(\mathbf{X}_{\bullet}\) is a minimal complex of free \(A\)-modules. We have a left fraction where \(\operatorname{cone}(u)\in\mathcal{C}\). As \(\phi\) is an isomorphism we also get \(\operatorname{cone}(f)\in\mathcal{C}\). Let \(r_{n}=\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{Z}_{\bullet},k[n])\) and \(\theta_{n}=\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{X}_{\bullet},k[n])\). Claim 1: \(\lim_{n\to\infty}r_{n}/\beta_{n}(k)=1\) and \(\lim_{n\to\infty}r_{n+1}/r_{n}=\alpha\) We have a triangle \(t\colon\mathbf{Z}_{\bullet}\xrightarrow{f}\mathbf{P}_{\bullet}\to\mathbf{C}_ {\bullet}\to\mathbf{Z}_{\bullet}[1]\) where \(\mathbf{C}_{\bullet}\in\mathcal{C}\). Set \(c_{n}=\ell(\operatorname{Hom}_{D^{b}(A)}(\mathbf{C}_{\bullet},k[n])\). Taking \(\operatorname{Hom}_{\mathcal{D}^{b}(A)}(-,k[n])\) we have an exact sequence \[\operatorname{Hom}(\mathbf{C}_{\bullet},k[n])\to\operatorname{Hom}(\mathbf{P} _{\bullet},k[n])\to\operatorname{Hom}(\mathbf{Z}_{\bullet},k[n])\to \operatorname{Hom}(\mathbf{C}_{\bullet},k[n+1])\] So we have \[\beta_{n}(k) \leq r_{n}+c_{n},\] \[r_{n} \leq\beta_{n}(k)+c_{n+1}.\] From the first estimate we have \[1-c_{n}/\beta_{n}(k)\leq r_{n}/\beta_{n}(k).\] Note either \(\{c_{n}\}\) is bounded or \(\mathbf{C}_{\bullet}\in\mathcal{C}_{\beta}\). At any case we have \(\lim_{n\to\infty}c_{n}/\beta_{n}(k)=0\), see 5.6. Therefore we have \[1\leq\liminf r_{n}/\beta_{n}(k)\] From the second estimate we have \[r_{n}/\beta_{n}(k)\leq 1+c_{n+1}/\beta_{n}(k).\] By 5.6 we have \[\limsup r_{n}/\beta_{n}(k)\leq 1.\] So \(\lim_{n\to\infty}r_{n}/\beta_{n}(k)=1\). Finally note that \[\frac{r_{n+1}}{r_{n}}=\frac{r_{n+1}}{\beta_{n+1}}\frac{\beta_{n+1}}{\beta_{n}} \frac{\beta_{n}}{r_{n}}\to\alpha\quad\text{as $n\to\infty$}.\] Using this and a similar argument as in Claim-1 yields Claim 2: \(\lim_{n\to\infty}\theta_{n}/r_{n}=1\) and \(\lim_{n\to\infty}\theta_{n+1}/\theta_{n}=\alpha\). Recall \(\mathbf{X}_{\bullet}\) is a minimal complex. Then \(\theta_{n}=\) number of minimal generators of \(\mathbf{X}_{\bullet}^{-n}\). We assume \(H^{i}(\mathbf{X}_{\bullet})=0\) for \(i\leq m_{0}\). 3(a), 4(a): Let \(D\in\mathcal{B}\). Consider \[0\to\Omega^{1}(D)\to A^{\mu(D)}\to D\to 0.\] Taking \(-\otimes\mathbf{X}_{\bullet}\) we get ( \[\dagger\] ) \[0\to\Omega^{1}(D)\otimes\mathbf{X}_{\bullet}\to\mathbf{X}_{\bullet}^{\mu(D)} \to D\otimes\mathbf{X}_{\bullet}\to 0.\] Note \(\mathbf{X}_{\bullet}\in\ker\delta\). So \(\sum_{i\in\mathbb{Z}}(-1)^{i}\ell(H^{i}(X))=0\). Let \(m\leq m_{0}\) and assume \(m+c=m_{0}\). By (\(\dagger\)) we have \[\chi_{m}(D\otimes\mathbf{X}_{\bullet})=\chi_{m+1}(\Omega^{1}(D)\otimes \mathbf{X}_{\bullet})=\chi_{m+2}(\Omega^{2}(D)\otimes\mathbf{X}_{\bullet})= \cdots=\chi_{m_{0}}(\Omega^{c}(D)\otimes\mathbf{X}_{\bullet}).\] 3(a) Suppose \(D\in\mathrm{mod}_{b}(A)\). Say \(\beta_{n}(D)\leq g\) for all \(n\geq 0\). 
Then \(e_{0}(\Omega^{c}(D))\leq g\ell(A)\). We have \[|\chi_{m_{0}}(\Omega^{c}(D)\otimes\mathbf{X}_{\bullet})|\leq g\ell(A)(\sum_{n \leq-m_{0}}\theta_{n}).\] Thus \(|\chi_{m}(D\otimes\mathbf{X}_{\bullet})|\) is bounded. 4(a) Suppose \(D\in\mathrm{mod}_{\beta}(A)\). Then \(e_{0}(\Omega^{c}(D))\leq\beta_{-m+m_{0}}(D)\ell(A)\). By 5.6 it follows that \[\frac{|\chi_{m}(D\otimes\mathbf{X}_{\bullet})|}{\theta_{-m}}\leq\frac{\beta_ {-m+m_{0}}(D)\ell(A)}{\theta_{-m}}(\sum_{n\leq-m_{0}}\theta_{n})\to 0\quad \text{as $-m\to\infty$}.\] 3(b), 4(b). By (\(\dagger\)) we have for \(m\leq m_{0}\) and \(m+c=m_{0}\) we have \[H^{m}(D\otimes\mathbf{X}_{\bullet})=H^{m+1}(\Omega^{1}(D)\otimes\mathbf{X}_{ \bullet})=\cdots=H^{m_{0}}(\Omega^{c}(D)\otimes\mathbf{X}_{\bullet}).\] 3(b) Suppose \(D\in\mathrm{mod}_{b}(A)\). Say \(\beta_{n}(D)\leq g\) for all \(n\geq 0\). We note that \[\ell(H^{m_{0}}(\Omega^{c}(D)\otimes\mathbf{X}_{\bullet}))\leq\theta_{-m_{0}} \ell(\Omega^{c}(D))\leq\theta_{-m_{0}}g\ell(A).\] The result follows. 4(b)Suppose \(D\in\mathrm{mod}_{\beta}(A)\). Then \(e_{0}(\Omega^{c}(D))\leq\beta_{-m+m_{0}}(D)\ell(A)\). By 5.6 it follows that \[\frac{\ell(H^{m}(D\otimes\mathbf{X}_{\bullet}))}{\theta_{-m}}\leq\frac{\beta_ {-m+m_{0}}(D)\ell(A)}{\theta_{-m}}\theta_{-m_{0}}\to 0\quad\text{as $-m\to\infty$}.\] Next we prove: **Theorem 5.11**.: _(with hypotheses as in 5.8). Further assume there exists \(D\in\mathcal{B}\) with \(\mathfrak{m}^{2}D=0\). Then_ 1. _If_ \(\alpha\) _is irrational then_ \(\eta\) _is not surjective._ 2. _If_ \(\eta\) _is surjective, let_ \((\alpha+1)/\alpha=p/q\) _where_ \(p,q\) _are co-prime positive integers. Then_ \(p\) _divides_ \(\ell(D)\) Proof.: Suppose \(\eta\) is surjective. We prove that necessarily \(\alpha\) is rational. Set \(r(D)=\mu(\mathfrak{m}D)\). We have an exact sequence \[0\to k^{r(D)}=\mathfrak{m}D\to D\to k^{\mu(D)}\to 0.\] Let \(\mathbf{X}_{\bullet}\) be as in Lemma 5.10. Taking \(-\otimes\mathbf{X}_{\bullet}\) we get \[0\to\overline{\mathbf{X}_{\bullet}}^{r(D)}\to D\otimes\mathbf{X}_{\bullet}\to \overline{\mathbf{X}_{\bullet}}^{\mu(D)}\to 0\] where \(\overline{\mathbf{X}_{\bullet}}=\mathbf{X}_{\bullet}\otimes k\). Assume \(H^{i}(\mathbf{X}_{\bullet})=0\) for \(i\leq m_{0}\). Let \(-n\leq m_{0}\). Then we have \[0\to T_{n}\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{r(D)}\to H^{-n}(D \otimes\mathbf{X}_{\bullet})\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{\mu( D)}\to\cdots.\] So we have \[\ell(T_{n})+\chi_{-n}(D\otimes\mathbf{X}_{\bullet})=\ell(D)\chi_{-n}( \overline{\mathbf{X}_{\bullet}}).\] Thus we get \[\ell(T_{n})/\theta_{n}+\chi_{-n}(D\otimes\mathbf{X}_{\bullet})/\theta_{n}= \ell(D)\left(1-(\sum_{j\geq 0}(-1)^{j}\theta_{n-1-j}/\theta_{n})\right).\] By 5.103(a), 4(a) and 11.2 we get \[\lim_{n\to\infty}\ell(T_{n})/\theta_{n}=\ell(D)(1-1/(\alpha+1))=\ell(D)\alpha /(\alpha+1).\] We also have \[0\to T_{n}\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{r(D)}\to K_{n}\to 0 \quad\text{where $K_{n}$ is a submodule of $H^{-n}(D\otimes\mathbf{X}_{\bullet})$.}\] So we have \[\ell(T_{n})+\ell(K_{n})=\theta_{n}r(D).\] By 5.103(b), 4(b) it follows that \(\lim_{n\to\infty}\ell(K_{n})/\theta_{n}=0\). So \[\lim_{n\to\infty}\ell(T_{n})/\theta_{n}=r(D).\] Thus \[r(D)=\ell(D)\alpha/(\alpha+1).\] Therefore \[\ell(D)/r(D)=1+1/\alpha.\] It follows that \(\alpha\) is rational. Furthermore if \(1+1/\alpha=p/q\) where \(p,q\) are positive co-prime integers with \(p>q\). Then it follows that \(p\) divides \(\ell(D)\). Thus we have shown both the assertions. ## 6. 
Modules with bounded betti numbers In this section we prove Theorem 1.2. We first show: **Theorem 6.1**.: _Let \((A,\mathfrak{m})\) be a short Artin local ring. Assume that \(A\) is not a complete intersection and that there exists a non-free \(A\)-module with bounded betti-numbers. Then there exists \(c>1\) such that \(c\) divides \(\ell(M)\) for every module \(M\) with bounded betti-numbers._ Proof.: By 10.2 we have \(\ell(A)=2h+2\) and \(\lim_{n\to\infty}\beta_{n+1}^{A}(k)/\beta_{n}^{A}(k)=h\). So the techniques in section 5 are applicable. Set \(\mathcal{B}=\operatorname{mod}_{b}(A)\) and \(\mathcal{C}=\mathcal{C}_{b}\). Claim: \(\eta\) (as defined in 5.8) is NOT surjective. Suppose if possible \(\eta\) is surjective. Let \(N\in\mathcal{B}\). Set \(D=\Omega_{A}^{1}(N)\). Then \(D\) has bounded betti-numbers and \(\mathfrak{m}^{2}D=0\). Then by 5.11 it follows that \(h+1\) divides \(\ell(D)\). As \(h+1\) divides \(\ell(A)\) also, it follows that \(h+1\) divides \(\ell(N)\) as well. Let \(\mathbf{X_{\bullet}}\in\mathcal{C}\). We assume \(\mathbf{X_{\bullet}}\in K^{-,b}(\operatorname{proj}A)\) and that (after shifting) \(H^{i}(\mathbf{X_{\bullet}})=0\) for \(i\leq 1\). So \(\mathbf{X_{\bullet}}\) is quasi-isomorphic to a complex \[\mathbf{Y_{\bullet}}\colon 0\to M\to F_{1}\to F_{2}\to\cdots\to F_{s}\to 0,\] where the \(F_{i}\) are free \(A\)-modules and \(M\) is an \(A\)-module with bounded betti-numbers. Let \(\delta\colon G_{0}(D^{b}(A))\to\mathbb{Z}\) be the isomorphism defined by \(\delta(\mathbf{C_{\bullet}})=\sum_{i\in\mathbb{Z}}(-1)^{i}\ell(H^{i}(\mathbf{C_{\bullet}})).\) We note that \[\delta(\mathbf{X_{\bullet}})=\delta(\mathbf{Y_{\bullet}})=\ell(M)-\ell(F_{1})+\ell(F_{2})+\cdots+(-1)^{s}\ell(F_{s}).\] We note that \(\delta(\mathbf{X_{\bullet}})\in(h+1)\mathbb{Z}\). Thus \(\eta\) is not surjective. Therefore \(\xi\colon G_{0}(\mathcal{B})\to G_{0}(A)\) is also not surjective. So there exists \(c\geq 2\) such that \(c\) divides \(\ell(N)\) for every \(N\in\mathcal{B}\). We now give Proof of Theorem 1.2.: If \(A\) is a complete intersection of multiplicity four then by 4.1 it follows that 2 divides \(\ell(M)\) for every module \(M\) with bounded betti numbers. Now assume that \(A\) is not a complete intersection. (1) If \(A\) has _no non-free \(A\)-module with bounded betti-numbers_ then every module with bounded betti-numbers is free, and clearly \(\ell(A)\) divides the length of every free \(A\)-module. (2) If \(A\) has a non-free \(A\)-module with bounded betti-numbers then the result holds by Theorem 6.1. ## 7. Flat extensions We say an Artin local ring \(R\) has property \(\mathcal{B}\) if there exists \(c\geq 2\) such that \(c\) divides \(\ell(M)\) for every \(R\)-module \(M\) with bounded betti-numbers. By Theorem 1.2, short Artin local rings satisfy \(\mathcal{B}\). Proposition 1.5 shows that property \(\mathcal{B}\) is preserved under certain flat extensions. We restate it for the convenience of the reader. **Proposition 7.2**.: _Let \((R,\mathfrak{m})\to(S,\mathfrak{n})\) be an extension of Artin local rings such that \(S\) is a finite free \(R\)-module. Assume the induced extension of residue fields \(R/\mathfrak{m}\to S/\mathfrak{n}\) is an isomorphism. Then if \(R\) has property \(\mathcal{B}\) then so does \(S\)._ Proof.: We note that as \(S\) is a finite \(R\)-module, any finitely generated \(S\)-module is also finitely generated as an \(R\)-module. Let \(N\) be an \(S\)-module. By considering a composition series of \(N\) it follows that \(\ell_{S}(N)=\ell_{R}(N)\).
As \(R\) has property \(\mathcal{B}\), there exists \(c_{R}\geq 2\) such that \(c_{A}\) divides \(\ell_{R}(M)\) for any \(R\)-module with bounded betti-numbers. Now let \(N\) be a \(S\)-module with bounded betti-numbers as a \(S\)-module. Let \(\mathbf{P_{\bullet}}\) be a minimal projective resolution of \(N\) as an \(S\)-module. Then \(\mathbf{P_{\bullet}}\) is also a free resolution (need not be minimal) of \(N\) as an \(R\)-module. It follows that the betti-numbers of \(N\) as an \(R\)-module are bounded. So \(c_{R}\) divides \(\ell_{R}(N)\). But \(\ell_{S}(N)=\ell_{R}(N)\). The result follows. The following provides a large class of examples where Proposition 7.2 is applicable. **Example 7.3**.: Let \(R=k[X_{1},\ldots,X_{n}]/I\) be a Artin local ring with maximal ideal \((X_{1},\ldots,X_{m})R\). Assume \(R\) has property \(\mathcal{B}\) (for example \(R\) can be any short ring which is not a hypersurface). Let \(T=k[Y_{1},\ldots,Y_{d}]/(f_{1},\ldots,f_{d})\) where \(f_{1},\ldots,f_{d}\) is a graded \(T\)-regular sequence. Then \(S=R\otimes_{k}T=R[Y_{1},\ldots,Y_{d}]/(f_{1},\ldots,f_{d})\) is a finite free extension of \(R\). Furthermore the residue field of \(S\) is also \(k\), the residue field of \(R\). The following example also yields another class of examples where Proposition 7.2 is applicable. **Example 7.4**.: Let \(A=k[X,Y]\) Let \(f_{1}\) and \(f_{2}\) be an \(A\)-regular sequence such that \(\sqrt{(f_{1},f_{2})}=(X,Y)\). Assume there exists \(g_{1},g_{2}\in B=k[U,V]\) such that 1. \(g_{1},g_{2}\) are homogeneous quadratic regular sequence in \(B\). 2. \(\sqrt{(g_{1},g_{2})}=(U,V)\). 3. \(k[U,V]/(g_{1},g_{2})\) has multiplicity four. 4. \(f_{i}=g_{i}(X^{m},Y^{n})\) for fixed \(m,n\). .Note the extension \(k[X^{m},Y^{n}]\to k[X,Y]\) is flat. So it induces a finite flat extension of Artin algebras \(k[X^{m},Y^{n}]/(f_{1},f_{2})\to k[X,Y]/(f_{1},f_{2})\). Notice \(k[X^{m},Y^{n}]/(f_{1},f_{2})\cong R=k[U,V]/(g_{1},g_{2})\) is short Artin complete intersection of multiplicity four. Thus \(k[X,Y]/(f_{1},f_{2})\) has property \(\mathcal{B}\). Furthermore as 2 divides \(\ell_{R}(M)\) for any \(R\)-module \(M\) with bounded betti-numbers, it follows that \(\ell_{S}(N)\) is divisible by 2 for any \(S\)-module \(N\) with bounded betti-numbers. A specific example where 7.4 is applicable is the following: **Example 7.5**.: Let \(A=k[X,Y]/(X^{2m},Y^{2n})\) for some \(m,n\geq 1\). Then if \(M\) is any \(A\)-module with bounded betti-numbers then \(\ell_{A}(M)\) is even. We now give Proof of Corollary 1.6.: This follows easily from 7.3 and 7.5. ## 8. Proof of Theorem 1.8 In this section we give a proof of Theorem 1.8. We restate it for the convenience of the reader. **Theorem 8.1**.: _Let \((A,\mathfrak{m})\) be a short Artin local ring. Assume \(A\) is not a complete intersection ring. There exists \(b_{A}\geq 2\) such that if \(M\) is any module with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) then \(b_{A}\) divides \(\ell(\Omega_{A}^{i}(M))\) for all \(i\geq 1\)._ Proof.: We may assume there exists a non-free \(A\)-module \(M\) with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) for otherwise there is nothing to prove. 
By [16, Theorem B] if a short ring has a module \(M\) with \(\operatorname{curv}(M)<\operatorname{curv}(k)\) then its Poincare series is \[P_{A}(z)=\frac{1}{1-dz+az^{2}}=\frac{1}{(1-r_{1}z)(1-r_{2}z)}.\] where \(d=\ell(\mathfrak{m}/\mathfrak{m}^{2})\), \(a=\ell(\mathfrak{m}^{2})\), \(r_{1}\) and \(r_{2}\) are positive integers with \(r_{1}<r_{2}\) and \(\operatorname{curv}(M)=r_{1}\) and \(\operatorname{curv}(k)=r_{2}\). Furthermore \(a\geq d-1\). We note that \(\lim_{n\to\infty}\beta_{n+1}(k)/\beta_{n}(k)=r_{2}>1\). So the techniques in section 5 are applicable. Let \(\mathcal{B}=\operatorname{mod}_{\alpha}(A)\) and \(\mathcal{C}=\mathcal{C}_{\alpha}\). Let \(\eta,\xi\) be as in 5.8. (1) If \(\eta\) is not surjective then \(\xi\) is also not surjective. Then there exists \(c\geq 2\) such that \(c\) divides \(\ell(M)\) for every \(M\in\mathcal{B}\). (2) Assume \(\eta\) is surjective. Let \(M\in\mathcal{B}\). Fix \(i\geq 1\) and set \(D=\Omega^{i}_{A}(M)\). Then \(\mathfrak{m}^{2}D=0\). So by Theorem 5.11 it follows that \(r_{2}+1\) divides \(\ell(D)\). ## 9. Higher dimensional rings In this section \((A,\mathfrak{m})\) is a Cohen-Macaulay local ring of dimension \(d\geq 1\). For convenience we will assume that the residue field \(k=A/\mathfrak{m}\) of \(A\) is infinite. Set \(h=\operatorname{embdim}(A)-d\) and \(e_{0}(A)\) its multiplicity. Let \(\operatorname{gr}A=\bigoplus_{n\geq 0}\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) be the associated graded ring of \(A\). **9.1**.: Recall \(A\) is said to have minimal multiplicity if \(e_{0}(A)=h+1\). We assume \(A\) is not regular. It is well-known that if \(A\) has minimal multiplicity then \(\operatorname{gr}A\) is Cohen-Macaulay, see [22]. Let \(g\in\mathfrak{m}^{2}\setminus\mathfrak{m}^{3}\) be such that the initial for \(g^{*}\) of \(f\) in \(\operatorname{gr}A\) is a non-zero divisor. Let \[\mathcal{B}_{g}=\{M\mid M\ \text{ is a MCM }A/(g)\text{-module such that }\operatorname{projdim}_{A}M<\infty\}.\] Also set \[\mathcal{B}^{(2)}=\bigcup_{g^{*}\operatorname{nzd}\,\operatorname{alg}\, \operatorname{alg}\,A}\mathcal{B}_{g}.\] We state Theorem 1.10 here for the convenience of the reader. **Theorem 9.2**.: _(with hypotheses as in 9.1). If \(M\in\mathcal{B}^{(2)}\) then \(h+1\) divides \(e_{0}(M)\)._ We need a few preliminaries to prove this result. **9.3**.: Let \(\mathbf{P}_{\bullet}\) be the minimal projective resolution of \(k\) as a \(A\)-module. Let \(g\in\mathfrak{m}^{2}\setminus\mathfrak{m}^{3}\) be such that \(g^{*}\) is \(\operatorname{gr}A\)-regular. Let \(\overline{\mathbf{P}_{\bullet}}=\mathbf{P}_{\bullet}\otimes_{A}A/(g)\). We have an exact sequence \[0\to\mathbf{P}_{\bullet}\xrightarrow{g}\mathbf{P}_{\bullet}\to\overline{ \mathbf{P}_{\bullet}}\to 0.\] It follows that \(H^{-n}(\overline{\mathbf{P}_{\bullet}})=0\) for \(n\ll 0\). Furthermore \[\chi(\overline{\mathbf{P}_{\bullet}})=\sum_{n\in\mathbb{Z}}(-1)^{i}\ell(H^{i} (\overline{\mathbf{P}_{\bullet}}))=0.\] Let \(M\in\mathcal{B}_{g}\). Set \(R=A/(g)\). Let \(\mathbf{x}=x_{1},\dots,x_{d-1}\) be maximal \(M\oplus\Omega^{1}_{R}(M)\oplus R\)-superficial sequence. Set \(\mathbf{X}_{\bullet}=\overline{\mathbf{P}_{\bullet}}\otimes_{R}R/(\mathbf{x})\). Then one can show similarly \(H^{-n}(\mathbf{X}_{\bullet})=0\) for \(n\ll 0\). Furthermore \(\chi(\mathbf{X}_{\bullet})=0\). Set \(B=R/(\mathbf{x})\). 
Note \(\mathbf{X}_{\bullet}\in K^{b,-}(\operatorname{proj}B)\) is a minimal complex and \(\operatorname{Hom}_{D^{b}(B)}(\mathbf{X}_{\bullet},k[n])=\beta_{n}\) where \(\beta_{n}=\beta^{A}_{n}(k)\) since \(\beta^{A}_{n}(k)\) is the minimal number of generators of \(\mathbf{P}_{\bullet}^{-n}\). **Lemma 9.4**.: _(with hypotheses as 9.3). Set \(D=\Omega^{1}_{R}(M)/\mathbf{x}\Omega^{R}_{1}(M)\). Then_ 1. \(\Omega^{2}_{B}(D)=D\)_._ 2. _The set_ \(\{|\chi_{m}(D\otimes\mathbf{X}_{\bullet})|\mid m\in\mathbb{Z}\}\) _is bounded._ 3. _The set_ \(\{\ell(H^{m}(D\otimes\mathbf{X}_{\bullet}))|m\in\mathbb{Z}\}\) _is bounded._ 4. _The Hilbert series of_ \(B\) _is_ \(1+(h+1)z+hz^{2}\)_. In particular_ \(\mathfrak{m}^{3}B=0\)_._ 5. \(\mathfrak{m}^{2}D=0\)_._ Proof.: (a). This follows from the fact that \(\Omega^{3}_{R}(M)=\Omega^{1}_{R}(M)\). (b), (c) This follows from a same argument as in Lemma 5.10(3). (d) Note \(h_{A}(z)=1+hz\). We have \(g^{*}\) is \(\operatorname{gr}A\)-regular. So \(h_{R})(z)=(1+hz)(1+z)=1+(h+1)z+hz^{2}\), see 2.13. We have \(\operatorname{gr}R\) is Cohen-Macaulay. So by 2.16 it follows that \(x_{1}^{*},\cdots,x_{d-1}^{*}\) is \(\operatorname{gr}R\)-regular. By 2.13 it follows that \(h_{B}(z)=1+(h+1)z+hz^{2}\). The result follows as \(B\) is Artin. (e) This follows as \(D=\Omega_{B}^{1}(M/\mathbf{x}M)\). We now give Proof of Theorem 9.2.: Let \(g\in\mathfrak{m}^{2}\setminus\mathfrak{m}^{3}\) be such that the initial for \(g^{*}\) of \(f\) in \(\operatorname{gr}A\) is a non-zero divisor. Let \(M\in\mathcal{B}_{g}\). We first consider the case when \(h=1\). Then \(A\) is a hypersurface of multiplicity two. Then \(A/(g)\) is a complete intersection of multiplicity four. Let \(\mathbf{x}=x_{1},\dots,x_{d-1}\) be a maximal \(A/(g)\oplus M\)-superficial sequence. Then \(e_{0}(M)=\ell(M/\mathbf{x}M)\). The latter is even by Theorem 4.1. Thus \(2=h+1\) divides \(e_{0}(M)\). Next we consider the case when \(h\geq 2\). Then \(\lim_{n\to\infty}\beta_{n+1}/\beta_{n}=h>1\). We do the construction as in 9.3 and 9.4 Note \(\mathfrak{m}^{2}D=0\), see 9.4. Set \(r(D)=\mu(\mathfrak{m}D)\). We have an exact sequence \[0\to k^{r(D)}=\mathfrak{m}D\to D\to k^{\mu(D)}\to 0.\] Let \(\mathbf{X}_{\bullet}\) be as in Lemma 9.4. Taking \(-\otimes\mathbf{X}_{\bullet}\) we get \[0\to\overline{\mathbf{X}_{\bullet}}^{r(D)}\to D\otimes\mathbf{X}_{\bullet}\to \overline{\mathbf{X}_{\bullet}}^{\mu(D)}\to 0\] where \(\overline{\mathbf{X}_{\bullet}}=\mathbf{X}_{\bullet}\otimes k\). Assume \(H^{i}(\mathbf{X}_{\bullet})=0\) for \(i\leq m_{0}\). Let \(-n\leq m_{0}\). Then we have \[0\to T_{n}\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{r(D)}\to H^{-n}(D \otimes\mathbf{X}_{\bullet})\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{\mu( D)}\to\cdots.\] So we have \[\ell(T_{n})+\chi_{-n}(D\otimes\mathbf{X}_{\bullet})=\ell(D)\chi_{-n}( \overline{\mathbf{X}_{\bullet}}).\] Thus we get \[\ell(T_{n})/\beta_{n}+\chi_{-n}(D\otimes\mathbf{X}_{\bullet})/\beta_{n}=\ell( D)\left(1-(\sum_{j\geq 0}(-1)^{j}\beta_{n-1-j}/\beta_{n})\right).\] We have \(|\chi_{-n}(D\otimes\mathbf{X}_{\bullet})|\) is bounded (see 9.4). So by 11.2 we get \[\lim_{n\to\infty}\ell(T_{n})/\theta_{n}=\ell(D)(1-1/(h+1))=\ell(D)h/(h+1).\] We also have \[0\to T_{n}\to H^{-n}(\overline{\mathbf{X}_{\bullet}})^{r(D)}\to K_{n}\to 0 \quad\text{where $K_{n}$ is a submodule of $H^{-n}(D\otimes\mathbf{X}_{\bullet})$.}\] So we have \[\ell(T_{n})+\ell(K_{n})=\beta_{n}r(D).\] By 9.4, it follows that \(\ell(K_{n})\) is bounded. 
Therefore \[\lim_{n\to\infty}\ell(T_{n})/\beta_{n}=r(D).\] Thus \[r(D)=\ell(D)h/(h+1).\] Therefore \[\ell(D)h=(h+1)r(D).\] Since \(\gcd(h,h+1)=1\), it follows that \(h+1\) divides \(\ell(D)\). But \(\ell(D)=e_{0}(\Omega^{1}_{R}(M))\). As \(e_{0}(A/(g))=2(h+1)\) and \(e_{0}(M)+e_{0}(\Omega^{1}_{R}(M))=\mu(M)\,e_{0}(A/(g))\), it follows that \(h+1\) also divides \(e_{0}(M)\). ## 10. Ratios of betti numbers of certain Artin local rings In this section we compute \(\lim_{n\to\infty}\beta_{n+1}(k)/\beta_{n}(k)\) of certain Artin local rings with residue field \(k\). The results in this section are essentially known. We give proofs as we do not have a reference. If \((A,\mathfrak{m})\) is a Noetherian local ring then \(P_{A}(z)=\sum_{n\geq 0}\beta_{n}^{A}(k)z^{n}\) denotes its Poincare series. We begin with **Lemma 10.1**.: _Let \((S,\mathfrak{n})\), \((T,\mathfrak{m})\) be rings with minimal multiplicity of codimension \(h\geq 2\) and dimensions \(0,1\) respectively. Let \(f\in\mathfrak{m}^{2}\) be a non-zero divisor on \(T\) and let \(A=T/(f)\). Then_ 1. \(\beta_{n}^{S}(k)=h^{n}\) _for all_ \(n\geq 0\)_._ 2. \(P_{S}(z)=1/(1-hz)\)_._ 3. \(P_{T}(z)=(1+z)P_{S}(z)\)_._ 4. \(P_{A}(z)=P_{T}(z)/(1-z^{2})=\frac{1}{(1-hz)(1-z)}\)_._ 5. \(\lim_{n\to\infty}\beta_{n+1}^{A}(k)/\beta_{n}^{A}(k)=h\)_._ Proof.: (1) Note \(\mathfrak{n}\cong k^{h}\). It follows that \(\beta_{1}(k)=h\) and \(\beta_{n+1}=h\beta_{n}\) for \(n\geq 1\). The result follows. (2) This follows from (1). (3) After possibly going to a flat extension (to get an infinite residue field) we may assume that there exists \(x\in\mathfrak{m}\) which is \(T\)-superficial. So \(T/(x)\) has minimal multiplicity. The result follows from [4, 3.3.5]. (4) This follows from [4, 3.3.5]. (5) We have by (2)-(4) \[\sum_{n\geq 0}\beta_{n}^{A}(k)z^{n}=\frac{P_{S}(z)}{(1-z)}.\] Set \(\beta_{n}=\beta_{n}^{A}(k)\). Then by the above equality we obtain \[\beta_{n}=1+h+\cdots+h^{n}=\frac{h^{n+1}-1}{h-1}.\] So we obtain \[\frac{\beta_{n+1}}{\beta_{n}}=\frac{h^{n+2}-1}{h^{n+1}-1}=\frac{h-\frac{1}{h^{n+1}}}{1-\frac{1}{h^{n+1}}}.\] Taking limits as \(n\to\infty\) we obtain the result. **10.2**.: Let \((A,\mathfrak{m})\) be a short Artin local ring (i.e., \(\mathfrak{m}^{3}=0\) and \(\mathfrak{m}^{2}\neq 0\)). If \(A\) has a non-free module \(M\) with bounded betti-numbers then by [16, Theorem B] we have \(\ell(\mathfrak{m}/\mathfrak{m}^{2})=h+1\), \(\ell(\mathfrak{m}^{2})=h\geq 2\), the socle of \(A\) is \(\mathfrak{m}^{2}\) and \[P_{A}(z)=\frac{1}{1-(h+1)z+hz^{2}}=\frac{1}{(1-z)(1-hz)}.\] Then by an argument similar to 10.1(5) we obtain \(\lim_{n\to\infty}\beta_{n+1}^{A}(k)/\beta_{n}^{A}(k)=h\). ## 11. Appendix In the appendix we calculate a limit which is crucial to us. **11.1**.: Let \(\{\theta_{n}\}_{n\geq c}\) be a sequence such that \(\lim_{n\to\infty}\theta_{n+1}/\theta_{n}=\xi>1\). Set \(\theta_{n}=0\) for \(n<c\). Also set \[r_{n}=\sum_{j\geq 0}(-1)^{j}\frac{\theta_{n-1-j}}{\theta_{n}}.\] We show **Lemma 11.2**.: _(with hypotheses as in 11.1). We have_ \[\lim_{n\to\infty}r_{n}=\frac{1}{\xi+1}.\] Proof.: We note \(\lim_{n\to\infty}\theta_{n}=\infty\). Choose \(\epsilon>0\) such that \(0<\epsilon<1/\xi\) and \((1/\xi)+\epsilon<1\). We note that \(\lim_{n\to\infty}\theta_{n}/\theta_{n+1}=1/\xi\).
Choose \(i_{0}\) such that \[\frac{1}{\xi}-\epsilon<\frac{\theta_{m-1}}{\theta_{m}}<\frac{1}{\xi}+\epsilon\quad\text{for all }m\geq i_{0}.\] We note for \(j\geq 2\) we have \[\frac{\theta_{m-j}}{\theta_{m}}=\left(\frac{\theta_{m-j}}{\theta_{m-j+1}}\right)\left(\frac{\theta_{m-j+1}}{\theta_{m-j+2}}\right)\cdots\left(\frac{\theta_{m-1}}{\theta_{m}}\right).\] So we have \[\left(\frac{1}{\xi}-\epsilon\right)^{j}<\frac{\theta_{m-j}}{\theta_{m}}<\left(\frac{1}{\xi}+\epsilon\right)^{j}.\tag{\(\dagger\)}\] Let \[t_{n}=\sum_{n-1-j\leq i_{0}-1}(-1)^{j}\frac{\theta_{n-1-j}}{\theta_{n}}.\] Then clearly \(\lim_{n\to\infty}t_{n}=0\). Set \[\alpha=(\epsilon+1/\xi)\quad\text{and}\quad\beta=((1/\xi)-\epsilon).\] We have \[r_{n}=t_{n}+\sum_{n-1-j\geq i_{0}}(-1)^{j}\frac{\theta_{n-1-j}}{\theta_{n}}\] \[=t_{n}+\sum_{n-1-j\geq i_{0},\,j\text{ even}}\frac{\theta_{n-1-j}}{\theta_{n}}\ -\ \sum_{n-1-j\geq i_{0},\,j\text{ odd}}\frac{\theta_{n-1-j}}{\theta_{n}}\] \[\leq t_{n}+\alpha(1+\alpha^{2}+\cdots+\alpha^{2l})-\beta^{2}(1+\beta^{2}+\cdots+\beta^{2s}),\] where \(l,s\to\infty\) as \(n\to\infty\). So we have \(r_{n}\leq t_{n}+u_{n}(\epsilon)\) where \[u_{n}(\epsilon)=\alpha\frac{1-\alpha^{2l+2}}{1-\alpha^{2}}-\beta^{2}\frac{1-\beta^{2s+2}}{1-\beta^{2}}.\] We have that \(0<\max\{\alpha,\beta\}<1\). So \[u(\epsilon)=\lim_{n\to\infty}u_{n}(\epsilon)=\frac{\alpha}{1-\alpha^{2}}-\frac{\beta^{2}}{1-\beta^{2}}.\] We have \(\limsup r_{n}\leq u(\epsilon)\). Taking \(\epsilon\to 0\) we obtain \[\limsup r_{n}\leq\frac{1/\xi}{1-(1/\xi)^{2}}-\frac{1/\xi^{2}}{1-(1/\xi)^{2}}=\frac{1}{\xi+1}.\] We now compute \(\liminf r_{n}\). We note as before \[r_{n}=t_{n}+\sum_{n-1-j\geq i_{0},\,j\text{ even}}\frac{\theta_{n-1-j}}{\theta_{n}}\ -\ \sum_{n-1-j\geq i_{0},\,j\text{ odd}}\frac{\theta_{n-1-j}}{\theta_{n}}\] \[\geq t_{n}+\beta(1+\beta^{2}+\cdots+\beta^{2s})-\alpha^{2}(1+\alpha^{2}+\cdots+\alpha^{2l}),\] where \(l,s\to\infty\) as \(n\to\infty\). So we have \(r_{n}\geq t_{n}+v_{n}(\epsilon)\) where \[v_{n}(\epsilon)=\beta\frac{1-\beta^{2s+2}}{1-\beta^{2}}-\alpha^{2}\frac{1-\alpha^{2l+2}}{1-\alpha^{2}}.\] We have that \(0<\max\{\alpha,\beta\}<1\). So \[v(\epsilon)=\lim_{n\to\infty}v_{n}(\epsilon)=\frac{\beta}{1-\beta^{2}}-\frac{\alpha^{2}}{1-\alpha^{2}}.\] We have \(\liminf r_{n}\geq v(\epsilon)\). Taking \(\epsilon\to 0\) we obtain \[\liminf r_{n}\geq\frac{1/\xi}{1-(1/\xi)^{2}}-\frac{1/\xi^{2}}{1-(1/\xi)^{2}}=\frac{1}{\xi+1}.\] The result follows.
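As an illustrative sanity check (not part of the original argument), consider the model sequence \(\theta_{n}=\xi^{n}\) for \(n\geq c\), which satisfies the hypotheses of 11.1. Then for \(n>c\) the sum defining \(r_{n}\) is a finite geometric series, \[r_{n}=\sum_{j=0}^{n-1-c}(-1)^{j}\xi^{-1-j}=\frac{1}{\xi}\cdot\frac{1-(-1/\xi)^{n-c}}{1+1/\xi}\xrightarrow[n\to\infty]{}\frac{1}{\xi+1},\] in agreement with Lemma 11.2.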
2302.13694
DLOFTBs -- Fast Tracking of Deformable Linear Objects with B-splines
While manipulating rigid objects is an extensively explored research topic, deformable linear object (DLO) manipulation seems significantly underdeveloped. A potential reason for this is the inherent difficulty in describing and observing the state of the DLO as its geometry changes during manipulation. This paper proposes an algorithm for fast-tracking the shape of a DLO based on the masked image. Having no prior knowledge about the tracked object, the proposed method finds a reliable representation of the shape of the tracked object within tens of milliseconds. This algorithm's main idea is to first skeletonize the DLO mask image, walk through the parts of the DLO skeleton, arrange the segments into an ordered path, and finally fit a B-spline into it. Experiments show that our solution outperforms the State-of-the-Art approaches in DLO's shape reconstruction accuracy and algorithm running time and can handle challenging scenarios such as severe occlusions, self-intersections, and multiple DLOs in a single image.
Piotr Kicki, Amadeusz Szymko, Krzysztof Walas
2023-02-27T11:54:04Z
http://arxiv.org/abs/2302.13694v2
# DLOFTBs - Fast Tracking of Deformable Linear Objects with B-splines ###### Abstract While the manipulation of rigid objects is an extensively explored research topic, deformable linear object (DLO) manipulation seems significantly underdeveloped. A potential reason for this is the inherent difficulty in describing and observing the state of the DLO as its geometry changes during manipulation. This paper proposes an algorithm for fast-tracking the shape of a DLO based on the masked image. Having no prior knowledge about the tracked object, the proposed method finds a reliable representation of the shape of the tracked object within tens of milliseconds. This algorithm's main idea is to first skeletonize the DLO mask image, walk through the parts of the DLO skeleton, arrange the segments into an ordered path, and finally fit a B-spline into it. Experiments show that our solution outperforms the State-of-the-Art approaches in DLO's shape reconstruction accuracy and algorithm running time and can handle challenging scenarios such as severe occlusions, self-intersections, and multiple DLOs in a single image. ## I Introduction Deformable Linear Objects (DLOs) are a class of objects that are characterized by two main features: deformability, which refers to the fact that the object is not a rigid body and its geometry can change, and linearity, which stands for the fact that the object is elongated and the ratio of its length to its width is substantial [1]. Objects of this type are ubiquitous both in everyday life and in industry, where one can find ropes, cables, pipes, sutures, etc. While the manipulation of rigid bodies is already solved for a wide range of objects [2], manipulating DLOs is still unsolved even for everyday objects such as cables and hoses. Due to the ubiquity of the DLOs, manipulating them poses a complex and vital challenge, which has been in the scope of researchers for over three decades [3]. The interest in this topic has grown over the last few years, as the automatic wiring harness assembly is crucial for car manufacturers [4], as well as automatic completion of surgical sutures, which could help surgeons [5]. To perform such tasks autonomously, robotic manipulators need to perceive the configuration of the manipulated object, as this is crucial for calculating the adequate control signal. To do so, accurate and real-time DLO tracking is necessary. However, state-of-the-art DLO tracking algorithms are relatively slow and do not meet the real-time requirements of the control systems. Additionally, they cannot correctly handle manipulation sequences, which contain occlusions [6] and self-intersections [7], or make many assumptions about the model of the tracked object [8, 9]. This paper proposes a fast, non-iterative method for estimating a DLO's shape using a walk through object's mask and B-spline regression. The proposed algorithm takes as an input the mask of the DLO and returns a sequence of control points of the B-spline curve that approximates the shape of the tracked DLO. Our solution can deterministically identify the shape of a DLO on the HD image within \(40\,\mathrm{ms}\) while handling non-trivial scenarios, like occlusions, self-intersections, and multiple DLOs in the scene. The general scheme of the proposed approach is presented in Figure 1. 
The main contribution of this work is twofold: * a novel deterministic real-time DLO tracking algorithm, which can handle occlusions, self-intersections, and multiple DLOs in the scene, while requiring no prior knowledge about the tracked DLO and is faster and more accurate than State-of-the-Art solutions for assumed quality of the output shape, * dataset of real and artificial 2D and 3D videos and images of several different DLOs, on which we performed a verification of the proposed method and which we share with the community for objective performance evaluation and to encourage the development of real-time DLO tracking 1. Footnote 1: [https://github.com/PPI-PUT/cable_observer/tree/master](https://github.com/PPI-PUT/cable_observer/tree/master) ## II Related Work ### _DLO representation_ In the literature, there are several ways to represent the geometric shape of the DLO. The most straightforward one is to represent it as a sequence of points [10, 11]. However, more complex models are usually necessary for accurate cable modeling and tracking, like a B-spline model with multiple chained random matrices, proposed in [6]. A similar approach, but using Bezier curves and rectangle chains, was proposed in [12], while in [13] NURBS curves were used. Fig. 1: Using the proposed DLO tracking algorithm, one can transform the DLO mask into a low-dimensional B-spline representation in real-time. In our research, we use a B-spline representation (similar to the one used in [12, 13]) as it is flexible and enables one to accurately track the shape of a generic DLO while being compact, relatively easy, and cheap to work with. Using this representation, one can build more complex models, which consider the kinematics and dynamics of the DLO [14, 15]. ### _DLO tracking_ DLO tracking requires transforming the data gathered with sensors into the chosen representation. While there are attempts to use data from tactile sensors [16], the most successful way to perceive the DLO shape is to use vision and depth sensors. One of the most straightforward approaches to DLO shape tracking is to use the fiducial markers located along the DLO, and track them [11] or use them to estimate the shape of a DLO [17]. A similar approach was presented in [18], where colors denote consecutive rope segments. The most common approach is to create a model of the DLO and use images or point clouds as measurements to modify its parameters and track the object deformation iteratively. One of the examples of this approach is the modified expectation-maximization algorithm (EM), proposed in [19], which is used to update the predefined DLO model based on the registered deformations and simulation in the physics engine. Similarly, in [8], the FEM methods were used to track the deformation of the predefined model. Whereas, in [9], a Structure Preserved Registration algorithm with the object represented as a Mixture of Gaussians was used. Authors of [12] performed DLO tracking using Recursive Bayesian Estimator on Spatial Distribution Model, built with the Bezier curve and the chain of rectangles. Due to the iterative and often probabilistic character of the model updates, these methods usually have problems tracking rapidly deforming objects and require an appropriate model and accurate initialization. To mitigate the slow initialization problem, authors of [6] used the Euclidean minimum spanning tree and the Breadth-first search method to speed up initialization. 
However, it still takes hundreds of milliseconds to obtain the DLO shape estimate. A much faster EM-based tracking approach, which utilizes a coherent point drift method extended with some geometric-based regularization and physically and geometrically inspired constraints, was presented in [7]. The instance segmentation method for multiple DLOs, which also can serve for tracking, was initially proposed in [20] and extended using Deep Learning solutions in [21] and [22]. The solution presented in this paper utilizes a similar idea to the one presented in [21, 22]. However, using skeletons instead of the super-pixel graphs reduces the computational complexity [21], and the lack of Deep Learning in our solution facilitates better generalization without sacrificing performance [21, 22]. In our work, we do not try to model the DLO, but instead, quickly provide a compact representation of the DLO state, that is consistent and can be tracked between frames. Thus, the proposed method can aid the existing model-based methods with accurate and real-time structured measurements of the system state. ## III DLO Tracking ### _Problem Formulation_ The problem considered in this paper is to track the DLO on the video sequence. By tracking, we understand transforming consecutive video frames of the DLO's binary mask, obtained from the selected segmentation algorithm, into a 1D curve resembling the object's shape, which representation should be consistent between frames. In this paper, we will not consider the image segmentation problem similarly to [6, 7, 19]. But instead, we will focus on shape tracking only, with the assumption that for homogeneously colored cables, the mask is given by any color-based segmentation algorithm, or in more challenging scenarios, state-of-the-art deep learning method [21] is used. ### _Proposed Method_ In this section, we introduce our proposed novel approach to fast tracking of DLO, called DLOFTBs, which by using the walks on the DLO mask's skeleton, enables rapid fitting of the B-spline curve into the masked image of the DLO. The general scheme of the proposed algorithm is presented in Figure 2. To transform the mask image into a B-spline curve 4 main processing steps are made: (i) morphological open & skeletonization, (ii) walk along the skeleton segments, (iii) filtering and ordering of segments, and finally, (iv) B-spline fitting. In the following subsections, we will describe each of these steps in detail. #### Iii-B1 Morphological open & skeletonization The first operation we perform on the mask of the DLO is a \(3\times 3\) (the smallest possible) morphological open. We used it to remove some false positive pixels, which are common because of imperfect segmentation. After that, one of the most essential steps in mask processing is performed - skeletonization [23]. This operation takes the mask image as an input and creates its skeleton, i.e., a thin version of the mask, which lies in the geometric centers of the DLO segments, preserves its topology, and reduces its width to a single pixel. This significantly reduces the amount of information about the pixels representing the DLO while preserving the crucial information encoded in central pixels along the DLO. Moreover, using the skeleton, one can easily find the crucial parts of the DLO mask, such as segment endpoints - pixels with only one neighbor or branching - pixels with more than two neighbors. 
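To make this preprocessing step concrete, the following minimal sketch illustrates how a binary DLO mask can be opened, skeletonized, and scanned for endpoint and branching pixels. It is our own illustration, not the authors' released implementation; the function name, the array conventions, and the choice of OpenCV, scikit-image, and SciPy are assumptions.

```python
import numpy as np
import cv2
from skimage.morphology import skeletonize
from scipy.ndimage import convolve

def skeleton_keypoints(mask):
    """Open a binary DLO mask, skeletonize it, and find endpoint/branching pixels."""
    # 3x3 morphological opening removes isolated false-positive pixels
    opened = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN,
                              np.ones((3, 3), np.uint8))
    # reduce the mask to a one-pixel-wide skeleton that preserves its topology
    skel = skeletonize(opened > 0)
    # count the 8-connected neighbours of every skeleton pixel
    neighbours = convolve(skel.astype(np.uint8), np.ones((3, 3), np.uint8),
                          mode="constant", cval=0) - skel.astype(np.uint8)
    endpoints = np.argwhere(skel & (neighbours == 1))    # one neighbour: segment endpoint
    branchings = np.argwhere(skel & (neighbours >= 3))   # >2 neighbours: branching pixel
    return skel, endpoints, branchings
```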
While the segment endpoints will constitute starting points for the walks on segments, the branching points are crucial while performing a walk, as they require the walker to choose one of several possible paths. To avoid this inconvenience, we propose removing branching points and postponing the decision to make connections between segments for further processing. By doing so, we can simplify the segment walk algorithm considerably. #### Iii-B2 Walk algorithm Having the skeleton prepared and segment endpoints determined, we can perform a walk along each segment. To do so, we start from a random segment endpoint and go pixel by pixel till the end of the segment, collecting the subsequent pixel coordinates. Such traversing is always possible and unambiguous, as we removed all pixels with more than two neighbors in the previous step. After each walk, we remove the two points that were the beginning and end of the considered segment from the set of endpoints. Next, we draw another segment endpoint and perform a walk, and this repeats until the set of segment endpoints is exhausted. As a result, we obtain a set of paths, i.e., ordered lists of pixels representing all segments. #### Ii-B3 Filtering and ordering of segments In this step, we first filter out segments shorter than \(p\) pixels, which are likely to represent artifacts of the mask or of the skeletonization procedure. While this approach may also result in a situation where a short part of the actual DLO is removed, it will not affect the resultant path significantly, as it will be treated as an occlusion and handled by our algorithm at the next stage. In order to fit a B-spline effectively into a set of segments, we need to order them. As a result of the previous processing step, we have an unordered set of ordered lists of pixels. To order them, we need to find the pairs of segment endpoints that are most likely to connect to each other. While there are many possible criteria and algorithms for deciding about connections, we decided to use a criterion that takes into account both the distance and the orientation of the endpoints and is defined by \[J=mJ_{d}+(1-m)J_{o}, \tag{1}\] where \(J_{d}\) is the Euclidean distance between segment endpoints, \(J_{o}\) is a criterion related to the mutual orientation of the segment endpoints, and \(m\in[0;1]\) is a linear mixing factor. While the definition of \(J_{d}\) is rather straightforward, the exact formula of \(J_{o}\) is given by \[J_{o}=|\pi-\phi_{1}-\phi_{2}|, \tag{2}\] where \(\phi_{1},\phi_{2}\) are the approximated orientations of the segment endpoints. Using the criterion \(J\) (1), one can decide on the pairs of segment endpoints. The most accurate solution would be to check all possible pairing schemes and find the one with the lowest \(J\). However, it is also the most computationally expensive one, as it requires checking \((2s-1)(2s-3)\ldots 1\) pairings, where \(s\) is the number of segments. To limit the computational burden, we decided to use a potentially less accurate but much faster approach - a greedy one. Thus, we need to choose only \(s-1\) connections out of \(s(2s-1)\) pairs of endpoints, considering the already taken endpoints. Interestingly, our algorithm can be easily modified to also track multiple cables. In the single-cable case, we try to make all possible connections between segment endpoints, except the last one, in the order determined by the criterion \(J\).
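The criterion of Eqs. (1)-(2) and the greedy pairing described above can be sketched as follows. This is our own illustrative sketch, assuming each endpoint is represented by its segment id, image coordinates, and an approximate orientation; all names and the data layout are assumptions, not the authors' code.

```python
import numpy as np

def connection_cost(p1, phi1, p2, phi2, m=0.05):
    """J = m*J_d + (1-m)*J_o (Eqs. 1-2) for a pair of segment endpoints."""
    j_d = float(np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)))
    j_o = abs(np.pi - phi1 - phi2)      # endpoints facing each other give a small J_o
    return m * j_d + (1.0 - m) * j_o

def greedy_pairing(endpoints, n_segments, j_th=None, m=0.05):
    """Greedily connect endpoints in order of increasing cost J.

    endpoints  -- list of (segment_id, (x, y), orientation) tuples
    j_th       -- optional threshold that enables the multi-DLO variant
    """
    candidates = []
    for i in range(len(endpoints)):
        for j in range(i + 1, len(endpoints)):
            si, pi, oi = endpoints[i]
            sj, pj, oj = endpoints[j]
            if si == sj:
                continue                 # never close a segment onto itself
            candidates.append((connection_cost(pi, oi, pj, oj, m), i, j))
    candidates.sort()
    used, connections = set(), []
    for cost, i, j in candidates:
        if len(connections) == n_segments - 1:
            break                        # single-DLO case: s-1 connections suffice
        if j_th is not None and cost > j_th:
            break                        # multi-DLO case: stop at the threshold
        if i in used or j in used:
            continue                     # each endpoint is used at most once
        used.update((i, j))
        connections.append((i, j))
    return connections
```

Note that this sketch omits the check that a connection does not close a cycle among already-joined segments, which a full implementation would need.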
To make it work for multiple cables, we have to be more conservative and make the connections only if the criterion value is smaller than some user-defined threshold \(J_{th}\). Thus, we perform the connections in the greedy order as in the single-cable case, but if we reach the limit \(J_{th}\), the algorithm stops. As a result, we obtain several segment sequences instead of one, as for the single-cable case. Finally, the B-spline fitting phase (described in the next point) is performed multiple times, separately for all resultant segment sequences, resulting in multiple B-splines that represent the shapes of the cables in the image. #### Ii-B4 B-spline fitting To fit the B-spline to the sequence of segments, we need an argument for the B-spline, i.e., the vector \(t\) of the relative positions of the pixels on the curve we want to define. To do so, we calculate the distances along the segments, as well as the Euclidean distances between segments, and concatenate them into a single vector, the cumulative sum of which serves as the B-spline argument \(t\). Using the Euclidean distance between segments, we introduce an estimate of the distance along the DLO (we do not have access to the true one, as parts of the DLO are occluded). This procedure prevents sudden changes in the pixels' positions in terms of the B-spline argument \(t\). Moreover, we need to define the number and positions of knots. In the proposed solution, we defined the knots as a sequence of \(k\) elements of the vector \(t\), equidistant in terms of element number, as this ensures that the Schoenberg-Whitney conditions [24] are met. Finally, one can fit two B-splines, one for each axis, using the prepared argument vector \(t\), the knots, and the \(x\) and \(y\) coordinates of the ordered pixels. We used cubic splines, as higher continuity is unnecessary for the considered problem. #### Ii-B5 3D data Even though the proposed DLO tracking algorithm is meant to work on images, it can be easily extended to work with 3D data obtained from an RGBD sensor. In this case, we deal with the mask in the same way as for the 2D case until the moment of the B-spline fitting. Given a sequence of segments in the 2D space, we augment it with the corresponding depth coordinates and then perform the B-spline fitting. Thus, we obtain three B-splines, each representing the value of a different coordinate (\(x\), \(y\), \(z\)) with respect to the curve length estimate. Fig. 2: The general scheme of DLOFTBs. The mask image is transformed through several processing stages to obtain a B-spline representation of its shape. ## IV Experiments To perform all experiments, we used a single core of the Intel Core i7-9750H CPU and the following, heuristically chosen, set of parameters of our algorithm: \(m=0.05\), \(p=10\), and \(k=25\), which are the mixing factor of the segment connection criterion, the segment length threshold, and the number of knots. ### _Datasets_ To evaluate the proposed cable tracking method (DLOFTBs), we conducted several experiments, which show the performance of the proposed algorithm on 4 datasets: #### Iv-A1 RGB real 7 sequences of RGB images (\(\approx 900\) frames in total), collected with the Intel RealSense D435 camera, of a single cable being manipulated by two UR3 manipulators. #### Iv-A2 RGBD real 10 sequences of RGBD images (\(\approx 2500\) frames in total), collected with the Kinect Azure, of a single cable being manipulated by a human.
#### Iv-A3 RGBD artificial 5 sequences of the artificially created RGBD images (\(\approx 1400\) frames in total), generated from a reference curve evolving in time. This dataset allows us to compare directly to the reference curve instead to mask. #### Iv-A4 Ariadne+ The test set taken from [21], which consists of 62 images of multiple cables. We enriched this dataset with manual annotations of the cable shapes to facilitate direct comparison between curve shapes. ### _Performance criteria_ Assessing the quality of the DLO shape tracking is not a trivial task [25], especially when the only ground truth data available is the mask of the DLO (datasets 1 and 2). For dataset 1 and dataset 2 we use two Mean Minimal Distance (MMD) criteria, which build upon the ideas of Modified Hausdorff Distance [26] and are defined by \[\mathcal{L}_{1}=\mathrm{MMD}(\mathcal{M},\mathcal{C}_{d})\quad\text{and}\quad \mathcal{L}_{2}=\mathrm{MMD}(\mathcal{C}_{d},\mathcal{M}), \tag{3}\] where \[\mathrm{MMD}(X,Y)=\frac{1}{|X|}\sum_{x\in X}\min_{y\in Y}d(x,y), \tag{4}\] where \(d(x,y)\) is a Euclidean distance between \(x\) and \(y\), \(\mathcal{M}\) is a set of pixels belonging to a mask, while \(\mathcal{C}_{d}\) is a set of points on the predicted curve \(\mathcal{C}\). In turn, for dataset 3 and dataset 4 we have access to the mathematical curve representing the reference shape \(\mathcal{C}_{r}\). Therefore, we can formulate a much more accurate measure of the performance, which builds upon the Frechet distance [27], and is defined by \[\mathcal{L}_{3}(\mathcal{C}_{d},\mathcal{C}_{r_{d}})=\frac{F(\mathcal{C}_{d}, \mathcal{C}_{r_{d}})+F(\mathcal{C}_{r_{d}},\mathcal{C}_{d})}{2}, \tag{5}\] where \(\mathcal{C}_{r_{d}}\) is a discretized version of the reference path, and where \[F(X,Y)=\frac{1}{|X|}\sum_{i=0}^{|X|-1}\min_{w\in[0;1]}d(X(i),(1-w )Y(k(i))\\ +wY(k(i)+1)), \tag{6}\] where \(k(i)\) satisfies \(D_{Y}(k(i))\leq D_{X}(i)\leq D_{Y}(k(i)+1)\) and is monotonically non-decreasing, where \(D_{X}(i)\) is a normalized distance along \(X\) curve at \(i\)-th discretization point. This function allows for fair alignment of curves despite possible differences in parameterization. ### _2D videos of a single cable_ In the first stage of the experimental evaluation, we evaluated DLOFTBs on _RGB real_ dataset (see Section IV-A1), with masks generated using hue-based segmentation, and compared it with Ariadne+ [21] and FastDLO [22] learned approaches. We used cables with different widths and lengths, and tested all algorithms on challenging setups, shown in Table I, including self-intersection (_scenario 6_) and occlusions (_scenarios 2-6_). Obtained results show that the proposed algorithm achieves the most stable behavior and outperforms all baselines in terms of criterion \(\mathcal{L}_{1}\) and processing time, and achieves similar results in terms of criterion \(\mathcal{L}_{2}\). Huge values of \(\mathcal{L}_{1}\) for the baselines indicate, that, unlike DLOFTBs, they are unable to cover the whole cable with the predicted spline (extreme case is Ariadne+ which was unable to generate any curve for _scenario 6_). Whereas, small values of \(\mathcal{L}_{2}\) for almost all cases, ensures that predicted splines do not cover empty areas. Our method achieves a relatively big value of \(\mathcal{L}_{2}\) only for _scenario 3_, in which large parts of the cable are outside the camera's field of view, therefore, even reasonable and plausible curves generated by the proposed method results in the growth of \(\mathcal{L}_{2}\). 
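For reference, the MMD criteria of Eqs. (3)-(4) used above can be computed with a few lines of NumPy. The sketch below is our own illustration with assumed variable names (for full-resolution masks a KD-tree would be preferable to the dense distance matrix).

```python
import numpy as np

def mmd(x_points, y_points):
    """MMD(X, Y) of Eq. (4): mean over X of the distance to the nearest point of Y."""
    x = np.asarray(x_points, dtype=float)[:, None, :]   # (|X|, 1, dim)
    y = np.asarray(y_points, dtype=float)[None, :, :]   # (1, |Y|, dim)
    dists = np.linalg.norm(x - y, axis=-1)              # all pairwise Euclidean distances
    return float(dists.min(axis=1).mean())

# L1 penalises mask pixels far from the predicted curve,
# L2 penalises curve samples far from the mask:
#   l1 = mmd(mask_pixels, curve_points)
#   l2 = mmd(curve_points, mask_pixels)
```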
The behavior of the algorithms for some sample challenging frames is presented in Figure 3. Even though both baselines were provided with a very clean mask of the tracked cable, they were unable to handle occlusions and self-intersections, whereas DLOFTBs handled them perfectly. Fig. 3: Comparison of 2D DLO tracking algorithms. Only our approach was able to handle occlusions and self-intersections of the tracked cable. ### _2D masks of multiple cables_ In this experiment, we evaluated the ability of DLOFTBs to identify multiple cables at once on the masked image and compared it directly with the Ariadne+ and FastDLO algorithms [21, 22] on the augmented version of the Ariadne+ test set (Section IV-A4), segmented using a DeepLabV3+ network for all algorithms. The result of this comparison can be found in Table III. We outperformed Ariadne+ and FastDLO in terms of algorithm execution time and the accuracy of the DLO shape reconstruction, and scored second in the number of wrongly identified DLOs. The relatively high number of redundant curves fitted by DLOFTBs is a result of the extremely noisy masks generated by DeepLabV3+ (see 3rd column of Figure 4). In Figure 4 we present a qualitative analysis of the algorithms' behavior on 3 challenging images. Ariadne+ has severe problems with handling complex backgrounds and bends at the intersections of cables, while FastDLO cannot solve the intersection in the 2nd image properly and produces a wavy shape for the left cable in the 1st image. In turn, DLOFTBs generates the most accurate solutions for the first two images; however, if the mask is very noisy (3rd image), it fits curves into linear false-positive regions of the mask. Fig. 4: Comparison of multiple-DLO tracking methods on the Ariadne+ dataset. ### _3D video sequences_ To accurately compare the proposed approach with another State-of-the-Art method, we made our algorithm work with 3D data, for which the State-of-the-Art CDCPD2 algorithm [7] was designed. #### Iv-E1 Real data In our experiments, to accurately compare the precision of shape tracking, we adjusted the video frame rate to enable each algorithm to process it at its own pace and reported the times needed to process a single frame. Furthermore, because the performance of the CDCPD2 method is strongly related to the cable length estimate, we tested it for several different lengths for each scenario and reported only the best result, while our algorithm required no parameter tuning. In Table II we present mean values of the \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) errors and mean algorithm running times for 10 scenarios of a cable being manipulated by a human, and sample frames for each scenario from the _RGBD real_ dataset (see Section IV-A2). DLOFTBs achieves lower errors than CDCPD2 for all considered scenarios and criteria, and its running times are about 3 times shorter. While for many scenarios the values of the criteria are rather comparable, there are some cases where the proposed approach outperforms CDCPD2 by a large margin (scenarios 4, 8, 9). In these cases, the CDCPD2 algorithm lost track of the cable shape due to the fast movements of the cable (scenarios 8 and 9) or the complexity of the initial shape (scenario 4). In Figure 5 we present a sample tracking sequence in which our algorithm is able to keep track of the cable movements and deformations, while for CDCPD2 the changes are too large to follow.
Moreover, for the last images in the sequence, CDCPD2 produces a wavy shape, which does not reflect the actual cable shape but does not increase the performance error measures significantly. #### Iv-B2 Artificial data To expose the aforementioned types of errors and accurately measure the quality of tracking, we need to utilize the \(\mathcal{L}_{3}\) criterion.To do so, we used _RGBD artificial_ dataset (see Section IV-A.3), which also includes challenging cases like high cable curvature (_scenarios 0, 1_), self-intersections (_scenarios 1, 2_) and rapid cable moves (_scenarios 3, 4_). The results of this comparison are presented in Table IV. Also, in this experiment, our proposed approach outperforms the CDCPD2 algorithm. However, the use of the more accurate criterion emphasized the differences between compared methods. DLOFTB achieves mean \(\mathcal{L}_{3}\) values that are from 6 to 20 times smaller than those achieved by CDCPD2. Minimal mean values of \(\mathcal{L}_{3}\) show that our approach is, on average, more accurate than the best predictions made by the CDCPD2 in 4 out of 5 scenarios. Moreover, maximal mean values show that throughout 3 out of 5 scenarios, DLOFTBs does not produce any significantly wrong measurements (max mean \(\mathcal{L}_{3}>10\)), while CDCPD2 does so for all scenarios. In Figure 6 we present a part of the _scenario 2_ in which cable was recovering from the self-intersection. Our algorithm was able to accurately track the cable throughout the whole process. In contrast, the CDCPD2 crushed when the cable was occluding itself a moment before the untangling and lost track for the rest of the sequence. ## V Conclusions This paper proposes a novel approach to DLO tracking on 2D and 3D images and videos called DLOFTBs. Using a segmented mask of the cable, we can precisely fit a B-spline representation of its shape within tens of milliseconds. The experimental analysis showed that DLOFTB is accurate and can handle tedious cases like occlusions, self-intersections, or even multiple DLOs at one time. Moreover, it outperforms the State-of-the-Art DLO tracking algorithms CDCPD2 [7], Ariadne+ [21], and FastDLO [22] in all considered scenarios both in terms of the quality of tracking, identification of multiple cables and algorithm runtime. Moreover, the proposed solution does not require any training, thus does not depend on the training data, and, unlike the CDCPD2, does not need any prior information about the DLO. Our method was extensively tested against algorithmic and learned methods. The weakness of the approach with learning is that it has substantial problems with generalization and is not working with the data outside of the distribution present in the training set. Our approach is not suffering from this issue, so it is better suitable for robotics. We claim that there is still some space for non-deep-learning approaches, which are better in generalization and are fully explainable. Fig. 5: DLOFTBs (green) and CDCPD2 algorithm (red) on real-data 3D cable tracking sequence. With the fast movement of the cable and insufficiently large cable model updates, CDCPD2 could not track the moving cable. In contrast, our approach accurately tracks the cable shape throughout the sequence. Fig. 6: DLOFTBs (green) and CDCPD2 algorithm (red) on artificially generated data of 3D cable tracking sequence. In the complex case of deforming the cable from the self-intersection into the basic shape, our approach accurately tracks the deformation throughout the whole sequence. 
In contrast, the CDCPD2 started with accurate tracking but degraded significantly after the last moment when the cable was self-intersecting.
2305.18766
HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance
The advancements in automatic text-to-3D generation have been remarkable. Most existing methods use pre-trained text-to-image diffusion models to optimize 3D representations like Neural Radiance Fields (NeRFs) via latent-space denoising score matching. Yet, these methods often result in artifacts and inconsistencies across different views due to their suboptimal optimization approaches and limited understanding of 3D geometry. Moreover, the inherent constraints of NeRFs in rendering crisp geometry and stable textures usually lead to a two-stage optimization to attain high-resolution details. This work proposes holistic sampling and smoothing approaches to achieve high-quality text-to-3D generation, all in a single-stage optimization. We compute denoising scores in the text-to-image diffusion model's latent and image spaces. Instead of randomly sampling timesteps (also referred to as noise levels in denoising score matching), we introduce a novel timestep annealing approach that progressively reduces the sampled timestep throughout optimization. To generate high-quality renderings in a single-stage optimization, we propose regularization for the variance of z-coordinates along NeRF rays. To address texture flickering issues in NeRFs, we introduce a kernel smoothing technique that refines importance sampling weights coarse-to-fine, ensuring accurate and thorough sampling in high-density regions. Extensive experiments demonstrate the superiority of our method over previous approaches, enabling the generation of highly detailed and view-consistent 3D assets through a single-stage training process.
Junzhe Zhu, Peiye Zhuang, Sanmi Koyejo
2023-05-30T05:56:58Z
http://arxiv.org/abs/2305.18766v4
# HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance ###### Abstract Automatic text-to-3D synthesis has achieved remarkable advancements through the optimization of 3D models. Existing methods commonly rely on pre-trained text-to-image generative models, such as diffusion models, providing scores for 2D renderings of Neural Radiance Fields (NeRFs) and being utilized for optimizing NeRFs. However, these methods often encounter artifacts and inconsistencies across multiple views due to their limited understanding of 3D geometry. To address these limitations, we propose a reformulation of the optimization loss using the diffusion prior. Furthermore, we introduce a novel training approach that unlocks the potential of the diffusion prior. To improve 3D geometry representation, we apply auxiliary depth supervision for NeRF-rendered images and regularize the density field of NeRFs. Extensive experiments demonstrate the superiority of our method over prior works, re Figure 2: We show our results of Text-to-3D synthesis rendered from 3 views. The text prompts, arranged from left to right and top to bottom, include: (1) a beautifully carved wooden queen chess piece; (2) A beautiful dress made out of garbage bags, on a mannequin. Studio lighting, high quality, high resolution; (3) a silver candelabra sitting on a red velvet tablecloth, only one candle is lit; (4) small saguaro cactus planted in a clay pot; (5) a high pile of chocolate chip cookies; (6) a kingfisher bird; (7) sorting hat from harry potter; (8) DSLR photo of a beautiful peacock with long neck. Figure 4: We show our rendered images _(column 2 and 4)_ of 3D assets with their corresponding meshes (extracted using marching cubes from NeRF density fields) on their left _(column 1 and 3)_. The text prompts, arranged from left to right and top to bottom, include: (1) A baby bunny sitting on top of a stack of pancakes; (2) DSLR photo of a beautiful peacock with long neck; (3) a beautifully carved wooden queen chess piece; (4) game asset of a leather shoe made from dragon skin; (5) a kingfisher bird (6) head of thanos; (7) a high pile of chocolate chip cookies; (8) small saguaro cactus planted in a clay pot. Figure 3: We show our results of Text-to-3D synthesis rendered from 3 views. The text prompts, arranged from left to right, include: (1) A DSLR photo of chulhu; (2) an overstuffed pastrami sandwich. sulting in advanced photo-realism and improved multi-view consistency._ ## 1 Introduction An automatic text-to-3D synthesis task aims to generate 3D assets from a text description. It has gained significant attention due to its applications in digital content generation, film-making, and Virtual Reality (VR) [1, 8]. However, text-to-3D synthesis is challenging due to the scarcity of large-scale annotated 3D datasets and the need for efficient 3D generative models. Recent advancements have emerged to address these challenges by leveraging pre-trained 2D models as priors for optimizing a 3D representation model [1, 5, 6, 8, 16, 28, 29, 32]. Commonly, pre-trained CLIP [17] or 2D diffusion generative models on images [22, 23] are applied to optimize Neural radiance fields (NeRFs) [11] or meshes [8]. Concretely, in prior work [16], a diffusion model produces a denoising score, specifically the noise residual, for a noisy NeRF-rendered image. Subsequently, this denoising score is utilized in back-propagation to optimize NeRFs. 
Since the denoising score is obtained through the diffusion model, the backward gradients inevitably incorporate a Jacobian term associated with the diffusion model. To circumvent the back-propagation through the diffusion model, existing approaches either directly exclude the Jacobian term associated with the diffusion model [8, 16] or utilize Monte-Carlo estimation to compute the expectation of scores from multiple noisy NeRF-rendered images [29]. While these methods manage to produce plausible results, we have observed that they are susceptible to artifacts and exhibit issues of multi-view inconsistency in rendered images [8, 16, 28, 29]. We suspect that these methods have not fully unlocked the potential of 2D diffusion prior. In particular, we observed that prior work [16, 29] randomly sampling a noise level in the diffusion process for optimizing NeRFs is suboptimal. It leads to out-of-distribution (OOD) issues with input of the diffusion model during initial training iterations and divergence issues with output of the diffusion model during end training iterations. Our work focuses on enhancing _photo-realism_ and _multi-view consistency_ of text-to-3D synthesis. We adopt a similar setup as prior work [8, 16, 28, 29], employing a pre-trained 2D diffusion model into a 3D generative model of radiance fields. To address the aforementioned issues, we introduce a new optimization approach that utilizes the reformulated diffusion guidance. Concretely, instead of using the noise residual of the diffusion model as the score, we re-derive the score as the image residual (or latent vector residual when using a latent diffusion model [22]). Moreover, rather than randomly sampling a time step (referred to as a noise level) of the diffusion process, we propose a time annealing approach that gradually decreases the time steps during training iterations. In addition, we propose auxiliary supervision on rendered images (and/or latent vectors). To improve the representation of 3D geometry, we incorporate depth supervision and apply regularization techniques to refine the volume density fields of a NeRF. We summarize our technical contributions as follows: * We propose a new optimizing approach by reformulating the denoising score for rendered images, accompanied by auxiliary supervision on rendered images (and/or latent vectors). * We introduce a new time step annealing approach that progressively decreases the time steps during training iterations. * We incorporate depth supervision and regularization of volume density fields of a NeRF. Please visit our website for additional results. ## 2 Related Work **NeRFs**[11] are a family of implicit neural representation methods designed to encode the shape and/or appearance of 3D objects using a parameterized neural network. NeRFs enable impressive novel-view synthesis via volume rendering. In contrast to voxel-based [31, 21, 13] and mesh-based methods [15, 30], NeRFs represent 3D objects in continuous space and free from the constraints of a specific object topology. In our work, we employ NeRFs for 3D asset representation. **Diffusion models**[3], belonging to the broad family of denoising score-matching generative models [25], have achieved remarkable success in a variety of applications such as image editing [10], text-to-image synthesis [18, 19, 22, 23], text-to-video synthesis [2], text-to-3D synthesis [1, 5, 6, 8, 16, 28], and continuous field synthesis [33]. 
Moreover, to address the challenge of slow sampling speed resulting from the iterative generation process of diffusion models, researchers have proposed efficient sampling, classified into learning-free [9, 26] and learning-based [24, 27] methods. In our work, we utilize a pre-trained StableDiffusion model [22] to incorporate image priors for NeRF optimization. **Text-to-3D synthesis** approaches commonly employ image priors to optimize 3D models. These approaches can be roughly categorized into two groups: (i) CLIP-guided text-to-3D approaches [5, 6] that utilize pre-trained cross-modal matching models, CLIP [17], and (ii) diffusion-guided text-to-3D approaches [1, 8, 16, 28] that utilize 2D diffusion models such as Imagen [23] and StableDiffusion [22]. We follow the diffusion-guided methods due to their superior performance in text-to-3D synthesis. Specifically, Dreamfusion [16] first introduced a Score Distillation Sampling (SDS) loss derived from the distillation of Imagen [23]. SDS minimizes the Kullback-Leibler (KL) divergence between a Gaussian noise distribution and the estimated noise distribution. SDS is widely applied in later text-to-3D synthesis work [1, 28, 8]. Score-Jacobian-Chaining [29] proposed Perturb-and-Average Scoring method to aggregate 2D image gradients of StableDiffusion [22] over multiple viewpoints into a 3D asset gradient. However, due to a lack of 3D structure understanding when lifting 2D diffusion models to 3D, these text-to-3D synthesis methods often result in difficulties in achieving realistic 3D assets in terms of multi-view consistency and photo-realism. In this work, we aim to address these issues by introducing a new optimization approach and incorporating additional supervision on rendered images, depth, and volume densities of NeRFs. ## 3 Preliminary: Score Distillation Sampling Recent works on text-to-3D synthesis [29, 8, 16] have focused on optimizing a 3D model using 2D diffusion prior. In these works, a 3D model is parameterized by \(\theta\), denoted as \(\mathbf{x}=g(\theta)\), which allows for the synthesis of an image \(\mathbf{x}\) based on a given camera pose. The 2D diffusion model contains a denoising autoencoder \(\epsilon_{\phi}\) which predicts the sampled noise \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), denoted as \(\mathbf{\hat{\epsilon}}_{t}\), by employing the noisy image \(\mathbf{x}_{t}\), a time step \(t\), a text embedding \(\mathbf{y}\). Formally, we have \(\mathbf{\hat{\epsilon}}_{t}\coloneqq\epsilon_{\phi}(\mathbf{x}_{t};\mathbf{y};t)\). To simplify the notion, we will omit the sub-index \(t\) and instead use \(\mathbf{\hat{\epsilon}}\) to represent \(\mathbf{\hat{\epsilon}}_{t}\). The diffusion model provides gradients for updating the 3D model \(g(\theta)\): \[\nabla_{\theta}\mathcal{L}_{\text{diff}}(\phi,\mathbf{x}) \tag{1}\] \[= \nabla_{\theta}\mathbb{E}_{t,\mathbf{\epsilon}}[\omega(t)||\mathbf{\hat{ \epsilon}}-\mathbf{\epsilon}||^{2}]\] \[= \mathbb{E}_{t,\mathbf{\epsilon}}[\omega(t)(\mathbf{\hat{\epsilon}}-\mathbf{ \epsilon})\frac{\partial\mathbf{\hat{\epsilon}}}{\partial\mathbf{x}_{t}}\frac{ \partial\mathbf{x}_{t}}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial\theta}],\] where \(\omega(t)\) is a weighting function. Dreamfusion [16] omitted the U-Net Jacobian term \(\frac{\partial\mathbf{\hat{\epsilon}}}{\partial\mathbf{x}_{t}}\) and the scalar term \(\frac{\partial\mathbf{x}_{t}}{\partial\mathbf{x}}\) in Eq. 1. The new loss is called score distillation sampling (SDS). 
Consequently, the gradient of the SDS loss can be written as: \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,\mathbf{x})= \mathbb{E}_{t,\mathbf{\epsilon}}[\omega(t)(\mathbf{\hat{\epsilon}}-\mathbf{ \epsilon})\frac{\partial\mathbf{x}}{\partial\theta}]. \tag{2}\] ## 4 Method We aim to produce photo-realistic and multi-view consistent 3D models driven by text prompts. For this, we propose our method, as illustrated in Fig. 5, for which we provide an overview next. In our work, we employ a _latent_ diffusion model as a diffusion prior, specifically utilizing a publicly available StableDiffusion model (SD) [22]. In Sec. 4.1, we present Figure 5: Overview of our proposed method for text-to-3D synthesis. We aim to optimize a 3D model \(g(\theta)\) using a pre-trained 2D latent diffusion prior. To achieve this, we employ a latent diffusion model to provide gradients. Specifically, the diffusion model takes a rendered image \(\mathbf{x}\) as input and provides the estimate of the input rendered image, denoted as \(\mathbf{\hat{x}}\). Unlike existing works [16] that solely focus on computing noise residuals in the low-resolution latent space of the diffusion model, we propose a novel loss \(\mathcal{L}_{\text{BGT}}\), that computes a higher-resolution image residual. our formulation of the SDS loss where the U-Net Jacobian term _does not_ appear. Additionally, we introduce regularization techniques in both the latent and image space of SD, which seamlessly complement the use of our formulated SDS. Moreover, we analyze the training challenges encountered in previous works [16, 29] such as an out-of-distribution issue of diffusion input and a divergence issue of diffusion output. To address these issues, we propose a new training approach that incorporates annealed time step scheduling. Furthermore, in Sec. 4.2, we enhance the NeRF representation by introducing depth prior supervision and auxiliary regularization, specifically targeting the variance of sampled coordinates along rays (we name it _z-variance_) in NeRFs. ### Advancing training process with reformulated SDS We employ a pre-trained SD model [22] as 2D diffusion prior. Since the SD model performs the diffusion process in the _latent_ space, it results in changes to the SDS gradients in this scenario. Therefore, we first derive the original SDS gradient within the latent space of SD, followed by its reformulation. Specifically, an SD model consists of an encoder \(\mathcal{E}\), a decoder \(\mathcal{D}\), and a denoising autoencoder \(\epsilon_{\phi}\). The encoder \(\mathcal{E}\) compresses the input image \(\mathbf{x}\) into a latent vector \(\mathbf{z}\) with resolution \(64\times 64\), written as \(\mathbf{z}=\mathcal{E}(\mathbf{x})\) and the decoder \(\mathcal{D}\) reconstructs a latent vector back to an image \(\mathbf{x}=\mathcal{D}(\mathbf{z})\). An input of the autoencoder \(\epsilon_{\phi}\) is a noisy latent vector \(\mathbf{z}_{t}\) at noise level \(t\). The noisy latent vector \(\mathbf{z}_{t}\) is denoted as \[\mathbf{z}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{z}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}, \tag{3}\] where \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The weight is defined as \(\bar{\alpha}_{t}\coloneqq\prod_{s=1}^{t}\alpha_{s}\), where \(\alpha_{s}\) is a variance schedule for adding Gaus Figure 6: Qualitative text-to-3D synthesis comparison with Dreamfusion [16] and Magic3D [8]. The text prompt is displayed on top of the rendered images. 
Our proposed method provides significantly improved 3D texture quality and more realistic lighting. Figure 7: Qualitative comparison with Score-Jacobian-Chaining (SJC) [29]. Our method (_bottom_) achieves more appealing renderings. sian noise to the data during the diffusion process (refer to [3] for details). The denoising autoencoder \(\epsilon_{\phi}\) predicts the original noise at noise level \(t\), denoted as \(\hat{\mathbf{\epsilon}}\) (we omit the sub-index \(t\) as before): \[\mathbf{\hat{\epsilon}}:=\epsilon_{\phi}(\mathbf{z}_{t};\mathbf{y};t). \tag{4}\] Subsequently, the SDS gradient in the latent space of SD [22] is written as \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,\mathbf{z})= \mathbb{E}_{t,\mathbf{\epsilon}}[\omega(t)(\mathbf{\hat{\epsilon}}-\mathbf{ \epsilon})\frac{\partial\mathbf{z}}{\partial\theta}]. \tag{5}\] To our knowledge, the concrete formulation of this loss \(\mathcal{L}_{\text{SDS}}\) remains unknown.1 Footnote 1: This brings challenges for loss visualization and analysis, and back-propagation when using deep learning frameworks like PyTorch. In contrast, we can formulate a concrete definition for \(\mathcal{L}_{\text{SDS}}\) by changing noise residuals, \((\mathbf{\hat{\epsilon}}-\mathbf{\epsilon})\), to latent vector residuals. Specifically, we have \[\mathcal{L}_{\text{SDS}}(\phi,\mathbf{z})=\mathbb{E}_{t,\mathbf{\epsilon}}[\omega(t) \frac{\sqrt{\alpha_{t}}}{2\sqrt{1-\bar{\alpha}_{t}}}\|\mathbf{z}-\mathbf{\hat{z}}_{ \text{1step}}\|^{2}],\text{where} \tag{6}\] \[\mathbf{\hat{z}}_{\text{1step}}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{z}_{t}- \sqrt{1-\bar{\alpha}_{t}}\epsilon_{\phi}(\mathbf{z}_{t};\mathbf{y},t)). \tag{7}\] \(\mathbf{\hat{z}}_{\text{1step}}\) is the denoised \(\mathbf{z}\) estimate, obtained by calling upon the denoising autoencoder \(\epsilon_{\phi}\) once and \((\mathbf{\hat{z}}_{\text{1step}}-\mathbf{z})\) is the latent vector residual. Note that \(\mathbf{\hat{z}}_{\text{1step}}\) are from the distribution generated by \(\mathbf{z}\), \(t\) and \(\mathbf{\epsilon}\). A concrete formulation of the loss \(\mathcal{L}_{\text{SDS}}\), as presented in Eq. 6, can facilitate loss analysis and gradient computation in practical scenarios. Consequently, the gradient of \(\mathcal{L}_{\text{SDS}}(\phi,\mathbf{z})\) in Eq. 6 can be written as \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,\mathbf{z})=\mathbb{E}_{t,\mathbf{ \epsilon}}[\omega(t)\frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1-\bar{\alpha}_{t}}}( \mathbf{z}-\mathbf{\hat{z}}_{\text{1step}})\frac{\partial\mathbf{z}}{\partial\theta}] \tag{8}\] _Proof: we prove that the two SDS gradients in Eq. 5 and Eq. 8 are equivalent._ \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,\mathbf{z})\] \[= \omega(t)\frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1-\bar{\alpha}_{t} }}(\mathbf{z}-\mathbf{\hat{z}}_{\text{1step}})\frac{\partial\mathbf{z}}{\partial\theta}\] \[= \omega(t)\frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1-\bar{\alpha}_{t} }}(\mathbf{z}-\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{z}_{t}-\sqrt{1-\bar{\alpha}_ {t}}\epsilon_{\phi}(\mathbf{z}_{t};\mathbf{y},t)))\frac{\partial\mathbf{z}}{\partial\theta}\] \[= \omega(t)\frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1-\bar{\alpha}_{t} }}(\frac{\sqrt{1-\bar{\alpha}_{t}}}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{\hat{ \epsilon}}-\mathbf{\epsilon}))\frac{\partial z}{\partial\theta}\] \[= \omega(t)(\mathbf{\hat{\epsilon}}-\mathbf{\epsilon})\frac{\partial\mathbf{z} }{\partial\theta}\] (Eq. 
5 ) Therefore, we introduce a specific formulation for the loss \(\mathcal{L}_{\text{SDS}}\), which yields identical gradients as in Eq. 5, while also benefiting from the advantages associated with having a concrete formulation. Moreover, to intuitively understand Eq. (6-8), we consider the \(\hat{z}_{\text{1step}}\) as a constant that is independent of \(\mathbf{z}\); in practice, detach it from the computation graph. In this context, \(\mathbf{\hat{z}}_{\text{1step}}\) is a _bootstrapped ground truth_ as it is the one-step estimate of \(\mathbf{z}\) obtained using the denoising autoencoder \(\epsilon_{\phi}\). Next, we analyze training challenges of prior work [16, 29] and propose two approaches to address those. **Training challenges of prior work** include an out-of-distribution (OOD) issue of the diffusion input and a divergence issue of the diffusion output, due to random time step sampling in prior works [16, 29]. Firstly, during the initial training iterations, rendered images (and latent vectors) contain artifacts, which lie outside the original distribution of diffusion input. Particularly, when the randomly sampled time step \(t\) is small, the step size of denoising is relatively small, as well. As a result, the recovered latent vector \(\mathbf{\hat{z}}\), close to the OOD input, also falls outside the original distribution of latent vectors. Secondly, another significant issue is the divergence of predicted images produced by the diffusion model. Towards the end of the training iterations, a well-trained NeRF renders images of an _almost determined_ 3D asset. However, when a large sampled time step \(t\) is used, the denoised image predicted by the diffusion model can appear _distinct_ and _unrelated_ to the given input rendering. It leads to the generation of inaccurate gradients from the diffusion model. Consequently, these inaccurate gradients are detrimental to the optimization of the 3D model. To address these issues, we introduce two approaches: _(i)_ instead of using \(\mathbf{\hat{z}}_{\text{1step}}\) for SDS, we employ a more accurate estimation for the latent vector \(\mathbf{z}\), denoted as \(\mathbf{\hat{z}}\), obtained through iterative denoising, and _(ii)_ we gradually decrease the intensity of the added noise during training by annealing the step \(t\). In the following, we provide details for each. **Iterative estimation of \(\mathbf{\hat{z}}\)**. We introduce an enhanced formulation of the SDS loss (Eq. 6) by replacing \(\mathbf{\hat{z}}_{\text{1step}}\) with \(\mathbf{\hat{z}}\), where the latent vector \(\mathbf{\hat{z}}\) is better estimation of the original \(\mathbf{z}\) via iterative denoising. This adapted loss of Eq. 6 is denoted as \(\mathcal{L}_{\text{BGT}}\), formally written as \[\mathcal{L}_{\text{BGT}}(\phi,\mathbf{z},\mathbf{\hat{z}})=\mathbb{E}_{t,\mathbf{\epsilon}} \|\mathbf{z}-\mathbf{\hat{z}}\|^{2}. \tag{9}\] Iterative estimation of \(\mathbf{\hat{z}}\) refers to gradually denoise the noisy latent vector \(\mathbf{z}_{t}\) until \(t=0\). Specifically, we initiate the iterative estimation process with a step ratio \(r\) by obtaining \(\mathbf{\hat{z}}_{t-r}\), \(\mathbf{\hat{z}}_{t-2r},\cdots\), iteratively until reaching \(\mathbf{\hat{z}}_{0}\), which we use as the ground truth, i.e., \(\mathbf{\hat{z}}\coloneqq\mathbf{\hat{z}}_{0}\). To accelerate this reverse diffusion process, many sampling approaches, such as DDIM [26], can be adapted here. 
We take DDIM [26] as an example, we iteratively obtain a denoised latent vector \(\mathbf{\hat{z}}_{i-r}\) from \(\mathbf{\hat{z}}_{i}\): \[\begin{split}\mathbf{\hat{z}}_{i-r}&=\sqrt{\bar{\alpha}_{ i-r}}(\frac{\mathbf{\hat{z}}_{i}-\sqrt{1-\bar{\alpha}_{i}}\epsilon_{\phi}(\mathbf{\hat{z}}_{i}; \mathbf{y},i)}{\sqrt{\bar{\alpha}_{i}}})\\ &+\sqrt{1-\bar{\alpha}_{i-r}-\sigma_{i}^{2}}\cdot\epsilon_{\phi}( \mathbf{\hat{z}}_{i};\mathbf{y},i)+\sigma_{i}\mathbf{\epsilon},\end{split} \tag{10}\] where \(\alpha_{i}\) is a variance schedule function, \(\sigma_{i}=\eta\sqrt{\frac{1-\alpha_{i-r}}{1-\alpha_{i}}}\sqrt{\frac{1-\alpha _{i}}{\alpha_{i-r}}}\), and \(\eta\) is a stochasticity hyperparameter. After obtaining the estimated latent vector \(\mathbf{\hat{z}}\), a further adapted loss can be naturally obtained, by incorporating additional supervision for recovered images. Formally, we define the adapted loss \(\mathcal{L}_{\text{BGT+}}\) as \[\mathcal{L}_{\text{BGT+}}(\phi,\mathbf{z},\mathbf{\hat{z}})=\mathbb{E}_{t,\mathbf{ \epsilon}}[\|\mathbf{z}-\mathbf{\hat{z}}\|^{2}+\lambda_{\text{rgb}}\|\mathbf{x}-\mathbf{\hat {x}}\|^{2}], \tag{11}\] where \(\mathbf{\hat{x}}\) is an recovered image obtained through the decoder \(\mathcal{D}\), formally written as \(\mathbf{\hat{x}}=\mathcal{D}(\mathbf{\hat{z}})\). \(\lambda_{\text{rgb}}\) is a scaling parameter. We mention that when utilizing a pre-trained latent diffusion model, the latent vectors are typically at a low resolution. In contrast, the images \(\mathbf{x}\) and \(\mathbf{\hat{x}}\) are at a higher resolution (\(64\times 64\) and \(512\times 512\), respectively in the case of using StableDiffusion [22]). We ablate these proposed loss terms in Sec. 5. Note that computing the higher-resolution RGB loss is not possible under prior work's formulation of the gradient \(\nabla_{\theta}\mathcal{L}_{\text{SDS}}\) (Eq. 8), since the denoising autoencoder \(\epsilon_{\phi}\) is restricted to latent space only. **Annealed time step scheduling** is an alternative to replace random time step sampling in the training process, resulting in enhanced performance. Specifically, we introduce a high time step \(t\) to the rendered image during the _initial_ training iterations. This intentional noise allows the image to align with the distribution modeled by the diffusion prior. As the training progresses, we gradually reduce the time step \(t\), enabling the rendered image to capture finer details by leveraging more stable gradients with lower variance. In practice, we pick the schedule \[t=t_{\max}-(t_{\max}-t_{\min})\sqrt{\frac{\text{iter}}{\text{total\_iter}}}, \tag{12}\] where \(t\) decreases rapidly at the beginning of the training process and slows down toward the end. This scheduling strategy allocates more training steps to lower values of \(t\), ensuring that fine-grained details can be adequately learned during the latter stages of training. Empirically, we choose \(t_{\max}=0.98\) and \(t_{\min}=0.2\). We find annealing the timestep to be critical. ### Advanced supervision for NeRFs A NeRF renders a color \(\hat{C}_{r}\), denoted as \(\hat{C}_{r}=\sum_{i=1}^{N}w_{i}c_{i}\), where \(w_{i}\) and \(c_{i}\) are respecitvely estimated weights and colors of the sampled coordinate \(z_{i}\) along a ray \(r\) (details explained in [11]). \(N\) refers to the number of sampled points along the ray \(r\). 
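As a small illustration of the annealed time step scheduling described above (Eq. 12), a possible implementation is sketched below; `t_max` and `t_min` follow the values quoted in the text, while the mapping to an integer diffusion step assumes a 1000-step noise schedule, which is an assumption rather than a detail stated here.

```python
import math

def annealed_timestep(iteration, total_iter, t_max=0.98, t_min=0.2,
                      num_train_timesteps=1000):
    """Eq. (12): t decreases quickly at the start of training and slowly toward
    the end, so that later iterations use small noise levels and refine details."""
    frac = math.sqrt(iteration / total_iter)
    t = t_max - (t_max - t_min) * frac           # continuous t in [t_min, t_max]
    return max(1, int(t * num_train_timesteps))  # discrete step (assumed scale)
```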
Subsequently, we compute the depth value \(\mu_{z_{r}}\) and the disparity value \(d_{z_{r}}\) of the ray \(r\) as \[\mu_{z_{r}}=\sum_{i}z_{i}\frac{w_{i}}{\sum_{i}w_{i}},\text{ and }d_{z_{r}}=\frac{1}{\mu_{z_{r}}}, \tag{13}\] where \(\frac{w_{i}}{\sum_{i}w_{i}}\) can be considered as a probability mass function. **Depth prior supervision** is adopted into NeRFs to ensure multi-view consistency. NeRFs are known to be able to render realistic images with implausible underlying geometry [14]. Since our 2D guidance does not have spatial consistency built into it, we propose to use pre-trained depth prediction models as additional guidance to alleviate this issue. To simplify the notation, we denote the rendered disparity map as \(\mathbf{d}\). We apply a regularization loss \(\mathcal{L}_{d}\) on the disparity map \(\mathbf{d}\), as shown in Fig. 8. Specifically, we employ a pre-trained depth predictor [20] to estimate the _pseudo ground truth_ of the disparity map, \(\mathbf{d}^{*}\), for the rendered image \(\mathbf{x}\) and compute the depth loss \(\mathcal{L}_{d}\). Formally, we have \[\mathbf{d}^{\prime}=\frac{\langle\mathbf{d},\mathbf{d}^{*}\rangle\mathbf{d}}{\|\mathbf{d}\|^{2}},\text{ and }\mathcal{L}_{d}=\mathbb{E}_{\mathbf{d}}\|\mathbf{d}^{\prime}-\mathbf{d}^{*}\|^{2}. \tag{14}\]

Figure 8: Illustration of the depth loss \(\mathcal{L}_{d}\).

Note that we normalize the disparity maps \(\mathbf{d}\) and \(\mathbf{d}^{*}\). **Regularization for z-variance** aims to reduce the variance of the distribution of the sampled z-coordinates \(z_{i}\) along the ray \(r\). A smaller variance indicates crisper surfaces in geometry. To this end, we compute the z-variance along the ray \(r\), denoted as \(\sigma_{z_{r}}^{2}\): \[\sigma_{z_{r}}^{2}=\mathbb{E}_{z_{i}}[(z_{i}-\mu_{z_{r}})^{2}]=\sum_{i}(z_{i}-\mu_{z_{r}})^{2}\frac{w_{i}}{\sum_{i}w_{i}}. \tag{15}\] The regularization loss \(\mathcal{L}_{\text{zvar}}\) for the variance \(\sigma_{z_{r}}^{2}\) is defined as \[\mathcal{L}_{\text{zvar}}=\mathbb{E}_{r}\delta_{r}\sigma_{z_{r}}^{2}, \tag{16}\] \[\delta_{r}=1\text{ if }\sum_{i}w_{i}>0.5,\text{ else }0.\] Here, \(\delta_{r}\) serves as an indicator function (or binary weight) for filtering out background rays. We find this loss to be immensely useful in ensuring the geometrical consistency of our 3D model. Removing the loss \(\mathcal{L}_{\text{zvar}}\), as illustrated in Fig. 10, results in cloudy and less accurate geometry. We define our total loss function as \[\mathcal{L}=\mathcal{L}_{\text{BGT+}}+\lambda_{d}\mathcal{L}_{d}+\lambda_{\text{zvar}}\mathcal{L}_{\text{zvar}}, \tag{17}\] where \(\lambda_{d}\) and \(\lambda_{\text{zvar}}\) are loss weights. We present our training procedure in Algorithm 1. ## 5 Experiments We present our implementation details in Sec. 5.1 and evaluate our method to generate 3D assets from text prompts in Sec. 5.2. Specifically, we compare our method with state-of-the-art text-to-3D synthesis methods including Dreamfusion [16], Magic3D [8], and Score-Jacobian-Chaining [29]. Moreover, we conduct extensive ablation studies to verify the effectiveness of each proposed technique. Finally, we analyze the limitations of our method. ### Implementation details **Model setup.** Our approach is implemented based on a publicly available repository 2. In this implementation, a NeRF is parameterized by a multi-layer perceptron (MLP), with instant-ngp [12] for positional encoding. 
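As a concrete illustration of the per-ray depth and z-variance terms defined in Eqs. (13)-(16) above, a minimal PyTorch-style sketch is given below; `weights`, `z_vals`, and `disp_pred` are assumed inputs from a NeRF renderer and an off-the-shelf disparity predictor, and all names are illustrative rather than taken from the released code.

```python
import torch

def ray_depth_and_zvar(weights, z_vals, eps=1e-8):
    """Eqs. (13) and (15): per-ray expected depth and z-variance, using the
    normalized weights as a probability mass function along each ray."""
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)
    mu = (p * z_vals).sum(dim=-1)                              # depth, Eq. (13)
    var = (p * (z_vals - mu.unsqueeze(-1)) ** 2).sum(dim=-1)   # Eq. (15)
    return mu, var

def zvar_loss(weights, z_vals):
    """Eq. (16): penalize z-variance only on (approximately) foreground rays."""
    _, var = ray_depth_and_zvar(weights, z_vals)
    foreground = (weights.sum(dim=-1) > 0.5).float()           # indicator delta_r
    return (foreground * var).mean()

def depth_loss(disp_render, disp_pred):
    """Eq. (14): align the rendered disparity to the predicted pseudo ground
    truth by a least-squares scale factor, then take a mean squared error.
    (The text also normalizes both disparity maps beforehand.)"""
    d, d_star = disp_render.flatten(), disp_pred.flatten()
    d_aligned = (d * d_star).sum() / (d * d).sum().clamp(min=1e-8) * d
    return ((d_aligned - d_star) ** 2).mean()
```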
To enhance photo-realism and enable flexible modeling of lighting, we discard the Lambertian shading as employed in [16]. Instead, we encode the ray direction using spherical harmonics and utilize it as an input to NeRF. Additionally, we incorporate a background network that predicts background color solely based on the ray direction. For our model, we employ a pre-trained SD model 3 that takes input images at \(512\times 512\) resolution. To predict the disparity map, we employ a pre-trained dense prediction transformer 4. Footnote 2: [https://github.com/shawkey/stable-dreamfusion/tree/main](https://github.com/shawkey/stable-dreamfusion/tree/main). Footnote 3: We use the public code base [https://github.com/huggingface/diffusers](https://github.com/huggingface/diffusers). Footnote 4: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers). **Training setup.** We use Adam [7] with a learning rate of \(10^{-2}\) for instant-ngp encoding, and \(10^{-3}\) for NeRF weights. In practice, we choose total_iter as \(10^{4}\) iterations. The rendering resolution is \(512\times 512\). To accelerate training, we employ DDIM [26] with empirically chosen parameters \(r=0.25,\text{and}\,\eta=1\). We choose the hyper-parameters \(\lambda_{\text{rgb}}=0.1,\lambda_{d}=0.1,\lambda_{\text{zvar}}=3\). Similar to prior work [8, 16, 29], we use classifier-free guidance [4] of \(100\) for our diffusion model. ### Experimental results **Qualitative rendered results** of the 3D assets synthesized by our approach are depicted in Fig. 1. Our approach excels at generating high-fidelity 3D assets, producing photo-realistic and visually consistent images from multiple viewpoints. **Qualitative comparisons to state-of-the-art methods** are shown in Fig. 6 and Fig. 7. Specifically, as shown in Fig. 6, our rendered images exhibit enhanced photo-realism, showcasing improved texture details of the 3D assets and more plausible lighting effects. Similarly, when comparing to Score-Jacobian-Chaining [29] in Fig. 7, we observe a clear superiority in terms of vividness and the overall quality of image content. **Ablation study** is conducted to validate the effectiveness of the proposed techniques, as illustrated in Fig. 9- 10. Through these studies, we observe that the removal of each proposed technique can result in varying degrees of artifacts in RGB colors or 3D geometry. Moreover, in Fig. 10 we compare the rendered depth maps for images with either the full loss or with the loss \(\mathcal{L}_{\text{zvar}}\) removed. Our observation reveals that the removal of the loss \(\mathcal{L}_{\text{zvar}}\) results in both inaccurate geometry and a distorted rendered image. **Limitations**. While our method successfully synthesizes plausible 3D assets, we have identified several limitations, as illustrated in Fig. 11, which we intend to address in future work. Firstly, we have observed instances where our model struggles to comprehend certain text prompts, as seen in the failures to synthesize "lox" and "peeling" depicted in Fig. 11. This limitation can be attributed to the constrained capacity of a pre-trained CLIP text encoder used in an SD model [22] for conditioning representation. We anticipate that employing a more advanced language model will alleviate this issue. Secondly, our approach yields unsatisfactory synthetic results in certain cases, such as the example of Zelda Link shown in Fig. 11 with 2 ears on one side and inaccurate facial reconstruction. 
To address this, it may be beneficial to enhance the capabilities of the 2D diffusion prior. ## 6 Conclusion We propose a novel approach for text-to-3D synthesis. Our method leverages several key enhancements, including improved guidance from the diffusion prior, additional supervision on latent vectors and rendered images of the diffusion model, as well as depth and volume density of NeRFs. Extensive experiments highlight that our method significantly enhances the performance w.r.t. photo-realism and multi-view consistency, compared to state-of-the-art methods.
2306.01767
On Schur's irreducibility results and generalised $φ$-Hermite polynomials
Let $c$ be a fixed integer such that $c \in \{0,2\}.$ Let $n$ be a positive integer such that either $n\geq 2$ or $2n+1 \neq 3^u$ for any integer $u\geq 2$ according as $c = 0$ or not. Let $\phi(x)$ belonging to $\mathbb{Z}[x]$ be a monic polynomial which is irreducible modulo all primes less than $2n+c$. Let $a_i(x)$ with $0\leq i\leq n-1$ belonging to $\mathbb{Z}[x]$ be polynomials having degree less than $\deg\phi(x)$. Let $a_n \in \mathbb{Z}$ and the content of $(a_na_0(x))$ is not divisible by any prime less than $2n+c$. For a positive integer $j$, if $u_j$ denotes the product of the odd numbers $\leq j$, then we show that the polynomial $\frac{a_{n}}{u_{2n+c}}\phi(x)^{2n}+\sum\limits_{j=0}^{n-1}a_j(x)\frac{\phi(x)^{2j}}{u_{2j+c}}$ is irreducible over the field $\mathbb{Q}$ of rational numbers. This generalises a well-known result of Schur which states that the polynomial $\sum\limits_{j=0}^{n}a_j\frac{x^{2j}}{u_{2j+c}}$ with $a_j \in \mathbb{Z}$ and $|a_0| = |a_n| = 1$ is irreducible over $\mathbb{Q}$. We illustrate our result through examples.
Anuj Jakhar
2023-05-28T08:08:14Z
http://arxiv.org/abs/2306.01767v1
# On Schur's irreducibility results and generalised \(\phi\)-Hermite polynomials ###### Abstract. Let \(c\) be a fixed integer such that \(c\in\{0,2\}.\) Let \(n\) be a positive integer such that either \(n\geq 2\) or \(2n+1\neq 3^{u}\) for any integer \(u\geq 2\) according as \(c=0\) or not. Let \(\phi(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo all primes less than \(2n+c\). Let \(a_{i}(x)\) with \(0\leq i\leq n-1\) belonging to \(\mathbf{Z}[x]\) be polynomials having degree less than \(\deg\phi(x)\). Let \(a_{n}\in\mathbf{Z}\) and the content of \((a_{n}a_{0}(x))\) is not divisible by any prime less than \(2n+c\). For a positive integer \(j\), if \(u_{j}\) denotes the product of the odd numbers \(\leq j\), then we show that the polynomial \(\frac{a_{n}}{a_{2n+c}}\phi(x)^{2n}+\sum\limits_{j=0}^{n-1}a_{j}(x)\frac{\phi(x )^{2j}}{u_{2j+c}}\) is irreducible over the field \(\mathbf{Q}\) of rational numbers. This generalises a well-known result of Schur which states that the polynomial \(\sum\limits_{j=0}^{n}a_{j}\frac{x^{2j}}{u_{2j+c}}\) with \(a_{j}\in\mathbf{Z}\) and \(|a_{0}|=|a_{n}|=1\) is irreducible over \(\mathbf{Q}\). We illustrate our result through examples. Key words and phrases:Irreducibility, Polynomials, Newton Polygons 2010 Mathematics Subject Classification: 11C08, 11R04 ## 1. Introduction and statements of results For each non-negative integer \(j\), we define \(u_{j}\) as the product of the odd numbers \(\leq j\). In particular, we have \(u_{0}=u_{2}=1\), \(u_{4}=3,\ u_{6}=15,\ \cdots.\) In 1929, Schur [9] proved the following two results. **Theorem 1.1**.: Let \(a_{j}\)'s for \(0\leq j\leq n\), be integers and \(|a_{0}|=|a_{n}|=1\). Then the polynomial \(\sum\limits_{j=0}^{n}a_{j}\frac{x^{2j}}{u_{2j}}\) is irreducible over \(\mathbf{Q}\). **Theorem 1.2**.: Let \(n\) be a positive integer such that \(2n+1\neq 3^{u}\) for any integer \(u\geq 2\). Let \(a_{j}\)'s for \(0\leq j\leq n\), be integers and \(|a_{0}|=|a_{n}|=1\). Then the polynomial \(\sum\limits_{j=0}^{n}a_{j}\frac{x^{2j}}{u_{2j+2}}\) is irreducible over \(\mathbf{Q}\). As a consequence of the above theorems, he also proved that the \(m^{th}\) classical Hermite polynomial, given by \[H_{m}(x)=\sum\limits_{j=0}^{[m/2]}(-1)^{j}\binom{m}{2j}u_{2j}x^{m-2j},\] is irreducible if \(m\) is even and is \(x\) times an irreducible polynomial if \(m\) is odd. In the present paper, we extend Theorems 1.1, 1.2. More precisely, we prove the following results. **Theorem 1.3**.: Let \(n\geq 2\) and \(a_{n}\) be integers. Let \(\phi(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo all primes less than \(2n\). Suppose that \(a_{0}(x),\cdots,a_{n-1}(x)\) belonging to \(\mathbf{Z}[x]\) satisfy the following conditions. 1. \(\deg a_{i}(x)<\deg\phi(x)\) for \(0\leq i\leq n-1\), 2. the content of \((a_{n}a_{0}(x))\) is not divisible by any prime less than \(2n\). Then the polynomial \[f_{1}(x)=\frac{a_{n}}{u_{2n}}\phi(x)^{2n}+\sum_{j=0}^{n-1}a_{j}(x)\frac{\phi(x )^{2j}}{u_{2j}}\] is irreducible over \(\mathbf{Q}\). We wish to point out here that the analogues of the above theorem does not hold for \(n=1\) because if \(\phi(x)\in\mathbf{Z}[x]\) is a monic polynomial of degree \(m\geq 3\), then the polynomial \(\phi(x)^{2}-x^{2}=(\phi(x)+x)(\phi(x)-x)\) is reducible over \(\mathbf{Q}\). It may be pointed out that in Theorem 1.3 the assumption "the content of \(a_{0}(x)\) is not divisible by any prime less than \(2n\)" cannot be dispensed with. 
For example, consider the polynomial \(\phi(x)=x^{2}-x+5\) which is irreducible modulo \(2\) and \(3\). Then the polynomial \(f(x)=\frac{\phi(x)^{4}}{3}-3=\frac{1}{3}(\phi(x)^{2}+3)(\phi(x)^{2}-3)\) is reducible over \(\mathbf{Q}\). We also give below an example to show that Theorem 1.3 may not hold if \(a_{n}\) is replaced by a (monic) polynomial \(a_{n}(x)\) with integer coefficient having degree less than \(\deg\phi(x)\). Consider \(\phi(x)=x^{2}-x+5\) which is irreducible modulo \(2\) and \(3\). Take \(a_{2}(x)=x-3,a_{1}(x)=x+26\) and \(a_{0}(x)=5(x-5)\). Then the polynomial \[a_{2}(x)\frac{\phi(x)^{4}}{3}+a_{1}(x)\phi(x)^{2}+a_{0}(x)\] has \(0\) as a root. **Theorem 1.4**.: Let \(n\) be a positive integer such that \(2n+1\neq 3^{u}\) for any integer \(u\geq 2\). Let \(\phi(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo all primes less than \(2n+2\). Suppose that \(a_{n},a_{0}(x),\cdots,a_{n-1}(x)\) belonging to \(\mathbf{Z}[x]\) satisfy the following conditions. 1. \(\deg a_{i}(x)<\deg\phi(x)\) for \(0\leq i\leq n-1\), 2. the content of \((a_{n}a_{0}(x))\) is not divisible by any prime less than \(2n+2\). Then the polynomial \[f_{2}(x)=\frac{a_{n}}{u_{2n+2}}\phi(x)^{2n}+\sum_{j=0}^{n-1}a_{j}(x)\frac{\phi(x) ^{2j}}{u_{2j+2}}\] is irreducible over \(\mathbf{Q}\). It may be pointed out that in Theorem 1.4 the assumption "the content of \(a_{0}(x)\) is not divisible by any prime less than \(2n+2\)" cannot be dispensed with. For example, consider the polynomial \(\phi(x)=x^{2}-x+11\) which is irreducible modulo \(2,3\) and \(5\). Then the polynomial \(f(x)=\frac{\phi(x)^{4}}{15}+6\frac{\phi(x)^{2}}{3}+15=\frac{1}{15}(\phi(x)^{2} +15)^{2}\) is reducible over \(\mathbf{Q}\). We also give below an example to show that Theorem 1.4 may not hold if \(a_{n}\) is replaced by a (monic) polynomial \(a_{n}(x)\) with integer coefficient having degree less than \(\deg\phi(x)\). Consider \(\phi(x)=x^{2}-x+11\) which is irreducible modulo \(2,3\) and \(5\). Take \(a_{2}(x)=x-15,a_{1}(x)=x+366\) and \(a_{0}(x)=x-121\). Then the polynomial \[a_{2}(x)\frac{\phi(x)^{4}}{15}+a_{1}(x)\frac{\phi(x)^{2}}{3}+a_{0}(x)\] has \(0\) as a root. As an application of Theorems 1.3, 1.4, we shall prove the following theorem. **Theorem 1.5**.: Let \(m\geq 3\) be an integer. Let \(\phi(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo all primes \(\leq m\). Suppose that \(a_{[\frac{m}{2}]},a_{0}(x),\cdots,a_{[\frac{m}{2}]-1}(x)\) belonging to \(\mathbf{Z}[x]\) satisfy the following conditions. 1. \(\deg a_{i}(x)<\deg\phi(x)\) for \(0\leq i\leq[m/2]-1\), 2. the content of \((a_{[\frac{m}{2}]}a_{0}(x))\) is not divisible by any prime \(\leq m\). Then the polynomial \[H_{m}^{\phi}(x)=a_{[\frac{m}{2}]}\phi(x)^{m}+\sum_{j=1}^{[m/2]}{m\choose 2j}u_{ 2j}a_{[\frac{m}{2}]-j}(x)\phi(x)^{m-2j}\] is irreducible over \(\mathbf{Q}\) if \(m\) is even and is \(\phi(x)\) times an irreducible polynomial if \(m\) is odd with \(m\neq 3^{u}\) for any integer \(u\geq 2\). We wish to point out here that in the above theorem, if we take \(\phi(x)=x\) and \(a_{[\frac{m}{2}]}=(-1)^{[\frac{m}{2}]},\ a_{i}(x)=(-1)^{i}\) for \(0\leq i\leq[m/2]-1\), then \(H_{m}^{\phi}(x)\) becomes \(m\)-th classical Hermite polynomial. The following corollary is an immediate consequence of the above theorem. **Corollary 1.6**.: Let \(m\geq 3\) be an integer. Let \(\phi(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo all primes less than \(m\). 
Then the polynomial \[H_{m}(\phi(x))=\sum_{j=0}^{[m/2]}(-1)^{j}\binom{m}{2j}u_{2j}\phi(x)^{m-2j}\] is irreducible over \(\mathbf{Q}\) if \(m\) is even and is \(\phi(x)\) times an irreducible polynomial if \(m\) is odd with \(m\neq 3^{u}\) for any integer \(u\geq 2\). We now provide some examples of Theorems 1.3, 1.4. **Example 1.7**.: Consider \(\phi(x)=x^{3}-x+37\). It can be easily checked that \(\phi(x)\) is irreducible modulo \(2,3,5\) and \(7\). Let \(j\geq 2\) and \(a_{j}\) be integers. Let \(a_{i}(x)\in\mathbf{Z}[x]\) be polynomials each having degree less than \(3\) for \(0\leq i\leq j-1\). Assume that the content of \((a_{j}a_{0}(x))\) is not divisible by any prime less than \(2j\). Then by Theorem 1.3, the polynomial \[f_{j}(x)=\frac{a_{j}}{u_{2j}}\phi(x)^{2j}+\sum_{i=0}^{j-1}a_{i}(x)\frac{\phi( x)^{2i}}{u_{2i}}\] is irreducible over \(\mathbf{Q}\) for \(j\in\{2,3,4,5\}\). **Example 1.8**.: Consider \(\phi(x)=x^{2}-x+17\). It can be easily checked that \(\phi(x)\) is irreducible modulo \(2,3,5\) and \(7\). Let \(j\geq 2\) and \(a_{j}\) be integers. Let \(a_{i}(x)\in\mathbf{Z}[x]\) be polynomials each having degree less than \(2\) for \(0\leq i\leq j-1\). Assume that the content of \((a_{j}a_{0}(x))\) is not divisible by any prime less than \(2j+2\). Then by Theorem 1.4, the polynomial \[g_{j}(x)=\frac{a_{j}}{u_{2j+2}}\phi(x)^{2j}+\sum_{i=0}^{j-1}a_{i}(x)\frac{\phi (x)^{2i}}{u_{2i+2}}\] is irreducible over \(\mathbf{Q}\) for \(j\in\{2,3,4\}\). It may be pointed out that, in the above examples, the irreducibility of the polynomials \(f_{5}(x)\) and \(g_{4}(x)\) do not seem to follow from any known irreducibility criterion (cf. [1], [2], [3], [4], [5], [6] and [7]). ## 2. Preliminary results. We first introduce the notion of Gauss valuation and \(\phi\)-Newton polygon. For a prime \(p\), \(v_{p}\) will denote the \(p\)-adic valuation of \(\mathbf{Q}\) defined for any non-zero integer \(b\) to be the highest power of \(p\) dividing \(b\). We shall denote by \(v_{p}^{x}\) the Gaussian valuation extending \(v_{p}\) defined on the polynomial ring \(\mathbf{Z}[x]\) by \[v_{p}^{x}(\sum_{i}b_{i}x^{i})=\min_{i}\{v_{p}(b_{i})\},b_{i}\in\mathbf{Z}.\] **Definition 2.1**.: Let \(p\) be a prime number and \(\phi(x)\in\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo \(p\). Let \(f(x)\) belonging to \(\mathbf{Z}[x]\) be a polynomial having \(\phi\)-expansion1\(\sum\limits_{i=0}^{n}b_{i}(x)\phi(x)^{i}\) with \(b_{0}(x)b_{n}(x)\neq 0\). Let \(P_{i}\) stand for the point in the plane having coordinates \((i,v_{p}^{x}(b_{n-i}(x)))\) when \(b_{n-i}(x)\neq 0\), \(0\leq i\leq n\). Let \(\mu_{ij}\) denote the slope of the line joining the points \(P_{i}\) and \(P_{j}\) if \(b_{n-i}(x)b_{n-j}(x)\neq 0\). Let \(i_{1}\) be the largest index \(0<i_{1}\leq n\) such that Footnote 1: If \(\phi(x)\) is a fixed monic polynomial with coefficients in \(\mathbf{Z}\), then any \(f(x)\in\mathbf{Z}[x]\) can be uniquely written as a finite sum \(\sum\limits_{i}b_{i}(x)\phi(x)^{i}\) with \(\deg b_{i}(x)<\deg\phi(x)\) for each \(i\); this expansion will be referred to as the \(\phi\)-expansion of \(f(x)\). \[\mu_{0i_{1}}=\min\{\mu_{0j}\ |\ 0<j\leq n,\ b_{n-j}(x)\neq 0\}.\] If \(i_{1}<n\), let \(i_{2}\) be the largest index \(i_{1}<i_{2}\leq n\) such that \[\mu_{i_{1}i_{2}}=\min\{\mu_{i_{1}j}\ |\ i_{1}<j\leq n,\ b_{n-j}(x)\neq 0\}\] and so on. 
The \(\phi\)-Newton polygon of \(f(x)\) with respect to \(p\) is the polygonal path having segments \(P_{0}P_{i_{1}},P_{i_{1}}P_{i_{2}},\ldots,P_{i_{k-1}}P_{i_{k}}\) with \(i_{k}=n\). These segments are called the edges of the \(\phi\)-Newton polygon of \(f(x)\) and their slopes from left to right form a strictly increasing sequence. The \(\phi\)-Newton polygon minus the horizontal part (if any) is called its principal part. We shall use the following lemma [8, Theorem 1.3] in the sequel. We omit its proof. **Lemma 2.2**.: Let \(n,k\) and \(\ell\) be integers with \(0\leq\ell<k\leq\frac{n}{2}\) and \(p\) be a prime. Let \(\phi(x)\in\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo \(p\). Let \(f(x)\) belonging to \(\mathbf{Z}[x]\) be a monic polynomial not divisible by \(\phi(x)\) having \(\phi\)-expansion \(\sum\limits_{i=0}^{n}f_{i}(x)\phi(x)^{i}\) with \(f_{n}(x)\neq 0\). Assume that \(v_{p}^{x}(f_{i}(x))>0\) for \(0\leq i\leq n-\ell-1\) and the right-most edge of the \(\phi\)-Newton polygon of \(f(x)\) with respect to \(p\) has slope less than \(\frac{1}{k}\). Let \(a_{0}(x),a_{1}(x),\ldots,a_{n}(x)\) be polynomials over \(\mathbf{Z}\) satisfying the following conditions. 1. \(\deg a_{i}(x)<\deg\phi(x)-\deg f_{i}(x)\) for \(0\leq i\leq n\), 2. \(v_{p}^{x}(a_{0}(x))=0\), i.e., the content of \(a_{0}(x)\) is not divisible by \(p\), 3. the leading coefficient of \(a_{n}(x)\) is not divisible by \(p\). Then the polynomial \(\sum\limits_{i=0}^{n}a_{i}(x)f_{i}(x)\phi(x)^{i}\) does not have a factor in \(\mathbf{Z}[x]\) with degree lying in the interval \([(\ell+1)\deg\phi(x),(k+1)\deg\phi(x))\). We now prove the following elementary result will be used in the proof of Theorems 1.3, 1.4. **Lemma 2.3**.: Let \(n\geq 1\) be an integer and \(p\) be a prime number. Let \(\phi(x)\in\mathbf{Z}[x]\) be a monic polynomial which is irreducible modulo \(p\). Suppose that \(a_{0}(x),a_{1}(x),\cdots,a_{n-1}(x)\) belonging to \(\mathbf{Z}[x]\) are polynomials each having degree less than \(\deg\phi(x)\). Let \(a_{n}\) be an integer with \(p\nmid a_{n}\). Let \(b_{0},b_{1},\cdots,b_{n-1}\) be integers such that \(p|b_{j}\) for each \(j\), \(0\leq j\leq n-1\). Then the polynomial \[F(x)=a_{n}\phi(x)^{2n}+\sum_{j=0}^{n-1}b_{j}a_{j}(x)\phi(x)^{2j}\] can not have any non-constant factor having degree less than \(\deg\phi(x)\). Proof.: Let \(c\) denote the content of \(F(x)\). As \(p\nmid a_{n}\), we have \(p\nmid c\). Now suppose to the contrary that there exists a primitive non-constant polynomial \(h(x)\in\mathbf{Z}[x]\) dividing \(F(x)\) having degree less than \(\deg\phi(x)\). Then in view of Gauss Lemma, there exists \(g(x)\in\mathbf{Z}[x]\) such that \(\frac{F(x)}{c}=h(x)g(x)\). The leading coefficient of \(F(x)\) and hence those of \(h(x)\) and \(g(x)\) are coprime with \(p\). Note that \(p\) divides \(b_{j}\) for \(0\leq j\leq n-1\). Therefore on passing to \(\mathbf{Z}/p\mathbf{Z}\), we see that the degree of \(\bar{h}(x)\) is same as that of \(h(x)\). Hence \(\deg\bar{h}(x)\) is positive and less than \(\deg\phi(x)\). This is impossible because \(\bar{h}(x)\) is a divisor of \(\frac{\overline{F}(x)}{\bar{c}}=\frac{\bar{a}_{n}}{\bar{c}}\bar{\phi}(x)^{2n}\) and \(\bar{\phi}(x)\) is irreducible over \(\mathbf{Z}/p\mathbf{Z}\). This completes the proof of the lemma. The following result is due to Schur [9] and plays an important role in the proof of Theorems 1.3, 1.4. 
**Lemma 2.4**.: For integers \(k\) and \(n\) with \(n>k>2\), at least one of the \(k\) numbers \(2n+1,2n+3,\cdots,2n+2k-1\) is divisible by a prime \(p>2k+1\). For \(k=2\), the same result holds unless \(2n+1=25\). For \(k=1\), the same result holds unless \(2n+1=3^{u}\) for some integer \(u\geq 2\). _Remark 2.5_.: It can be noted that the first part of the above lemma can be rephrased as saying that, for \(k>2\), the product of any \(k\) consecutive odd numbers each \(>2k+1\) is divisible by a prime \(p>2k+1\). ## 3. Proof of Theorem 1.3 Proof of Theorem 1.3.: By hypothesis, \(n\geq 2\). It suffices to show that \(F_{1}(x)=u_{2n}f_{1}(x)\) can not have a factor in \(\mathbf{Z}[x]\) with degree lying in the interval \([1,(n+1)\deg\phi(x))\). If we choose a prime \(p\) such that \(p\) divides \(2n-1\), then we have \(p\) divides \(\frac{u_{2n}}{u_{2j}}\) for each \(0\leq j\leq n-1\). Hence using Lemma 2.3, we see that \(F_{1}(x)\) can not have any non-constant factor having degree less than \(\deg\phi(x)\). Now assume that \(F_{1}(x)\) has a factor in \(\mathbf{Z}[x]\) with degree lying in the interval \([\deg\phi(x),(n+1)\deg\phi(x))\). We make use of Lemma 2.2 to obtain a contradiction. We consider a new polynomial \(g_{1}(x)\) with \(a_{n}=1=a_{j}(x)\), \(0\leq j\leq n-1\), in \(F_{1}(x)\) given by \[g_{1}(x)=\phi(x)^{2n}+(2n-1)\phi(x)^{2n-2}+\cdots+(2n-1)\cdots 5\cdot 3\cdot\phi( x)^{2}+(2n-1)\cdots 3\cdot 1.\] We define integers \(c_{i}\) such that \(c_{2n}=1\), for an odd integer \(r\) we have \(c_{r}=0\) and \(c_{2n-(r+1)}=(2n-1)(2n-3)\cdots(2n-r+2)(2n-r)\). Hence we can write \(g_{1}(x)=\sum\limits_{i=0}^{2n}c_{i}\phi(x)^{i}\). Observe that for every \(i\in\{0,1,\cdots,2n-1\}\), \(c_{i}\) is divisible by the product of the odd numbers in the interval \((i,2n-1].\) Also, for \(0\leq j\leq n\), \(c_{2j}=\frac{u_{2n}}{u_{2j}}\). Let \(\ell=k-1\) so that \(\ell+1=k\). By Lemma 2.4, there is a prime factor \(p\geq k+1\) that divides \(c_{2n-\ell-1}\) and \(c_{2n-\ell-2}\), which implies that \(p|c_{i}\) for all \(i\in\{0,1,\cdots,2n-\ell-1\}\). Clearly \(p\nmid c_{2n}\). The slope of the right-most edge of the \(\phi\)-Newton polygon of \(g_{1}(x)\) with respect to \(p\) can be determined by \[\max_{1\leq j\leq n}\bigg{\{}\frac{v_{p}(u_{2n})-v_{p}(u_{2n}/u_{2j})}{2j} \bigg{\}}.\] Using the fact that \(v_{p}((2j-1)!)<(2j-1)/(p-1)\), for \(1\leq j\leq n\) we obtain \[v_{p}(u_{2n})-v_{p}(u_{2n}/u_{2j})=v_{p}(u_{2j})\leq v_{p}((2j-1)!)<\frac{2j-1 }{p-1}<\frac{2j}{p-1}.\] As \(p\geq k+1\), we deduce that the slope of the right-most edge of the \(\phi\)-Newton polygon of \(g_{1}(x)\) with respect to \(p\) is \(<\frac{1}{k}\). Hence, using Lemma 2.2, we have a contradiction. This completes the proof of the theorem. ## 4. Proof of Theorem 1.4. Proof of Theorem 1.4.: It suffices to show that \(F_{2}(x)=u_{2n+2}f_{2}(x)\) can not have a factor in \(\mathbf{Z}[x]\) with degree lying in the interval \([1,(n+1)\deg\phi(x))\). If we choose a prime \(p\) such that \(p\) divides \(2n+1\), then we have \(p\) divides \(\frac{u_{2n+2}}{u_{2j+2}}\) for each \(0\leq j\leq n-1\). Hence using Lemma 2.3, we see that \(F_{2}(x)\) can not have any non-constant factor having degree less than \(\deg\phi(x)\). Now assume that \(F_{2}(x)\) has a factor in \(\mathbf{Z}[x]\) with degree lying in the interval \([\deg\phi(x),(n+1)\deg\phi(x))\). We make use of Lemma 2.2 to obtain a contradiction. 
We consider a new polynomial \(g_{2}(x)\) with \(a_{n}=1=a_{j}(x)\), \(0\leq j\leq n-1\), in \(F_{2}(x)\) given by \[g_{2}(x)=\phi(x)^{2n}+(2n+1)\phi(x)^{2n-2}+\cdots+(2n+1)\cdots 7\cdot 5\cdot \phi(x)^{2}+(2n+1)\cdots 5\cdot 3.\] Now we define integers \(c_{i}\) such that \(c_{2n}=1\), for an odd integer \(r\) we have \(c_{r}=0\) and \(c_{2n-(r+1)}=(2n+1)(2n-1)\cdots(2n-r+2)\). Hence we can write \(g_{2}(x)=\sum\limits_{i=0}^{2n}c_{i}\phi(x)^{i}\). Observe that for every \(i\in\{0,1,\cdots,2n-1\}\), \(c_{i}\) is divisible by the product of the odd numbers in the interval \((i+2,2n+1].\) Also, for \(0\leq j\leq n\), \(c_{2j}=\frac{u_{2n+2}}{u_{2j+2}}\). Let \(\ell=k-1\) so that \(\ell+1=k\). By Lemma 2.4, there is a prime factor \(p\geq k+2\) that divides \(c_{2n-\ell-1}\) and \(c_{2n-\ell-2}\) unless either (i) \(k=2\) and \(2n+1=3^{u}\) with some integer \(u\geq 2\) or (ii) \(k=4\) and \(n=13\). For the moment, suppose we are not in either of the situations described by (i) and (ii), and fix \(p\geq k+2\) dividing \(c_{2n-\ell-1}\) and \(c_{2n-\ell-2}\). Then \(p\nmid c_{2n}\) and \(p|c_{i}\) for all \(i\in\{0,1,\cdots,2n-\ell-1\}\). Next, we show that the slope of the right-most edge of the \(\phi\)-Newton polygon of \(g_{2}(x)\) with respect to \(p\) is \(<\frac{1}{k}\), but we note here that our argument will only depend on \(p\) being a prime \(\geq k+2\) and not on \(p\) dividing \(c_{2n-\ell-1}\) and \(c_{2n-\ell-2}\). The slope of the right-most edge of the \(\phi\)-Newton polygon of \(g_{2}(x)\) with respect to \(p\) can be determined by \[\max_{1\leq j\leq n}\bigg{\{}\frac{v_{p}(u_{2n+2})-v_{p}(u_{2n+2}/u_{2j+2})}{2j }\bigg{\}}.\] For \(1\leq j\leq n\), we obtain \[v_{p}(u_{2n+2})-v_{p}(u_{2n+2}/u_{2j+2})=v_{p}(u_{2j+2})\leq v_{p}((2j+1)!).\] If \(p>2j+1\), then \(v_{p}((2j+1)!)=0.\) If \(p\leq 2j+1\), then \(k+1\geq 2j\) and, from the fact that \(v_{p}((2j+1)!)<(2j+1)/(p-1)\), we deduce that \[v_{p}((2j+1)!)<\frac{2j+1}{p-1}\leq\frac{2j+1}{k+1}<\frac{2j}{k}.\] It follows that the slope of the right-most edge of the \(\phi\)-Newton polygon of \(g_{2}(x)\) with respect to \(p\) is \(<\frac{1}{k}\). Hence, using Lemma 2.2, we have a contradiction. Since by hypothesis, we have \(2n+1\neq 3^{u}\) for any integer \(u\geq 2\), this contradiction completes the proof of the Theorem 1.4. ## 5. Proof of Theorem 1.5. Proof of Theorem 1.5.: For proving the result, it is sufficient to show that \(H_{2n}^{\phi}(x)\) is irreducible for all integers \(n\geq 2\) and \(H_{2n+1}^{\phi}(x)\) is \(\phi(x)\) times an irreducible polynomial for all non-negative integers \(n\) except in the case when \(2n+1\) is of the form \(3^{u}\) for some integer \(u\geq 2.\) Let \(m=2n\) or \(2n+1\) according as \(m\) is even or odd. Let \(b_{i}(x)\) with \(0\leq i\leq n-1\) belonging to \(\mathbf{Z}[x]\) be polynomials having degree less than \(\deg\phi(x)\). Let the content of \((a_{n}b_{0}(x))\) is not divisible by any prime less than less than or equal to \(m\). Then, we define \[f_{1}(x)=\frac{a_{n}}{u_{2n}}\phi(x)^{2n}+\sum_{j=0}^{n-1}b_{j}(x)\frac{\phi(x )^{2j}}{u_{2j}},\text{ if }m=2n,\,n\geq 2\] and \[f_{2}(x)=\frac{a_{n}}{u_{2n+2}}\phi(x)^{2n}+\sum_{j=0}^{n-1}b_{j}(x)\frac{\phi (x)^{2j}}{u_{2j+2}},\text{ if }m=2n+1\text{ and }m\neq 3^{u}\text{ for }u\geq 2.\] Using Theorems 1.3, 1.4, we see that \(f_{1}(x)\) and \(f_{2}(x)\) are irreducible polynomial over \(\mathbf{Q}\). 
Observe that \[u_{2j}=1\cdot 3\cdots(2j-1)=\frac{(2j)!}{2\cdot 4\cdots 2j}=\frac{(2j)!}{2^{j}j!}.\] Therefore we see that \[\frac{H_{2n}^{\phi}(x)}{u_{2n}}=a_{n}\frac{\phi(x)^{2n}}{u_{2n}}+\sum_{j=0}^{n-1} \binom{n}{j}a_{j}(x)\frac{\phi(x)^{2j}}{u_{2j}},\] and \[\frac{H_{2n+1}^{\phi}(x)}{u_{2n+2}\phi(x)}=a_{n}\frac{\phi(x)^{2n}}{u_{2n+2}}+ \sum_{j=0}^{n-1}\binom{n}{j}a_{j}(x)\frac{\phi(x)^{2j}}{u_{2j+2}}.\] So, by taking \(b_{j}(x):=\binom{n}{j}a_{j}(x)\) in \(f_{1}(x)\) and \(f_{2}(x)\), we have our result.
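To make the statements above easy to experiment with, the following SymPy sketch checks the identity \(u_{2j}=(2j)!/(2^{j}j!)\) used in this proof and tests one admissible instance of Example 1.8 with \(j=2\) (taking \(a_{2}=1\) and \(a_{1}(x)=a_{0}(x)=1\), which is one allowed choice, not the only one); irreducibility over \(\mathbf{Q}\) is checked after clearing denominators.

```python
from math import factorial
from sympy import symbols, Poly, expand

x = symbols('x')

def u(j):
    """Product of the odd numbers <= j, so u(0) = u(2) = 1, u(4) = 3, u(6) = 15, ..."""
    out = 1
    for m in range(1, j + 1, 2):
        out *= m
    return out

# Identity used in the proof of Theorem 1.5: u_{2j} = (2j)! / (2^j j!)
assert all(u(2 * j) == factorial(2 * j) // (2 ** j * factorial(j)) for j in range(10))

# Example 1.8 with j = 2: phi(x) = x^2 - x + 17, a_2 = 1, a_1(x) = a_0(x) = 1.
phi = x**2 - x + 17
for p in (2, 3, 5):  # phi must be irreducible modulo all primes < 2j + 2 = 6
    assert Poly(phi, x, modulus=p).is_irreducible

g = expand(phi**4 / u(6) + phi**2 / u(4) + 1 / u(2))
# Multiplying by u(6) = 15 clears denominators without affecting irreducibility over Q.
print(Poly(expand(15 * g), x).is_irreducible)  # expected: True, by Theorem 1.4
```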
2306.06008
Time-averaged quantum annealing for weak processes
The quantum Ising chain has shortcuts to adiabaticity when operated with weak processes. However, when exactly do the non-equilibrium effects of the Kibble-Zurek mechanism, inherent to the system, appear in the optimal protocols in such a context? I propose here that such contrasting difference occurs due to the manner by which one measures the excitation spent energy of the system. Therefore, in this work, I made a qualitative analysis of a quantum annealing procedure of the time-averaged excess work, where the system acquires as a diverging decorrelation time the heuristic Kibble-Zurek mechanism relaxation time. Four important effects are then observed: the absence of shortcuts to adiabaticity, the pausing effect around the critical point in the optimal protocol when the Kibble-Zurek mechanism holds, the persistence of the time-averaged work to avoid slowly-varying regime even for large switching times, and diverging fluctuations of the time-averaged work. In the end, by comparing the excess and the time-averaged excess works, I conclude that this last one is not useful to measure the excitation spent energy in weak processes, although brings an intuition to what happens in the strong driving case.
Pierre Nazé
2023-06-09T16:23:43Z
http://arxiv.org/abs/2306.06008v1
# Time-averaged quantum annealing for weak processes ###### Abstract The quantum Ising chain has shortcuts to adiabaticity when operated with weak processes. However, when exactly do the non-equilibrium effects of the Kibble-Zurek mechanism, inherent to the system, appear in the optimal protocols in such a context? I propose here that such contrasting difference occurs due to the manner by which one measures the excitation spent energy of the system. Therefore, in this work, I made a qualitative analysis of a quantum annealing procedure of the time-averaged excess work, where the system acquires as a diverging decorrelation time the heuristic Kibble-Zurek mechanism relaxation time. Four important effects are then observed: the absence of shortcuts to adiabaticity, the pausing effect around the critical point in the optimal protocol when the Kibble-Zurek mechanism holds, the persistence of the time-averaged work to avoid slowly-varying regime even for large switching times, and diverging fluctuations of the time-averaged work. In the end, by comparing the excess and the time-averaged excess works, I conclude that this last one is not useful to measure the excitation spent energy in weak processes, although brings an intuition to what happens in the strong driving case. ## I Introduction Quantum annealing is a procedure by which a quantum system performs a finite-time process in its ground state [1]. The work produced by the non-equilibrium excitations, inherent to the process, is therefore null. Important applications, mainly on adiabatic quantum computing on avoiding informational errors [2; 3; 4; 5; 6; 7; 8], can be applied with such an idea. A paradigmatic model to study quantum annealing is the quantum Ising model [9]. Due to the existence of a second-order phase transition, its dynamics are very rich, being well-explained by the Kibble-Zurek mechanism [1; 10; 11; 12; 13]. This heuristic description is basically a breakdown of the adiabatic theorem when the system operates in the thermodynamic limit, having as a main consequence universal exponents in the work performed. In recent years, an effort has been made in understanding such system in the context of weak drivings, in order to produce possible applications in quantum computing [14; 15; 16]. In particular, it has been found the existence of universal shortcuts to adiabaticity if admissible optimal protocols are extended to distribution functions [17]. This however seems to be in stark contradiction with the expected idea of non-equilibrium excitations associated with the Kibble-Zurek mechanism. Most probably, such effects must appear when high orders than the linear one are accounted for in the work performed by the system. Even though, can linear-response framework give some intuition of what happens to the optimal protocols when the Kibble-Zurek mechanism is present? The key word here is the waiting time associated with the system [17], which depends essentially on the way the excitation energy is measured. In particular, the measurement made with time-averaged work will furnish a waiting time with the same diverging behavior of the heuristic timescale of the Kibble-Zurek mechanism [16], in contrast with the null waiting time furnished by the measurement made with the conventional work [17]. The objective of this work is to make a qualitative analysis of a quantum annealing procedure of the quantum Ising chain when it is measured by the time-averaged work. 
For doing this, I am going to use the near-optimal protocol recently discovered in the literature [18]. Four characteristics will be observed: the absence of shortcuts to adiabaticity, the pausing effect around the critical point in the optimal protocol when the Kibble-Zurek mechanism holds, the persistence of the time-averaged work to avoid slowly-varying regime even for large switching times, and diverging fluctuations of the time-averaged work. In the end, by comparing the excess and the time-averaged excess works, I conclude that this last one is not useful to measure the excitation spent energy in weak processes, although brings an intuition to what happens in the strong driving case. ## II Excess work Consider a quantum system with a Hamiltonian \(\mathcal{H}(\lambda(t))\), where \(\lambda(t)\) is a time-dependent external parameter. Initially, this system is in contact with a heat bath of temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The system is then decoupled from the heat bath and, during a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\). The average work performed on the system during this process is \[W\equiv\int_{0}^{\tau}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle \dot{\lambda}(t)dt, \tag{1}\] where \(\partial_{\lambda}\) is the partial derivative for \(\lambda\) and the superscripted dot is the total time derivative. The generalized force \(\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle\) is calculated using the trace over the density matrix \(\rho(t)\) \[\left\langle A(t)\right\rangle=\operatorname{tr}\left\{A\rho(t)\right\} \tag{2}\] where \(A\) is some observable. The density matrix \(\rho(t)\) evolves according to Liouville equation \[\dot{\rho}=\mathcal{L}\rho:=-\frac{1}{i\hbar}[\rho,\mathcal{H}], \tag{3}\] where \(\mathcal{L}\) is the Liouville operator, \([\cdot,\cdot]\) is the commutator and \(\rho(0)=\rho_{c}\) is the initial canonical density matrix. Consider also that the external parameter can be expressed as \[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{4}\] where to satisfy the initial conditions of the external parameter, the protocol \(g(t)\) must satisfy the following boundary conditions \[g(0)=0,\quad g(\tau)=1. \tag{5}\] Linear response theory aims to express the average of some observable until the first order of some perturbation considering how this perturbation affects the observable and the non-equilibrium density matrix [19]. In our case, we consider that the parameter considerably does not change during the process, \(|g(t)\delta\lambda/\lambda_{0}|\ll 1\), for all \(t\in[0,\tau]\). Using the framework of linear-response theory, the generalized force can be approximated until the first-order as \[\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle= \left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}+ \delta\lambda\left\langle\partial_{\lambda\lambda}^{2}\mathcal{H}\right\rangle _{0}g(t) \tag{6}\] \[-\delta\lambda\int_{0}^{t}\phi_{0}(t-t^{\prime})g(t^{\prime})dt^ {\prime},\] where the \(\left\langle\cdot\right\rangle_{0}\) is the average over the initial canonical density matrix. 
The quantity \(\phi_{0}(t)\) is the so-called response function, which can be conveniently expressed as the derivative of the relaxation function \(\Psi_{0}(t)\) \[\phi_{0}(t)=-\frac{d\Psi_{0}}{dt}, \tag{7}\] where \[\Psi_{0}(t)=\beta\langle\partial_{\lambda}\mathcal{H}(t)\partial_{\lambda} \mathcal{H}(0)\rangle_{0}+\mathcal{C} \tag{8}\] being the constant \(\mathcal{C}\) calculated via the final value theorem [19]. We define its decorrelation time as \[\tau_{c}=\int_{0}^{\infty}\frac{\Psi_{0}(t)}{\Psi_{0}(0)}dt. \tag{9}\] In this manner, the generalized force, written in terms of the relaxation function, is \[\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle= \left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}- \delta\lambda\widetilde{\Psi}_{0}g(t) \tag{10}\] \[+\delta\lambda\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{ \prime})dt^{\prime},\] where \(\widetilde{\Psi}_{0}(t)\equiv\Psi_{0}(0)-\left\langle\partial_{\lambda\lambda }^{2}\mathcal{H}\right\rangle_{0}\). Combining Eqs. (1) and (10), the average work performed at the linear response of the generalized force is \[W= \,\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle _{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0} \tag{11}\] \[+\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{\prime })\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt.\] We remark that in thermally isolated systems, the work is separated into two contributions: the quasistatic work \(W_{\text{qs}}\) and the excess work \(W_{\text{ex}}\). We observe that only the double integral on Eq. (11) has "memory" of the trajectory of \(\lambda(t)\). Therefore the other terms are part of the contribution of the quasistatic work. Thus, we can split them as \[W_{\text{qs}}=\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle _{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}, \tag{12}\] \[W_{\text{ex}}=\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{ \prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{13}\] In particular, the excess work can be rewritten using the symmetry property of the relaxation function, \(\Psi(t)=\Psi(-t)\) (see Ref. [19]), \[W_{\text{ex}}=\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{ 0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{14}\] We remark that such treatment can be applied to classic systems, by changing the operators to functions, and the commutator by the Poisson bracket [19]. ## III Time-averaged excess work Thermally isolated systems performing an adiabatic driven process can be interpreted as having a random decorrelation time [20]. Therefore, at each instant of time that the process is performed, the relaxation function changes with it. This is very similar to what happens with systems performing an isothermal process, where the stochastic aspect of the dynamics changes the relaxation function. In this case, we take a stochastic average on the work to correct such an effect. In the case of thermally isolated systems, I propose as a solution the following time-averaging \[\overline{W}(\tau)=\frac{1}{\tau}\int_{0}^{\tau}W(t)dt. \tag{15}\] Such quantity can be measured in the laboratory considering an average in the data set of processes executed in the following way: first, we choose a switching time \(\tau\). After, we randomly choose an initial condition from the canonical ensemble and a time \(t\) from a uniform distribution, where \(0<t<\tau\). 
Removing the heat bath, we perform the work by changing the external parameter and collecting then its value at the end. The data set produced will furnish, on average, the time-averaged work. In the following, I present how time-averaged work can be calculated using linear-response theory and how one can calculate the decorrelation time of the system. To do so, we define the idea of time-averaged excess work \[\overline{W}_{\text{ex}}=\frac{1}{\tau}\int_{0}^{\tau}W_{\text{ex}}(t)dt, \tag{16}\] where \(W=W_{\text{ex}}+W_{\text{qs}}\). Now we observe how the time-averaged excess work can be calculated using linear-response theory. In Ref. [14], I have shown that \[\overline{W}_{\text{ex}}(\tau)=\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t} \overline{\Psi}_{0}(t-t^{\prime})\dot{g}(t)\dot{g}(t^{\prime})dtdt^{\prime}, \tag{17}\] where \[\overline{\Psi}_{0}(t)=\frac{1}{t}\int_{0}^{t}\Psi_{0}(u)du, \tag{18}\] is the time-averaged relaxation function. This means that calculating the time-averaged excess work is the same as calculating the averaged excess work, but with a time-averaged relaxation function. Again, this is quite similar to what happens to systems performing isothermal processes, where a stochastic average is taken on the relaxation function. Now, when measured with time-averaged work, the thermally isolated system presents a decorrelation time. Indeed, the conditions such that linear-response theory is compatible with the Second Law of Thermodynamics are [21] \[\widetilde{\Psi}_{0}(0)<\infty,\quad\hat{\overline{\Psi}}_{0}(\omega)\geq 0, \tag{19}\] where \(\widetilde{\ }\) and \(\hat{\cdot}\) are respectively the Laplace and Fourier transforms. Therefore, analogously to what happens in an isothermal process, we define a new decorrelation time \[\overline{\tau}_{c}:=\int_{0}^{\infty}\frac{\overline{\Psi}_{0}(t)}{\overline{ \Psi}_{0}(0)}dt=\frac{\widetilde{\overline{\Psi}}_{0}(0)}{\overline{\Psi}_{0} (0)}<\infty. \tag{20}\] ### Optimization of time-averaged excess work Consider the time-averaged excess work rewritten in terms of the protocols \(g(t)\) instead of its derivative \[\overline{W}_{\text{ex}}= \frac{\delta\lambda^{2}}{2}\overline{\Psi}(0)+\delta\lambda^{2} \int_{0}^{\tau}\dot{\overline{\Psi}}_{0}(\tau-t)g(t)dt \tag{21}\] \[-\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\dot{ \overline{\Psi}}(t-t^{\prime})g(t)g(t^{\prime})dtdt^{\prime}. \tag{22}\] Using calculus of variations, we can derive the Euler-Lagrange equation that furnishes the optimal protocol \(g^{*}(t)\) of the system that will minimize the time-averaged excess work [22] \[\int_{0}^{\tau}\dot{\Psi}_{0}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\dot{ \Psi}_{0}(\tau-t). \tag{23}\] In particular, the optimal irreversible work will be [22] \[\overline{W}_{\text{ex}}^{*}=\frac{\delta\lambda^{2}}{2}\overline{\Psi}_{0}(0) +\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\dot{\overline{\Psi}}_{0}(\tau-t)g ^{*}(t)dt. \tag{24}\] Also, the Euler-Lagrange equation (23) furnishes also the optimal protocol that minimizes the variance of the time-averaged work [23]. In this case, the optimal variance of the time-averaged work is \[\sigma_{\text{W}}^{2^{*}}=\frac{\beta\delta\lambda^{2}}{4}\overline{\Psi}_{0} (0)+\frac{\beta\delta\lambda^{2}}{4}\int_{0}^{\tau}\dot{\overline{\Psi}}_{0}( \tau-t)g^{*}(t)dt. \tag{25}\] In Ref. 
[24], the following universal solution was found \[g^{*}(t)=\frac{t+\overline{\tau}_{w}}{\tau+2\tau_{w}}+\sum_{n=0}^{\infty}\frac {a_{n}(\delta^{(n)}(t)-\delta^{(n)}(\tau-t))}{\tau+2\overline{\tau}_{w}}, \tag{26}\] where the time-averaged waiting time \(\overline{\tau}_{w}\) is defined as \[\overline{\tau}_{w}=\widetilde{\overline{\Psi}}(0), \tag{27}\] which is equal to the time-averaged decorrelation time for isothermal processes. In particular, one can consider as a near-optimal protocol only the continuous linear part, with an error around or less than 8% [18]. The objective of this work is to perform a qualitative quantum annealing in the quantum Ising model in the context of weak drivings but in its time-averaged version. By contrast with the shortcut to adiabaticity found in the context of weak drivings, here the optimal protocol will lead to non-zero time-averaged excess work. Therefore, we will observe what are the effects that the diverging time produces in the optimization of the protocol and work. ## IV Time-averaged quantum annealing In Ref. [14], my co-workers and I have shown that the relaxation function per number of spins for the transverse-field quantum Ising chain is \[\Psi_{N}(t)=\frac{16}{N}\sum_{n=1}^{N/2}\frac{J^{2}}{\epsilon^{3}(n)}\sin^{2} \left(\left(\frac{2n-1}{N}\right)\pi\right)\cos\left(\frac{2\epsilon(n)}{ \hbar}t\right)\!, \tag{28}\] where \[\epsilon(n)=2\sqrt{J^{2}+\Gamma_{0}^{2}-2J\Gamma_{0}\cos\left(\left(\frac{2n-1 }{N}\right)\pi\right)}, \tag{29}\] being \(\Gamma_{0}\) the initial value of the magnetic field. The time-averaged relaxation function per number of spins will be \[\overline{\Psi}_{N}(t)=\frac{16}{N}\sum_{n=1}^{N/2}\frac{J^{2}}{\epsilon^{3}(n)} \sin^{2}\bigg{(}\bigg{(}\frac{2n-1}{N}\bigg{)}\,\pi\bigg{)}\text{sinc}\bigg{(} \frac{2\epsilon(n)}{\hbar}t\bigg{)}, \tag{30}\] where \[\text{sinc}(x)=\frac{\sin{(x)}}{x}. \tag{31}\] Given the time-averaged relaxation function per number of spins (30), and using Eq. (20), the time-averaged waiting time will be \[\overline{\tau}_{w}=\frac{\sum_{i=1}^{N/2}\frac{\pi\hbar}{\epsilon^{4}(n)}\sin ^{2}\big{(}\big{(}\frac{2n-1}{N}\big{)}\,\pi\big{)}}{\sum_{i=1}^{N/2}\frac{4}{ \epsilon^{3}(n)}\sin^{2}\big{(}\big{(}\frac{2n-1}{N}\big{)}\,\pi\big{)}}, \tag{32}\] which is naturally measured in units of \(\hbar/J\). In Ref. [16] it is shown that it diverges accordingly to Kibble-Zurek mechanism prediction when \(\Gamma_{0}\to J\). ### Near-optimal protocol To illustrate the effects of the diverging times in a set of parameters where linear-response theory holds [14], I choose the following parameters: \(\hbar=1,J=1,\Gamma_{0}=0.999995,\delta\Gamma=0.00001,N=10^{4}\). In this case, the time-averaged waiting time will be \(\overline{\tau}_{w}=317.099\hbar/J\). The continuous linear part of the optimal protocol, considering \(\tau=1,100,10000\), in units of \(\hbar/J\), is depicted in Fig. 1. For switching times lesser than the time-averaged waiting time, the system is performing a quantum quench and presents the effects of Kibble-Zurek mechanism. Observe that, in this case, the effect of pausing around the critical point in the optimal protocol is similar to the sudden process case, where \(\tau\to 0\)[22]. However, here the switching time is much greater than \(0\), being the appearance of this effect only a manifestation of the ratio between the switching and diverging time-averaged waiting times. 
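The time-averaged waiting time quoted above can be reproduced directly from Eqs. (29) and (32); below is a small NumPy sketch with \(\hbar=J=1\) and the parameters stated in the text, which should yield a value close to the \(317.099\,\hbar/J\) quoted above.

```python
import numpy as np

def waiting_time(Gamma0, J=1.0, hbar=1.0, N=10_000):
    """Time-averaged waiting time of Eq. (32) for the transverse-field quantum
    Ising chain, using the single-particle energies epsilon(n) of Eq. (29)."""
    n = np.arange(1, N // 2 + 1)
    k = (2 * n - 1) * np.pi / N
    eps = 2.0 * np.sqrt(J**2 + Gamma0**2 - 2.0 * J * Gamma0 * np.cos(k))
    s2 = np.sin(k) ** 2
    num = np.sum(np.pi * hbar / eps**4 * s2)   # numerator of Eq. (32)
    den = np.sum(4.0 / eps**3 * s2)            # denominator of Eq. (32)
    return num / den

print(waiting_time(Gamma0=0.999995))  # roughly 317, in units of hbar/J
```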
Observe also that such an effect is identical to what has been obtained in previous results in the literature [25; 26; 27]. For switching times greater than the time-averaged waiting time, the optimal protocol tends to be a line connecting the initial and final point, which is a universal behavior predicted by our works [22; 24]. This effect happens only for a finite number of particles, where one can find switching times that escape from Kibble-Zurek mechanism effects [16]. Finally, in the thermodynamic limit, ideally one can still find very small parameters where linear-response theory holds. In this case, however, there is no slowly-varying regime, and the optimal protocol should only appear with the pausing effect around the critical point. ### Near-optimal time-averaged excess work I now analyze the optimal work with the same parameters used in the previous section. The analysis will be qualitative since I am going to use only the continuous linear part, which gives a nice approximated result [18]. Fig. 2 depicts the result.

Figure 2: Near-optimal time-averaged excess work for the quantum Ising model. Even for reasonably high values of \(\tau\) in comparison to its time-averaged waiting time, the excess work is close to its maximum value. Also, the system presents a persistence to avoid the slowly-varying regime, which indicates that the non-equilibrium effects of the Kibble-Zurek mechanism are still in the system, even in its optimal version.

First, I observe that the optimal protocol is not a shortcut to adiabaticity, by contrast with its normal case [17]. Also, because of the diverging time-averaged waiting time, the time-averaged excess work has values close to its maximum value even for reasonably high switching times. Indeed, the convergence to the slowly-varying regime is very slow, and even for switching times of 10 times the time-averaged waiting time the excess work is not close to zero. This indicates that the non-equilibrium effects of the Kibble-Zurek mechanism persist in the system, even for large times and in its optimal version. 
This will break down the complete suppression of excitation spent energy, as we have witnessed in the time-averaged case. ## V Final remarks In this work, I made a qualitative analysis of the quantum annealing process of the quantum Ising chain in the scenario of measuring the excitation spent energy via the time-averaged work. Four important characteristics were observed: the lack of shortcuts to adiabaticity, the pausing effect around the critical point in the optimal protocol when the Kibble-Zurek mechanism holds, the persistence to avoid the slowly-varying regime, and diverging fluctuations of the time-averaged excess work. Therefore, I conclude that this way of measuring the excitation spent energy is not useful. Even so, it brings a perspective of what would happen if higher orders were included in the conventional work, since the non-equilibrium effects of the Kibble-Zurek mechanism would probably appear.
2305.16993
Discrete-choice Multi-agent Optimization: Decentralized Hard Constraint Satisfaction for Smart Cities
Making Smart Cities more sustainable, resilient and democratic is emerging as an endeavor of satisfying hard constraints, for instance meeting net-zero targets. Decentralized multi-agent methods for socio-technical optimization of large-scale complex infrastructures such as energy and transport networks are scalable and more privacy-preserving by design. However, they mainly focus on satisfying soft constraints to remain cost-effective. This paper introduces a new model for decentralized hard constraint satisfaction in discrete-choice combinatorial optimization problems. The model solves the cold start problem of partial information for coordination during initialization that can violate hard constraints. It also preserves a low-cost satisfaction of hard constraints in subsequent coordinated choices during which soft constraints optimization is performed. Strikingly, experimental results in real-world Smart City application scenarios demonstrate the required behavioral shift to preserve optimality when hard constraints are satisfied. These findings are significant for policymakers, system operators, designers and architects to create the missing social capital of running cities in more viable trajectories.
Srijoni Majumdar, Chuhao Qin, Evangelos Pournaras
2023-05-26T14:53:36Z
http://arxiv.org/abs/2305.16993v1
Discrete-choice Multi-agent Optimization: Decentralized Hard Constraint Satisfaction for Smart Cities ###### Abstract. Making Smart Cities more sustainable, resilient and democratic is emerging as an endeavor of satisfying hard constraints, for instance meeting net-zero targets. Decentralized multi-agent methods for socio-technical optimization of large-scale complex infrastructures such as energy and transport networks are scalable and more privacy-preserving by design. However, they mainly focus on satisfying soft constraints to remain cost-effective. This paper introduces a new model for decentralized hard constraint satisfaction in discrete-choice combinatorial optimization problems. The model solves the cold start problem of partial information for coordination during initialization that can violate hard constraints. It also preserves a low-cost satisfaction of hard constraints in subsequent coordinated choices during which soft constraints optimization is performed. Strikingly, experimental results in real-world Smart City application scenarios demonstrate the required behavioral shift to preserve optimality when hard constraints are satisfied. These findings are significant for policymakers, system operators, designers and architects to create the missing social capital of running cities in more viable trajectories. Decentralized Architectures, Hard Constraints, Global Cost function, Multi Agent Systems + Footnote †: journal: Information Process. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023). A. Ricci, W. Yoh, N. Agmon, B. An (eds.), May 29 - June 2, 2023, London, United Kingdom. © 20023 International Foundation for Autonomous Agents and Multiagent Systems (www:filamus.org). All rights reserved. + Footnote †: journal: Information Process. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023). A. Ricci, W. Yoh, N. Agmon, B. An (eds.), May 29 - June 2, 2023, London, United Kingdom. © 20023 International Foundation for Autonomous Agents and Multiagent Systems (www:filamus.org). All rights reserved. + Footnote †: journal: Information Process. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023). A. Ricci, W. Yoh, N. Agmon, B. An (eds.), May 29 - June 2, 2023, London, United Kingdom. © 20023 International Foundation for Autonomous Agents and Multiagent Systems (www:filamus.org). All rights reserved. ## 1. Introduction Setting hard constraints in how we consume, produce, distribute and manage urban resources becomes paramount for the sustainability of our cities (Pournaras et al., 2018). Coordinated responses to climate change often aim to satisfy hard constraints for carbon footprint emissions and net-zero (Srijoni, 2018). Smart Grid technologies are still under development because of challenges to satisfy hard operational constraints that can cause catastrophic power blackouts (Srijoni, 2018) (see Figure 1). The satisfaction of hard constraints is also the safeguards for safety and the social capital for trust in establishing autonomous self-driving cards at scale (Pournaras et al., 2018). Currently, the complexity, scale and decentralization of socio-technical infrastructures in Smart Cities are a barrier for satisfying hard constraints by design. Instead, soft constraints prevail in the vast majority of optimization and learning approaches applied to the broader spectrum of Smart City applications (Srijoni, 2018; Srijoni, 2018; Srijoni, 2018; Srijoni, 2018). 
This research gap is the focus and subject of this paper. This paper studies decentralized hard constraint satisfaction in discrete-choice multi-agent optimization, in particular distributed multi-objective combinatorial optimization problems. This is a large class of problems (Srijoni, 2018) in which agents autonomously determine a finite number of options to choose from (operational flexibility). The agents may have their own preferences over these alternatives, expressed with a _discomfort cost_ for each option. However, such choices often turn out to be inter-dependent to minimize system-wide _inefficiency_ and _unfairness costs_ (non-linear cost functions) in which the agents may have (explicit or implicit) interest as well. These choices require coordination, and computing the optimal combination of choices is an NP-hard problem (Srijoni, 2018). This is especially the case when agents come with different levels of _selfish vs. altruistic_ behavior, with which they prioritize the minimization of their individual discomfort cost over the collective inefficiency and unfairness cost. Smart Cities are full of emerging application scenarios that can be modelled as such optimization problems (Srijoni, 2018; Srijoni, 2018; Srijoni, 2018): power peak reduction to avoid blackouts, shift of power demand to consume more available renewable energy resources, coordinated vehicle routing to decrease travel times, traffic jams and air pollution, coordinated swarms of Unmanned Aerial Vehicles (UAVs) for distributed sensing, load-balancing of bike sharing stations, and others. So far, heuristics for solving these distributed multi-agent optimization problems mainly address soft constraints, i.e. a best effort to minimize all involved costs. This is because, in the absence of complete information in the agents, it is simpler to design algorithms that search efficiently for good solutions even if these are not the optimal ones. Instead, satisfying hard constraints opens up a Pandora's box: without full information, any autonomous agent choice can violate the hard constraints, which can only be satisfied with certainty at an aggregate level. Figure 1. An illustrative optimization scenario. _Baseline:_ Without any optimized demand response, the power peak causes a blackout. _Soft constraints:_ Significant power peak reduction that does not, however, prevent the power blackout. _Hard constraints:_ Guarantee the reduction of the power peak below the blackout threshold. Letting agents make conservative choices to avoid violating the hard constraints may significantly downgrade performance and optimality, while any rollback of choices violating these constraints is complex and costly. To address this timely problem with impact on Smart Cities, a new decentralized hard constraint satisfaction model is introduced. The model constructs ranges of upper and lower bounds within which the aggregate choices and costs must remain, while optimizing for soft constraints. To solve the cold start problem in the initialization phase, during which agents' choices and costs are undergoing aggregation, a heuristic is introduced for the agents to make choices with the highest average likelihood to satisfy all hard constraints. As this heuristic is sensitive to the agents' self-organization (order) in decision-making and aggregation, agents keep reorganizing themselves after violations of hard constraints as long as a stopping criterion is not reached.
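As a rough illustration of the retry behavior just described, the sketch below runs one round of coordinated decision-making, checks the resulting aggregate against the hard-constraint bounds, and reshuffles the agents' order after a violation until a stopping criterion (here a simple retry budget) is reached. This is a minimal sketch, not the published algorithm: `aggregate_choices` and `violates` are assumed helper functions, and the conservative, expectation-maximizing plan selection of the cold start phase is hidden inside `aggregate_choices`.

```python
import random

def cold_start(agents, aggregate_choices, violates, max_retries=100):
    """Minimal sketch of the cold start phase described above.

    `aggregate_choices(order)` is assumed to run one round of decision-making
    and aggregation (with the conservative, expectation-maximizing plan
    selection) for agents in the given order, returning the global plan.
    `violates(global_plan)` is assumed to check the hard-constraint bounds.
    """
    order = list(agents)
    for attempt in range(max_retries):
        global_plan = aggregate_choices(order)
        if not violates(global_plan):
            return global_plan, attempt      # hard constraints satisfied
        random.shuffle(order)                # self-reorganization after a violation
    raise RuntimeError("stopping criterion reached without satisfying hard constraints")
```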
After the cold start phase and after agents successfully satisfy the hard constraints, they can shift entirely to the optimization of the soft constraints while preserving the satisfaction of the hard constraints locally with a low cost. The proposed model is integrated into a collective learning algorithm, the _Iterative Economic Planning and Optimized Selections_ (I-EPOS) (Zhou et al., 2017). This allows a comprehensive assessment of the decentralized hard constraint satisfaction model and its impact on the optimality of the soft constraints. For the first time, experimental evaluation with real-world data from Smart City scenarios disentangle the performance sacrifice as a result of satisfying hard constraints and the additional agents' altruism level required to mitigate such sacrifice. These findings are highly revealing for system operators, policymakers, system designers and architects. They can inform them about the additional social capital (incentives/rewards) that they require to build (and pay) to preserve the cost-effectiveness of socio-technical infrastructures operating with hard constraints. The contributions of this paper are summarized as follows: (i) A model of decentralized hard constraint satisfaction on optimizing aggregate agents' choices and their aggregate costs. (ii) The instantiation of this model on a decentralized multi-objective combinatorial optimization algorithm of collective learning for multi-agent systems. (iii) The applicability of decentralized hard constraint satisfaction on three Smart Cities scenarios using real-world data: energy, bike sharing and UAVs swarm sensing. (iv) Insights about the optimality sacrifice as moving from soft to hard constraints and how this optimality loss is measured in terms of the required behavioral shift to preserve performance, i.e. restoring altruism deficit. (v) An open-source software artifact implementation of the model for the I-EPOS collective learning algorithm. This paper is summarized as follows: Section 2 reviews related methods. Section 3 introduces the decentralized hard constraint satisfaction model. Section 4 illustrates the applicability of this model to the collective learning algorithm of I-EPOS and its implementation. Section 5 illustrates the experimental methodology and the evaluation. Section 6 concludes this paper and outlines future work. ## 2. Comparison to Related Work We study a discrete-choice distributed multi-objective optimization problem for multi-agent systems with both soft and hard constraints. In such systems, most related optimization approaches (Rasmaldi et al., 2015) operate as partially observable systems with agents communicating with their neighbors. There are approaches that rely on an asynchronous hierarchical process using depth-first search to order agents that communicate with their parents to make choices and optimize objective functions (Bradley et al., 2015). Mailler et al. (2015) cluster agents based on the constraints they attempt to satisfy, with a central controller that uses a branch and bound paradigm for searching solutions. These hierarchical approaches suffer from failure risks, performance bottlenecks, and potential privacy breaches in application scenarios involving sensitive personal data, e.g. location and health data. Multi-agent reinforcement learning approaches with constraints on agent choices are earlier studied. For instance, Curran et al. 
(2016) generate rewards for agents to optimize delay intervals that prevent air traffic congestion with greedy scheduling to implement _hard stop_ (constraints) for agents when the delay surpasses a limit. Rollback to previous states (_warm restarts_) are earlier studied upon violation of global hard constraints in (Sinao et al., 2017). Simao et al. (2017) learn non-violated execution by training using datasets containing constrained actions of the agents and corresponding global states of a centrally controlled environment. Even though the execution is decentralized, the learned model used to provide recommendations to the agents is an outcome of a centralized computation. Decentralized and asynchronous versions of population search-based optimization methods, such as particle swarm optimization (Bradley et al., 2015) or ant colony (Bradley et al., 2015) algorithms show a slow convergence with high communication cost to rollback after violations of hard constraints, while improving global fitness and local search. This may slow down online real-time adaptations. Violation of hard constraints is prevented via message broadcasting that rolls back all choices made after a violation (Bradley et al., 2015). Other earlier approaches use tree overlay network structures for aggregation to aggregate messages from child nodes in form of hypercubes to reduce the frequency of message exchanges (Bradley et al., 2015; Bradley et al., 2015; Mailler et al., 2015). These methods use dynamic programming approach and thus storing all solutions increases the size of messages exponentially. Other highly efficient discrete-choice multi-agent optimization methods, such as COHDA (Mailler et al., 2015) and EPOS (Zhou et al., 2017), address a large spectrum of NP-hard combinatorial problems in the domains of Smart Grids and Smart Cities (Sinao et al., 2017; Sinao et al., 2017; Sinao et al., 2017; Sinao et al., 2017). COHDA generalizes well in different communication structures among the agents that have full view of the systems, while EPOS focuses on hierarchical acyclic graphs such as trees to perform a cost-effective decision-making and aggregation of choices. Table 1 compares the design and efficiency of multi-agent optimization approaches, as well as how they address soft and hard constraints. As COHDA shares full information between agents, it has higher communication cost. The computational cost is lower at global level for COHDA compared to EPOS because of the agents' brute force search to aggregate choices. Both COHDA and EPOS focus on satisfying soft constraints, like minimizing cost functions that satisfy balancing (minimum variance and standard deviation (Zhou et al., 2017)) or matching (minimum root mean square error, residual sum of squares (Srivastava et al., 2016)) objectives. However, satisfying global hard constraints (Table 1) is challenging as agents need to additionally coordinate for choices, whose potential violations are only confirmed at an aggregate level, which makes any rollback of choices to avoid violations particularly complex. An expensive rollback procedure earlier introduced in COHDA (Krause et al., 2017) performs complete rescheduling using a 0-1 multiple-choice combinatorial to find another solution that satisfy the constraints. Summarizing, satisfaction of hard constraints during initialization phase, when agents accumulate information about other agents' choices (cold start problem) remains an open challenge. 
It is also unclear how the satisfaction of hard constraints degrades the performance of these efficient algorithms based on their soft constraints. Addressing these open questions is the focus of this paper. ## 3. Hard Constraint Satisfaction Model This section introduces the general optimization problem and the decentralized hard satisfaction model. ### Optimization problem Table 2 summarizes the mathematical symbols of this article. Assume a socio-technical systems of \(n\) users, each assisted by a software agent, i.e. a personal digital assistant. Each agent \(i\) has a number of \(k\) options to choose from. Each option \(j\) is referred to as a _possible plan_, which is a sequence of real values \(p_{i,j}=(p_{i,j})_{u=1}^{m}\in P_{i}=(p_{i,j})_{j=1}^{k},\forall i\in\{1,...,n\}\). Each agent selects one and only one plan \(p_{i,s}\), which is referred to as the selected plan (i.e. the agent's choice). All agents' choices aggregate element-wise to the collective choice, the _global plan_\(g=(g_{u})_{u=1}^{m}=\sum_{i=1}^{n}p_{i,s}\) of the multi-agent system. A possible plan of an agent may represent a resource schedule or allocation, e.g. the energy consumed over time or the energy consumed from a certain supplier. Multiple possible plans for each agent represent alternatives and its operational flexibility. In the example of energy, the global plan represents the total energy consumption in the system over time or suppliers (see Figure 1). Agents' choices are made based on different, often opposing criteria. Each agent has its individual preferences over the possible plans, measured by the _discomfort cost_\(f_{\text{D}}(p_{i,j})=D_{i,j}\) of each plan \(j\), which also makes the mean discomfort cost in the system \(f_{\text{D}}(p_{1,s},...,p_{n,s})=\frac{1}{n}\cdot\sum_{i=1}^{n}f_{\text{D}}( p_{i,s})\). Each agent can make independent choices to minimize their own discomfort cost. However, agents may also have interest to satisfy the following two general-purpose collective criteria: inefficiency cost \(f_{\text{I}}(\sum_{i=1}^{n}p_{i,s})=I_{i}\) and unfairness cost \(f_{\text{I}}(D_{1,s},...,D_{n,s})=U_{i}\). If these cost functions are non-linear, meaning the choices of the agents depend on each other, the satisfaction of soft constraints, i.e. minimizing the inefficiency and unfairness cost, is a combinatorial NP-hard optimization problem (Srivastava et al., 2016). Balancing (e.g. min variance) and matching objectives (e.g. min residual of sum squares) are examples for measuring inefficiency cost with a broad applicability in load-balancing application scenarios of Smart Cities: minimizing power peaks, shifting demand to times with high availability of renewable energy resources, rerouting vehicles to avoid traffic jams, etc. Table 3 show such a case, in which three agents have two options (plans). These plans may represent energy consumption choices while forming an optimal global plans that meets the available energy supply. The elements may signify the power consumption for the day and night. The agents choose plans with minimum dispersion between elements (soft constraints), but that leads to a suboptimal global plan of (Krause et al., 2017; Srivastava et al., 2016). The global plan should also come with lower dispersion. The variance of the discomfort costs over the population of agents can measure the unfairness cost. 
Satisfying all of these \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline Attributes & I-EPOS (Srivastava et al., 2016) & EPOS (Srivastava et al., 2016) & COHDA (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017) & H-DPO (Krause et al., 2017) ; Krause et al. (2017) \\ \hline Plan Selection - Autonomy & Locally & Parent-Level & Locally & Parent-Level \\ \hline Computational Cost & agent: \(O(pt)\) ; system \(O(pt\) \(log\)\(a)\) & agent: \(O(p^{c}\)\(log\)\(a)\) & agent: \(O(pt)\) ; system: \(O(pt)\) & agent: \(O(p^{c})\) ; system: \(O(p^{c})\) & agent: \(O(p^{c})\) \\ \hline Communication Cost & agent: \(O(t)\) ; system: \(O(t\) \(log\)\(a)\) & agent: \(O(p)\) ; system: \(O(p\)\(log\)\(a)\) & agent: \(O(at)\) ; system: \(O(at)\) & agent: \(O(p^{c})\) ; system: \(O(pt^{c})\) & agent: \(O(p^{c})\) ; system: \(O(pt^{c})\) & agent: \(O(p^{c})\) ; system: \(O(pt^{c})\) & agent: \(O(p^{c})\) ; system: \(O(pt^{c})\) & agent: \(O(pt^{c})\) \\ \hline Information Exchange & tree overlay; aggregate information; & tree overlay; aggregate message; & tree overlay; aggregate information; & tree overlay; full information; & tree overlay; full information; & tree overlay; full information; \\ \hline Soft Constraints & local (initialization), aggregated choices in global plan & local (initialization) & global & no soft constraints \\ \hline Hard Constraints & local (initialization) & local (initialization) & local (initialization), global & local (initialization), global \\ \hline \end{tabular} \end{table} Table 1. Comparison of self adaptive decentralized approaches \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt}} \hline Notation & Meaning \\ \hline \(n\) & Number of agents \\ \hline \(P_{i}\) & Set of possible plans for agent \(i\) \\ \hline \(m\) & Plan size \\ \hline \(k\) & Number of plans \\ \hline \(p_{i,j}\subset\mathbb{R}^{m}\) & The \(f^{m}\) plan as sequence of \(m\) elements of agent \(i\) \\ \hline \(p_{i,s}\) & Selected plan of agent \(i\) \\ \hline \(q\) = \(\sum_{i=1}^{m}p_{i,s}\) & Global plan from selected plans of all \(n\) agents \\ \hline \(\tilde{H}_{i}\) & Discoort factor for agent \(i\) \\ \hline \(\alpha_{i}\) & Unfairness factor for agent \(i\) \\ \hline \(r\) & Constraints satisfaction rate \\ \hline \(I\) & Inefficiency cost \\ \hline \(D\) & Discomfort cost \\ \hline \(U\) & Unfairness cost \\ \hline \(\tilde{H}\colon\mathbb{R}^{m}\to\mathbb{R}\) & Discomfort cost function \\ \hline \(\tilde{H}\colon\mathbb{R}^{m}\to\mathbb{R}\) & Inefficiency cost function \\ \hline \(\tilde{H}\colon\mathbb{R}^{m}\to\mathbb{R}\) & Unfairness cost function \\ \hline \(\tilde{H}\colon\mathbb{R}^{m}\to\mathbb{R}\) & Sequence of upper bound hard constraints \\ \hline \(\mathcal{L}\in\mathbb{R}^{m}\) & Sequence of lower bound hard constraints \\ \hline \(\mathbb{E}(p_{i,j},\mathcal{U})\) & Expected satisfaction for upper bound constraints \\ \hline \(\mathbb{E}(p_{i,j},\mathcal{L})\) & Expected satisfaction for lower bound constraints \\ \hline \(\mathbb{E}(p_{i,j},\mathcal{L})\) & Expected satisfaction for lower bound constraints \\ \hline \end{tabular} \end{table} Table 2. Mathematical notations used in this paper. (opposing) objectives depends on the selfish vs. altruistic behavior of the agents, e.g. whether they accept a plan with a bit higher discomfort cost to decrease inefficiency or unfairness cost. We can observe this in Table 3. 
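To make the combinatorial nature of this toy scenario concrete, the following sketch brute-forces the plan combinations of three agents with two possible plans each and picks the combination with the lowest variance of the global plan (the balancing inefficiency cost). The plan values are illustrative assumptions; only Agent C's [6, 2] option and the optimal global plan [10, 10] discussed next are taken from the text.

```python
from itertools import product
import numpy as np

# Illustrative (day, night) plans; values are assumed except C's [6, 2] option.
plans = {
    "A": [[1, 5], [2, 4]],
    "B": [[3, 3], [1, 5]],
    "C": [[2, 4], [6, 2]],
}

def inefficiency(global_plan):
    # Balancing objective used throughout the paper: minimum variance.
    return np.var(global_plan)

best = min(product(*plans.values()),
           key=lambda combo: inefficiency(np.sum(combo, axis=0)))
print("optimal selections:", best)                   # Agent C picks [6, 2]
print("optimal global plan:", np.sum(best, axis=0))  # [10 10], variance 0
```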
If Agent C selects a plan ([6,2]) with higher energy requirement during the day, it achieves to minimize the inefficiency cost and an optimum global plan of [10,10] is achieved. Such multi-objective trade-offs are modelled with the parameters \(\alpha_{i}\) and \(\beta_{i}\) such that: \[\begin{split} p_{i,s}=\underset{j=1}{\overset{k}{argmin}}& (1-\alpha-\beta)\cdot\mathbb{f}(p_{1,s}+...+p_{i,j}+...+p_{n,s})\\ &+\alpha\cdot f_{\mathbb{U}}(D_{1,s},...,D_{i,j},...,D_{n,s})\\ &+\beta\cdot f_{\mathbb{U}}(D_{1,s},...,D_{i,j},...,D_{n,s}).\end{split} \tag{1}\] From the above equation, it is apparent that the choice of a plan cannot be easily optimized without (i) information of the other agents' choices and (ii) coordination of the agents' choices for non-linear cost functions that depend on each other. The optimization heuristics (Section 2) address the satisfaction of such soft constraints via various sequential information exchange, information aggregation and communication schemes that coordinate agents' choices. See the baseline scenario of soft constraints in Table 3. However, introducing hard constraints on the aggregated choices \(g\) and their costs \(D_{i,j}\), \(I_{i}\), \(U_{i}\) is challenging. This is because there is no guarantee to satisfy the hard constraints in the absence of full information, which is usually the case for decentralized heuristics that require time to converge to full information. This is a particular cold start problem of initialization/exploration, during which the first choices are made under high uncertainty. As choices with high likelihood of violating hard constraints add up incrementally, it becomes increasingly hard to discover choices that will prevent such violations. Hence, agents need different and more conservative selection criteria that prioritize hard over soft constraints. Designing and evaluating these criteria is a contribution of this paper. ### A heuristic for satisfying hard constraints The heuristic for satisfying the hard constraints on aggregate choices and their costs is illustrated in this section. Table 3 also illustrates an example of applying the heuristic in practice. #### 3.2.1. Constraints on aggregate choices For each element \(g_{u}\) of a global plan \(g\), a hard constraint is defined by a range (envelope) of an upper \(\mathcal{U}=(\mathcal{U}_{u})_{u=1}^{m}\) and lower \(\mathcal{L}=(\mathcal{L}_{u})_{u=1}^{m}\) bound, where \(\mathcal{U}\), \(\mathcal{L}\) are also sequences of real values of equal size \(|\mathcal{U}|=|\mathcal{L}|=|g|=m\). Each value \(u\) of the upper bound denotes that \(\mathcal{U}_{u}\geq g_{u}\), whereas for the lower bound it holds that \(\mathcal{L}_{u}\leq g_{u}\). The selected plan expected to satisfy all hard constraints at the initialization phase, during which the aggregate choices (global plan \(g\)) are not known, is estimated as follows: \[p_{i,s}=\underset{p_{i}\in P_{i}}{argmax}\mathbb{E}(p_{i,j},\mathcal{U}, \mathcal{L}), \tag{2}\] where the expectation of satisfaction is given by: \[\mathbb{E}(p_{i,j},\mathcal{U},\mathcal{L})=\sum_{u=1}^{m}(\mathcal{U}_{u}-p_ {i,j,u})+\sum_{u=1}^{m}(p_{i,j,u}-\mathcal{L}_{u}). \tag{3}\] #### 3.2.2. 
Constraints on aggregate costs The modeling for the hard constraints on the aggregate costs is exactly the same as the one of the aggregate choices, where the expected satisfaction for each of the costs of \(f_{\mathbb{U}}(p_{1,s},...,p_{n,s})\), \(f_{\mathbb{U}}(\sum_{i=1}^{n}p_{i,s})\) and \(f_{\mathbb{U}}(D_{1,s},...,D_{n,s})\) is calculated for upper and lower bounds with \(|\mathcal{U}|=|\mathcal{L}|=1\). #### 3.2.3. Constraint satisfaction rate The effectiveness of the hard constraint satisfaction heuristic is measured by the satisfaction rate (\(r\)). This is the total number of satisfactions achieved out of a total number of trials made. These trials are often existing parameters of the optimization algorithms, for instance, random repetitions, or the order with which agents aggregate choices made to coordinate and optimize their own choices. ## 4. Hard constraints implementation The model of decentralized hard constraint satisfaction is implemented and integrated into the I-EPOS collective learning algorithm1[(25)]. I-EPOS solves a large class of optimization problems, as formalized in Section 3.1. It is chosen due to its large spectrum of applicability in Smart City scenarios [(22)] as well as its superior performance in satisfying soft constraints [(25)]. Efficient coordinated choices are made using a self-organized [(21)] tree topology within which agents organize their interactions, information exchange and decision-making. I-EPOS benefits from the fact that trees are acyclic graphs: communication cost is very low and all exchanges in I-EPOS are at an aggregate level without double-counting. The coordination is a result of a more informed decision-making: each agent makes a choice taking into account the choices of a group of other agents (the tree branch underneath during initialization) or the choices of all agents at a previous time point (after initialization). Coordination evolves in multiple learning iterations, each consisting of a _bottom-up_ and _top-down_ phase. During the bottom-up phase, each agent chooses based on the new choices of the agents below and the choices of all agents in the previous iteration. However, each agent has an information gap: It has no information about the subsequent choices of the agents above in the tree. This problem is solved during the top-down phase, in which agents roll back (back propagation) to choices of the previous iteration as long as no costs reduction is achieved. Further information about the design of the I-EPOS collective learning algorithm is out of the scope of this paper and can be found in earlier work [(25)]. Footnote 1: Available at: [https://github.com/epournaras/EPOS/tree/hard_constraints](https://github.com/epournaras/EPOS/tree/hard_constraints). The decentralized hard constraint satisfaction model is implemented by filtering out the possible plans in Equation 1 that violate the given upper and lower bounds. However, in the first learning iteration, it is not possible determine these plans that violate the hard constraints with certainty because each agent only knows about the aggregate choices of the agents underneath (and not the ones above). As a result, the root agent may end up having no possible plan that does not violate the hard constraints. To prevent the likelihood of these violations, the agents make more conservative choices according to Equation 2 during the first iteration, aiming at maximizing the expected satisfaction of the hard constraints. 
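A compact sketch of the envelope test of Section 3.2.1 and the satisfaction rate of Section 3.2.3 is given below; the function names are illustrative and not part of the I-EPOS code base.

```python
import numpy as np

def satisfies_envelope(global_plan, upper, lower):
    """Hard constraints on aggregate choices: L_u <= g_u <= U_u for every
    element u of the global plan."""
    g = np.asarray(global_plan)
    return bool(np.all(g <= np.asarray(upper)) and np.all(np.asarray(lower) <= g))

def satisfaction_rate(global_plans, upper, lower):
    """Satisfaction rate r: satisfactions achieved out of the total number of
    trials, e.g. random repetitions with different agent orderings."""
    hits = sum(satisfies_envelope(g, upper, lower) for g in global_plans)
    return hits / len(global_plans)

# Example with illustrative global plans and bounds:
print(satisfies_envelope([10, 10], upper=[12, 12], lower=[8, 8]))  # True
print(satisfies_envelope([6, 12], upper=[12, 12], lower=[8, 8]))   # False
```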
Once the hard satisfactions are satisfied, the agents switch back to plan selection according to Equation 1, while keep filtering plans that violate the hard constraints. The agents cannot violate the hard constraints in these subsequent learning iterations because they always have the option to roll back to the choices made at the end of the first learning iteration during which hard constraints are satisfied (via the top-down phase). Figure 2 illustrates the implementation of the hard constraints model on the open-source I-EPOS software artifact (Kumar et al., 2017). The implementation of the cost function interfaces is extended to filter out plans that violate the hard constraints, as well as the selection based on the expected satisfaction principle of Equation 2. The hard constraints are controlled via the main input parameter file of I-EPOS (Java Properties). Constraints on aggregate choices and costs can be activated and deactivated. Two input.csv files are introduced, one for each type of hard constraints. Both contain the upper/lower bounds and the coding of the operators. ## 5. Experimental Evaluation Table 4 illustrates the application scenarios and settings of the experimental evaluation. A number of 1000 agents run the I-EPOS collective learning algorithm (Kumar et al., 2017). They are self-organized (Kumar et al., 2017) in a height-balanced binary tree. The algorithm repeats 200 times, each time with a different random positioning of the agents in the tree. This introduces different decision-making order with which agents coordinate their optimized choices. At each repetition, the algorithm runs for 40 iterations, which is usually sufficient for convergence (Kumar et al., 2017). Evaluation is performed in three Smart City application scenarios: (i) energy demand-response, (ii) bike sharing and (iii) UAV swarm sensing. The optimized inefficiency cost function and the generation of plans are also outlined in Table 4. ### Smart City application scenarios **Energy demand-response.** This dataset is based on energy disaggregation of the simulated zonal power transmission in the Pacific Northwest Smart Grid Demonstrations Project (Kumar et al., 2017; Kumar et al., 2017). It contains 5600 consumers with their energy demand recorded every 5 min in a 12h span of a day. The goal is to perform power peak shaving to prevent blackouts (Kumar et al., 2017) by minimizing the variance. \begin{table} \begin{tabular}{l l l l} \hline \hline Parameter & Energy & Bike Sharing & UAV Swarm \\ \hline Num. of agents (\(n\)) & 1000 & 1000 & 1000 \\ Num. of plans (\(k\)) & 10 & 1 to 24 & 64 \\ Plans size (\(m\)) & 144 & 98 & 64 \\ Num. of repetitions & 200 & 200 & 200 \\ Num. of iterations & 40 & 40 & 40 \\ Inefficiency cost \(f\) & Min VAR & Min VAR & Min RMSE \\ \hline \hline \end{tabular} \end{table} Table 4. Parameters of the EPOS algorithm (Kumar et al., 2017) for datasets \begin{table} \begin{tabular}{l|c| **Bike sharing**. The Hubway Data Visualization Challenge dataset consists of the trip records of the Hubway bike sharing system in Paris (Pars, 2016; Pars, 2016). The data contain 2300 users, each with a varying number of possible plans for the bike stations from which bikes are picked up or returned (98 stations in total). The goal is to keep the bike sharing stations load balanced by minimizing the variance of the global plan. 
**UAV swarm sensing.** The dataset contains 1000 drones that can capture images or videos of vehicle traffic information on public roadways over 64 areas of interest (sensing cells) that are uniformly distributed in the city map (Pars, 2016; Pars, 2016). Drones aim to collect the required amount of sensing data (target plan) determined by a continuous kernel density estimation, for instance, monitoring cycling risk based on past bike accident data (Brock et al., 2016). ### Hard constraint satisfaction works For each application scenario, three incremental levels of hard constraints are set to the aggregate choices (global plan \(g\)). These levels are quantiles chosen empirically by observing the median global plan after several executions of I-EPOS based on soft constraints. The agents are assumed here altruistic, such that: \(\beta_{i}=0,\alpha_{i}=0,\forall i\in\{1,...,n\}\). In the scenario of energy demand-response, hard constraints based on upper and lower bounds may represent an envelope of operation within which demand does not cause a blackout. In the scenario of bike sharing, hard constraints may represent limits on incoming or outgoing bikes that infrastructure operators may have, e.g. parking capacity. In the scenario of UAV swarm sensing, an upper bound of hard constraints may represent privacy-sensitive areas or regulated no-fly zones for drones. In contrast, a lower bound Figure 4. Optimization under soft and three levels of hard constraints in the bike sharing scenario. Light-grey shaded areas represent the upper bound and dark-grey shaded areas the lower bound. Arrows point to violations of hard constraints. Figure 5. Optimization under soft and three levels of hard constraints in the UAV swarm sensing scenario. Light-grey shaded areas represent the upper bound and dark-grey shaded areas the lower bound. Arrows point to violations of hard constraints. Figure 3. Optimization under soft and three levels of hard constraints in the energy demand-response scenario. Light-grey shaded areas represent the upper bound and dark-grey shaded areas the lower bound. Arrows point to violations of hard constraints. may represent minimal information required to monitor effectively a phenomenon, e.g. a forest fire or traffic jam. Figures 3, 4 and 5 show the global plans for the soft constraint (baseline) along with three incremental and alternating levels of hard constraints (upper/lower bounds). Under soft constraints, the upper and lower bounds are violated, whereas hard constraints prevent these violations. Stricter hard constraints prevent more violations, however, the satisfaction rate (\(r\)) drops significantly, while the inefficiency cost increases. This shows that strict hard constraints are likely to oppose the soft constraints. Last but not least, note that the scenario of UAV swarm sensing allows the satisfaction of a larger number of hard constraints, while preserving high satisfaction rates. This is because the agents have more operational flexibility by generating a larger number of plans (64\(>\)24\(>\)10). ### Behavior shift can mitigate hard constraints Satisfying hard constraints results in a degrade of the performance profile (lower inefficiency cost) achieved under soft constraints. The recovery from this degrade is measured here as the required social capital (behavioral shift) that agents need to offer such that soft and hard constraints have equivalent performance. 
The raise of social capital is measured by the reduction of the mean \(\beta_{i}\) value in the population of agents that makes them more altruistic, see Equation 1. The following method is introduced to measure the behavioral shift: I) Perform parameter sweep on I-EPOS under soft constraints for \(\beta_{i}=0\), to \(\beta_{i}=1,\forall i\in\{1,...n\}\) with a step of 0.025. II) For each I-EPOS execution in Step 1 with a \(\beta_{i}\) value, a discomfort cost \(D\) and an inefficiency cost \(I\), run I-EPOS under a hard constraint on the mean discomfort cost with an upper bound value equals to \(D\) (the one of the I-EPOS execution under soft constraints). III) Derive the increased inefficiency cost under the hard constraint on the discomfort cost. IV) Find the \(\beta_{i}\) value from Step 1 that has the closest inefficiency cost with the one derived in Step 3 under the hard constraint and V) Compare the two \(\beta_{i}\) values in Step 2 and 4. The difference represents the required mean behavioral shift to mitigate the performance degrade of hard constraints. Figure 6 illustrates the inefficiency and discomfort cost as a function of \(\beta_{i}\) under soft and hard constraints and the three different application scenarios. It becomes apparent that hard constraints require a minimum and significant level of altruism, otherwise, inefficiency cost rapidly explodes, especially in the scenarios without significant operational flexibility. This is also the reason why the discomfort cost becomes easier to reduce in the scenario of UAV swarm sensing, which comes with higher operational flexibility. Figure 7 shows the required behavior shift to restore the performance loss as a result of satisfying hard constraints. For energy demand-response, the agents need on average 44.29% higher altruism under hard constraints to meet the performance of the soft constraints. The bike sharing scenario suggests an almost complete shift from selfish to altruistic behavior. Strikingly, the scenario of UAV swarm sensing shows performance gain as a result of satisfying hard constraints. As the number of plans is significantly higher for the UAV dataset, the search space is larger, which affects the optimality of the collective iterative learning paradigm in I-EPOS. The satisfaction rate for the energy demand-response (Figure 7(a)), bike sharing (Figure 7(b)) and UAV swarm sensing (Figure 7(c)) are 54.45%, 19.44% and 61.21% respectively. For \(\beta\leq 0.475\) and \(\beta\leq 0.25\), the satisfaction rate is 100% for the UAV swarm sensing and energy demand-response respectively. The operational flexibility by Figure 6. Inefficiency and discomfort cost as a function of the altruism level for soft and hard constraints for the three Smart City application scenarios. Figure 7. Required behavioral shift to mitigate the performance degrade of satisfying hard constraints for the three Smart City application scenarios. The satisfaction rate is also shown for each scenario. Performance comparison: \(\beta\) of soft constraints vs. \(\beta\) of hard constraints and satisfaction rate. higher number of possible plans increases the constraints satisfaction rate. ## 6. Conclusion and Future Work To conclude, this paper shows that the decentralized satisfaction of global hard constraints is feasible. 
It is a significant enabler for sustainability and resilience in several Smart City application scenarios such as energy demand-response to avoid blackouts, load balancing of bike sharing stations to make more accessible low-carbon transport modalities as well as improved sensing quality and efficiency by swarms of energy-constrained drones. Results show that hard constraints can be easily violated when optimizing exclusively for soft constraints. Instead, the proposed model prevents to a very high extent such violations. Results also reveal the performance cost when hard constraints are introduced and how this cost can be mitigated by a behavioral shift towards a more altruistic behavior that sacrifices individual comfort for collective efficiency. These findings are invaluable for informing policy makers and systems operators of the required social capital that they need to raise to satisfy ambitious policies such as net-zero. The open-source software artifact implementation of the proposed model to the I-EPOS collective learning algorithm is a milestone to encourage further research and application scenarios based on decentralized hard constraint satisfaction. Future work includes the applicability of the model to other decentralized optimization algorithms. The proposed heuristic is designed to satisfy all hard constraints together, which may be a limitation for high numbers of such opposing constraints, i.e. sacrifice of optimality and low satisfaction rates. Instead, a more incremental (and possibly partial) satisfaction of the hard constraints is part of future work. The further understanding of how to recover missing social capital to preserve both efficiency and fairness is also subject of future ## Acknowledgements This work is supported by a UKRI Future Leaders Fellowship (MR-/W009560/1): _Digitally Assisted Collective Governance of Smart City Commons-ARTIO_', the Alan Turing Fellowship project 'New Edge-Cloud Infrastructure for Distributed Intelligent Computing' and the SNF NRP77 'Digital Transformation' project 'Digital Democracy: Innovations in Decision-making Processes', #407740_187249.
2307.00988
Tellurium emission line in kilonova AT 2017gfo
The late-time spectra of the kilonova AT 2017gfo associated with GW170817 exhibit a strong emission line feature at $2.1\,{\rm \mu m}$. The line structure develops with time and there is no apparent blue-shifted absorption feature in the spectra, suggesting that this emission line feature is produced by electron collision excitation. We attribute the emission line to a fine structure line of Tellurium (Te) III, which is one of the most abundant elements in the second r-process peak. By using a synthetic spectral modeling including fine structure emission lines with the solar r-process abundance pattern beyond the first r-process peak, i.e., atomic mass numbers $A\gtrsim 88$, we demonstrate that [Te III] $2.10\,\rm \mu m$ is indeed expected to be the strongest emission line in the near infrared region. We estimate that the required mass of Te III is $\sim 10^{-3}M_{\odot}$, corresponding to the merger ejecta of $0.05M_{\odot}$, which is in agreement with the mass estimated from the kilonova light curve.
Kenta Hotokezaka, Masaomi Tanaka, Daiji Kato, Gediminas Gaigalas
2023-07-03T13:06:54Z
http://arxiv.org/abs/2307.00988v1
# Tellurium emission line in kilonova AT 2017gfo ###### Abstract The late-time spectra of the kilonova AT 2017gfo associated with GW170817 exhibit a strong emission line feature at 2.1 \(\mu\)m. The line structure develops with time and there is no apparent blue-shifted absorption feature in the spectra, suggesting that this emission line feature is produced by electron collision excitation. We attribute the emission line to a fine structure line of Tellurium (Te) III, which is one of the most abundant elements in the second r-process peak. By using a synthetic spectral modeling including fine structure emission lines with the solar r-process abundance pattern beyond the first r-process peak, i.e., atomic mass numbers \(A\gtrsim 88\), we demonstrate that [Te III] 2.10 \(\mu\)m is indeed expected to be the strongest emission line in the near infrared region. We estimate that the required mass of Te III is \(\sim 10^{-3}M_{\odot}\), corresponding to the merger ejecta of \(0.05M_{\odot}\), which is in agreement with the mass estimated from the kilonova light curve. keywords: transients: neutron star mergers ## 1 Introduction The origin of r-process elements is a long-standing problem in astrophysics (Burbidge et al., 1957; Cameron, 1957). Neutron star mergers have been considered as promising sites of r-process nucleosynthesis (Lattimer & Schramm, 1974). A neutron star merger, GW170817, was accompanied by an uv-optical-infrared counterpart, a kilonova (or macronova) AT 2017gfo, which provides strong evidence that r-process nucleosynthesis occurs in neutron star merger ejecta (see, e.g., Metzger, 2017; Nakar, 2020; Margutti & Chornock, 2021, for reviews). A series of spectral data of the kilonova AT 2017gfo was obtained in the the optical and near infrared bands from 0.5 to 10 days after the merger (Andreoni et al., 2017; Chornock et al., 2017; Kasliwal et al., 2017; Pian et al., 2017; Smartt et al., 2017; Tanvir et al., 2017; Troja et al., 2017). The kilonova AT 2017gfo is dominated by the photospheric emission at the early times. The photospheric emission around a few days after the merger peaks in the near infrared band, indicating the existence of lanthanides, which have strong absorption at optical to near infrared wavelengths (Barnes & Kasen, 2013; Kasen et al., 2013; Tanaka & Hotokezaka, 2013; Fontes et al., 2020; Tanaka et al., 2020; Kawaguchi et al., 2018; Barnes et al., 2021). The early spectra also exhibit several absorption structures including: (i) the 0.8 \(\mu\)m feature attributed to Sr II or He I (Watson et al., 2019; Domoto et al., 2021; Gillanders et al., 2022; Perego et al., 2022; Tarumi et al., 2023) and (ii) the 1.3 \(\mu\)m and 1.5 \(\mu\)m features attributed to La III and Ce III, respectively (Domoto et al., 2022). In addition to the elemental identification, Sneppen et al. (2023) demonstrated that the spectra in the photospheric phase are useful to study the geometry of the outer part of the kilonova ejecta, \(\gtrsim 0.2c\). After the photospheric phase, kilonovae enter the nebula phase, where the ejecta is heated by charged decay products of the radioactivity of r-process nuclei and the heat is radiated through atomic emission lines. Examining kilonova nebular spectra provides opportunities to identify atomic species synthesized in the merger ejecta that may not appear as absorption lines during the photospheric phase. For instance, Hotokezaka et al. 
(2022) interpreted the detection of _Spitzer_(Villar et al., 2018; Kasliwal et al., 2022) at 4.5 \(\mu\)m at 43 and 74 days after the merger as emission lines of selenium (Se) or tungsten (W). In the early nebular phase, \(\sim 10\) days, the infrared emission is of particular interest because the absorption opacity due to atomic transitions is lower compared to the optical region (e.g., Tanaka et al., 2020), and thus, the emission lines are expected to appear as early as \(\lesssim 10\) days. Most of infrared emission lines are expected to arise from fine-structure transitions in the ground terms of heavy elements, for which the line wavelengths and transition rates can be obtained with reasonably high accuracy from the experimentally calibrated atomic energy levels. Furthermore, such emission lines can be used to estimate the mass distribution of the emitting ions from the emission line spectra. In fact, the mass distributions of ions in type Ia supernova ejecta have been derived from the infrared nebular spectra (e.g., Kwok et al., 2023; DerKacy et al., 2023) In section 2, we study an emission line feature at 2.1 \(\mu\)m in the kilonova AT 2017gfo spectra from 7.5 to 10.5 days. We attribute this line to a fine-structure line of doubly ionized Tellurium (Te III, atomic number 52). The Te III mass that is required to explain the observed data is estimated as \(\sim 10^{-3}M_{\odot}\). With a synthetic spectral modeling with the solar r-process abundance pattern, we show that [Te III] 2.10 \(\mu\)m is the strongest fine structure emission line in the near infrared region. In section 3, we conclude the results and discuss the uncertainties and implications. ## 2 TeV III line in Kilonova The emission lines produced through radiative de-excitation of atoms emerge from the optically thin region of the ejecta. The optical depth of the kilonova ejecta with an expansion velocity of \(v_{\rm ej}\) and a mass of \(M_{\rm ej}\) is \[\tau \approx\frac{\kappa M_{\rm ej}}{4\pi(v_{\rm ej}t)^{2}}, \tag{1}\] \[\approx 1\left(\frac{\kappa}{1\,{\rm cm^{2}g^{-1}}}\right)\left(\frac{M_{ \rm ej}}{0.05M_{\odot}}\right)\left(\frac{v_{\rm ej}}{0.1c}\right)^{-2}\left( \frac{t}{10\,{\rm day}}\right)^{-2}, \tag{2}\] where \(\kappa\) is the opacity and \(t\) is the time since merger. The opacity is dominated by bound-bound transitions of heavy elements and depends on the composition and wavelengths. Tanaka et al. (2020) show that the expansion opacity decreases with wavelength, e.g., \(\sim 10-100\,{\rm cm^{2}g^{-1}}\) around \(0.5\,\mu{\rm m}\) and \(\lesssim 1\,{\rm cm^{2}g^{-1}}\) around \(2\,\mu{\rm m}\). Therefore, infrared emission lines are expected to emerge at the earlier time than optical lines. With the ejecta parameters of AT 2017gfo, we expect emission lines to dominate over the photospheric emission as early as \(\sim 10\,{\rm days}\) around \(2\,\mu{\rm m}\). Figure 1 shows the spectral series of the kilonova AT 2017gfo from 7.5 to 10.5 days after the merger taken by X-shooter on the Very Large Telescope (Pian et al., 2017). The observed spectra are composed of several line features and a continuum component extending from the optical to near infrared bands. We model the underlying continuum spectrum by blackbody radiation, where the photospheric velocity and temperature for 7.5-10.5 days are 0.06 - 0.08\(c\) and 1700 - 2400 K, respectively. The observed spectra clearly show an emission line at 2.1 \(\mu{\rm m}\) (see Gillanders et al., 2023 for a detailed analysis). 
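As a quick numerical check of Eq. (2), the short sketch below evaluates Eq. (1) in CGS units for the fiducial parameters quoted there (\(\kappa=1\,{\rm cm^{2}\,g^{-1}}\), \(M_{\rm ej}=0.05M_{\odot}\), \(v_{\rm ej}=0.1c\), \(t=10\) days); the physical constants used are approximate values, not taken from the paper.

```python
import math

# Approximate CGS constants
M_SUN = 1.989e33      # g
C_LIGHT = 2.998e10    # cm / s
DAY = 86400.0         # s

def optical_depth(kappa, m_ej_msun, v_ej_c, t_day):
    """tau ~ kappa * M_ej / (4 pi (v_ej t)^2), Eq. (1)."""
    m = m_ej_msun * M_SUN
    r = v_ej_c * C_LIGHT * t_day * DAY
    return kappa * m / (4.0 * math.pi * r**2)

# Fiducial values of Eq. (2): tau ~ 1
print(optical_depth(kappa=1.0, m_ej_msun=0.05, v_ej_c=0.1, t_day=10.0))  # ~1.2
```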
The expansion velocity of the line emitting region is \(\sim 0.07c\) derived from Doppler broadening of the line, which is consistent with the picture where the emission line is produced outside the photosphere. The line flux remains roughly constant with time while the continuum flux declines, and thus, the line-to-continuum ratio increases from \(\sim 1\) at 7.5 days to \(\sim 1.5\) at 10.5 days. This development of the emission line without a blue-shifted absorption feature indicates that the emission at 2.1 \(\mu{\rm m}\) is a forbidden line driven by electron collision rather than an emission line associated with an absorption line, e.g., a P-Cygni line or a fluorescence line. The wavelength of the peak of the emission line feature indeed coincides with a fine structure line, [Te III] 2.10 \(\mu{\rm m}\), arising from the transition between the ground level \({}^{3}{\rm P}_{0}\) and the first excited level \({}^{3}{\rm P}_{1}\). It is worth noting that [Te III] 2.10 \(\mu{\rm m}\) has been detected in planetary nebulae (Madonna et al., 2018). Note that the transition between the ground level \({}^{3}{\rm P}_{2}\) and the second excited level \({}^{3}{\rm P}_{1}\) of Te I also produces an emission line at 2.1 \(\mu{\rm m}\). As discussed later, the contribution of Te I line is weaker than Te III line. It may not be surprising that Te III produces the strongest emission lines because Te is among the most abundant elements in the second r-process peak. Figure 2 shows the mass fraction of each atom at 10 days after the merger. Here we assume that the final abundance pattern matches the solar r-process residual with atomic numbers \(A\geq 88\)(Hotokezaka & Nakar, 2020), i.e., the elements beyond the first r-process peak. With this assumption, the most abundant element is Sr and the second most is Te at 10 days. Note also that [Te III] 2.10 \(\mu{\rm m}\) is particularly expected to be strong as long as Te III is abundant outside the photosphere because this line is produced by radiative decay of the first fine structure transition level, which is easily excited by electron collision. For the iron peak elements, [Co III] 11.89 \(\mu{\rm m}\) and [Co II] 10.52 \(\mu{\rm m}\) represent lines of the same nature. Indeed, these are among the most prominent mid-IR lines observed in SNe Ia and SN 1987A, respectively (Kwok et al., 2023; Wooden et al., 1993). Let us first give an estimate of the amount of Te III from the observed line flux assuming that the observed line flux is predominantly produced by Te III and the ejecta is optically thin to the [Te III] 2.10 \(\mu{\rm m}\) line. The total line luminosity is given by \[L\sim h\nu_{10}A_{10}f_{1}N({\rm Te\,III}), \tag{3}\] where \(h\nu_{10}\), \(A_{10}\approx 2\,{\rm s^{-1}}\), and \(f_{1}\) are the excitation energy, the Figure 1: Spectral series of the kilonova AT 2017gfo 7.5–10.5 days after the merger. The observed data were taken by X-shooter on VLT (Pian et al., 2017). The synthetic spectra are composed of fine structure emission lines (dashed curve) and a continuum (dotted curve), where the continuum emission is approximate by a blackbody with temperatures \(T_{\rm BB}=2400\), 2100, 1800, 1700 K at 7.5, 8.5, 9.5 and 10.5 days, respectively. The electron temperature is fixed to be 2000 K. The ejecta model assumes \(M_{\rm ej}=0.05M_{\odot}\), \(v_{\rm exp}=0.07c\), and \(n_{\rm e}=10^{7}\,{\rm cm^{-3}}(t/9.5{\rm d})^{-3}\). We use ionization fractions of (Y\({}^{+0}\), Y\({}^{+1}\), Y\({}^{+2}\))\({}^{+2}\)) = (0.25 0, 4.0, 25. 
0.1) for all the atomic species for simplicity. The composition is assumed to be the solar r-process abundance pattern with \(A\geq 88\) (figure 2). The shape of each emission line is assumed to be a Gaussian profile with a broadening parameter of 0.07c. The distance to the source is set to \(=40\) Mpc. The wavelength of [Te I] 2.10 \(\mu{\rm m}\) and [Te III] 2.10 \(\mu{\rm m}\) is shown as a vertical dashed line. Also shown as vertical lines are possibly strong emission lines at 10.5 day. The gray shaded vertical regions depict the wavelength ranges between the atmospheric windows. The wavelength of [Te I] 2.10 \(\mu{\rm m}\) is shown with an offset of +0.04 \(\mu{\rm m}\). radiative decay rate, and the fraction of TeV III ions in the \({}^{3}\)P\({}_{1}\) level, respectively, and \(N\)(Te III) is the total number of Te III ions in the ejecta (see equations (1) and (2) in Hotokezaka et al. 2022, for the formula of M1 transition probabilities). The observed flux at 2.1 \(\mu\)m after subtracting the underlying continuum is \(\sim 0.1\) mJy, corresponding to the observed line luminosity of \(L_{\rm obs,line}\sim 2\cdot 10^{39}\) erg s\({}^{-1}\) with \(D=40\) Mpc. Equation (3) leads to a total mass of Te III: \[M(\mbox{Te III})\sim 10^{-3}\,M_{\odot}\left(\frac{f_{1}}{0.1}\right)^{-1}\, \left(\frac{L_{\rm obs,line}}{2\cdot 10^{39}\mbox{ erg s${}^{-1}$}}\right)\,. \tag{4}\] Note that the electron density at \(t\sim 10.5\) day may be comparable to the critical density of Te III \({}^{3}\)P\({}_{1}\)(Madonna et al., 2018), and therefore, the level fraction \(f_{1}\) is comparable to or slightly less than that expected from the thermal distribution, i.e., \(f_{1}\propto 0.1\) in thermal equilibrium at \(T_{e}=2000\) K. Given the total ejecta mass of \(\sim 0.05M_{\odot}\), we estimate that the mass fraction of Te is greater than a few per cent. We now turn to the comparison of the observed spectra with a synthetic spectral model. The synthetic spectrum is composed of fine structure emission lines and a continuum component, where the continuum emission is approximated by blackbody radiation. The blackbody temperature and radius at a given epoch are determined such that the synthetic spectrum roughly agrees with the observed one at the near IR region \(\leq 2\,\mu\)m. The emission line spectrum is computed by the one-zone modeling presented in Hotokezaka et al. (2022), where the energy level populations are solved by balance between collision and radiative decay for a given electron density, temperature, and ionization state. We use the collision strengths of the fine structure transitions of the ground term of Te III derived by Madonna et al. (2018). The collision strengths of other elements that are relevant for the nebular modeling at \(\lambda\lesssim 3.5\,\mu\)m are obtained by using an atomic structure code Hulllac (Bar-Shalom et al., 2001, see also Hotokezaka et al., 2022) and the M1 line list is constructed by using the NIST database (Kramida et al., 2021) and the LS selection rules with the single-configuration approximation (Hotokezaka et al. in prep). Note that the wavelengths and radiative transition rates of the M1 lines in the list are sufficiently accurate for our purpose. In the modeling, the ejecta composition is assumed to be the solar r-process abundance pattern with \(A\geq 88\) (figure 2), which is the same as the second and third peak model used in Hotokezaka et al. (2022). 
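Before moving on with the synthetic model, the order-of-magnitude estimate of Eqs. (3) and (4) can be reproduced with the short sketch below. The line luminosity (\(\approx 2\cdot 10^{39}\) erg s\({}^{-1}\)), \(A_{10}\approx 2\) s\({}^{-1}\), and \(f_{1}\approx 0.1\) follow the values quoted above; the physical constants and the mean Te atomic mass (\(\approx 128\) u) are approximate assumptions made only for this illustration.

```python
import math

# Approximate constants (CGS)
H_PLANCK = 6.626e-27   # erg s
C_LIGHT = 2.998e10     # cm / s
AMU = 1.661e-24        # g
M_SUN = 1.989e33       # g

def te3_mass_msun(line_lum=2e39, lam_um=2.10, A10=2.0, f1=0.1, atomic_mass=128.0):
    """Invert L ~ h*nu_10 * A_10 * f_1 * N(Te III) (Eq. 3) and convert the ion
    number N to a mass; atomic_mass ~ 128 u for Te is an assumed mean value."""
    h_nu = H_PLANCK * C_LIGHT / (lam_um * 1e-4)   # erg per 2.10-micron photon
    n_ions = line_lum / (h_nu * A10 * f1)         # total number of Te III ions
    return n_ions * atomic_mass * AMU / M_SUN

print(te3_mass_msun())   # ~1e-3 solar masses, consistent with Eq. (4)
```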
The model also assumes the electron temperature, \(T_{e}=2000\) K and the ionization fractions \((Y^{+0},Y^{+1},Y^{+2},Y^{>3})=(0.25,0.4,0.25,0.1)\)1. These quantities are assumed to be constant with time for simplicity. This choice of the ionization fraction is somewhat motivated by Hotokezaka et al. (2021), where the ionization fractions of Nd atoms in the kilonova nebular phase are studied. With these parameters, the ejecta mass of \(0.05M_{\odot}\) and the expansion velocity of \(0.07c\), [Te III] \(2.10\,\mu\)m is the strongest emission line among M1 transitions of all the heavy elements beyond the first r-process peak and the synthetic spectra can roughly reproduce the emission line structure around \(2.1\,\mu\)m. However, one should keep in mind that the ionization fraction varies among different atomic species. The ejecta mass of \(0.05M_{\odot}\) agrees with the ejecta mass estimated from the energy budget of the bolometric light curve (Waxman et al., 2019; Kasen and Barnes, 2019; Hotokezaka and Nakar, 2020). Footnote 1: We neglect the emission lines of ions in \(Y^{\geqslant 3}\). If this interpretation is correct, we expect that the \(2.1\,\mu\)m line remains at the later times while the continuum flux keeps declining. It is worth noting that Te III may produce another emission line at \(2.93\,\mu\)m arising from the transition between the first and second excited levels (\({}^{3}\)P\({}_{1}\)-\({}^{3}\)P\({}_{2}\)) at the later times because the electron temperature is expected to gradually increase with time (Hotokezaka et al., 2021; Pognan et al., 2022). Although this line may be hidden by several other lines of Os II, III, and Pd III, detecting the two lines of Te III in future events can provide solid confirmation of the Te III production in mergers. Furthermore, the ratio of these line fluxes can be used to diagnose the electron temperature. ## 3 Conclusion and Discussion The observed spectra of the kilonova AT 2017gfo exhibit a strong emission line at 2.1 \(\mu\)m. The emission line with the lack of an apparent blue-shifted absorption feature suggests that the emission feature is a forbidden line excited through electron collision. We attribute this line to the fine structure line, [Te III] \(2.10\,\mu\)m, which has also been detected in planetary nebulae (Madonna et al., 2018). Note that Te is one of the most abundant elements in the second r-process peak. We estimate that the mass of Te III is roughly \(10^{-3}M_{\odot}\) to account for the observed line flux. We compare the observed spectra with a synthetic model, where the spectrum is composed of fine structure emission lines and a continuum component approximated by blackbody radiation. The spectrum of fine structure emission lines is computed with the one-zone model presented in Hotokezaka et al. (2022). With the solar r-process abundance beyond the first r-process peak, \(T_{e}\sim 2000\) K, and \(Y^{+2}\sim 0.3\), we show that [Te III] \(2.10\,\mu\)m is the strongest emission line among M1 transitions of all the heavy elements around 10 days after merger. Our model agrees with the observed spectra for the ejecta of \(0.05M_{\odot}\) with the solar r-process abundance pattern with \(A\geq 88\), an expansion velocity of \(0.07c\), and electron temperature of \(\sim 2000\) K, and the ionization fraction of \(Y^{+2}\sim 0.3\). 
Because blackbody radiation may be a poor approximation to the continuum flux around \(2\,\mu\)m, we should keep in mind that the amount of Te III in our analysis may be affected by the continuum flux model. It is also interesting to note that the same abundance pattern can reproduce the _Spitzer_ \(4.5\,\mu\)m detection at 40 days (Villar et al., 2018; Kasliwal et al., 2022), in which the emission is attributed to a fine structure line of W III (Hotokezaka et al., 2022). Note that, if the lighter r-process elements are abundant, they are expected to produce emission lines around 2 \(\mu\)m such as [Kr III] 2.20 \(\mu\)m and [Se IV] 2.29 \(\mu\)m. However, the observed spectra peak at 2.1 \(\mu\)m, suggesting that these ions are less abundant compared to Te III in the line emitting region of the ejecta. Figure 2: Elemental mass fraction used in the synthetic modeling. The abundance pattern at 10 days is determined such that the final abundance pattern at \(\sim 5\) Gyr matches the solar r-process residual for \(88\leq A\leq 209\). Tellurium is the second most abundant element in this model. Our model does not include electric dipole (E1) lines, which may produce strong absorption and emission lines. Recently, Gillanders et al. (2023) suggested that the 2.1 \(\mu\)m feature may be composed of two lines and that an E1 line of Ce III is the best candidate for producing this feature. Although we cannot quantify the contamination of E1 lines to the 2.1 \(\mu\)m feature, we emphasize that the M1 emission of Te can account for the observed line flux with reasonable parameters. To verify this hypothesis we need spectral modeling with E1 lines. We also note that the M1 lines in our list cannot account for the observed feature at 1.6 \(\mu\)m. The flux in this line declines with time as the continuum flux declines, indicating that this emission feature may be produced by E1 lines. Interestingly, Domoto et al. (2022) show that Ce III has several strong E1 lines around 1.6 \(\mu\)m. From the early blue emission in the photospheric phase, it is suggested that the emission is dominated by ejecta composed of light r-process elements, in order to avoid significant absorption in the optical band by lanthanides. Furthermore, the analyses of the kilonova spectra in the photospheric phase lead to a similar conclusion. The absorption feature around 0.8 \(\mu\)m is likely caused by one of the light r-process elements, Sr (\(Z=38\)), or even He (Watson et al., 2019; Gillanders et al., 2022; Tarumi et al., 2023). Domoto et al. (2022) propose that La (\(Z=57\)) and Ce (\(Z=58\)) produce the absorption lines around 1.2 and 1.5 \(\mu\)m, respectively. But the abundances of La and Ce inferred from the spectral analysis are lower than the solar r-process residuals by a factor of \(\sim 10\). These indicate that the outer part of the ejecta (\(v\gtrsim 0.2c\)) is predominantly composed of light r-process elements. In contrast to the early emission, our analysis implies that heavier elements, i.e., the second r-process peak, are likely more abundant in the slower part of the ejecta. In order to obtain better constraints on the elemental abundances and ejecta parameters, the spectral modeling should be improved by developing non-LTE radiative transfer models (e.g., Pognan et al., 2022) and by improving atomic data such as the radiative transition rates (e.g., Gaigalas et al., 2019), collision strengths, and recombination rate coefficients.
For future kilonova events, spectroscopic observations with the JWST as well as ground-based telescopes will be useful to identify more elements in the nebular spectra over a wider wavelength range. ## Acknowledgments We thank Nanae Domoto and Yuta Tarumi for useful discussion. This research was supported by JST FOREST Program (Grant Number JPMJFR212Y, JPMJFR2136), NIFS Collaborative Research Program (NIFS22KIIF005), the JSPS Grant-in-Aid for Scientific Research (19H00694, 20H00158, 21H04997, 20K14513, 20H05639, 22JJ22810). ## Data Availability The data presented in this article will be shared on request to the corresponding author.
2305.08575
Fission and fusion of heavy nuclei induced by the passage of a radiation-mediated shock in BNS mergers
We compute the structure of a Newtonian, multi-ion radiation-mediated shock (RMS) for different compositions anticipated in various stellar explosions. We use a multifluid RMS model that incorporates electrostatic coupling between the different plasma constituents as well as Coulomb friction in a self-consistent manner, and approximates the effect of pair creation and the presence of free neutrons in the shock upstream on the shock structure. We find that under certain conditions a significant velocity separation is developed between different ions in the shock downstream and demonstrate that in fast enough shocks ion-ion collisions may trigger fusion and fission events at a relatively high rate. Our analysis ignores anomalous coupling through plasma microturbulence, that might reduce the velocity spread downstream below the activation energy for nuclear reactions. A rough estimate of the scale separation in RMS suggests that for shocks propagating in BNS merger ejecta the anomalous coupling length may exceed the radiation length, allowing a considerable composition change behind the shock via inelastic collisions of $\alpha$ particles with heavy elements at shock velocities $\beta_u\gtrsim0.25$. A sufficient abundance of free neutrons in the shock upstream, as expected during the first second after the merger, is also expected to alter the ejecta composition through neutron capture downstream. The resultant change in the composition profile may affect the properties of the early kilonova emission. The generation of microturbulence due to velocity separation can also give rise to particle acceleration that might alter the breakout signal in supernovae and other systems.
Alon Granot, Amir Levinson, Ehud Nakar
2023-05-15T12:02:00Z
http://arxiv.org/abs/2305.08575v2
Fission and fusion of heavy nuclei induced by the passage of a radiation-mediated shock in BNS mergers ###### Abstract We compute the structure of a Newtonian, multi-ion radiation-mediated shock for different compositions anticipated in various stellar explosions, including supernovae, gamma-ray bursts, and binary neutron star mergers, using a multi-fluid RMS model that incorporates a self-consistent treatment of electrostatic coupling between the different plasma constituents. We find a significant velocity separation between ions having different charge-to-mass ratios in the immediate shock downstream and demonstrate that in fast enough shocks ion-ion collisions can trigger fusion and fission events at a relatively large rate. Our analysis does not take into account potential kinetic effects, specifically, anomalous coupling through plasma microturbulence, that can significantly reduce the velocity spread downstream, below the activation energy for nuclear reactions. A rough estimate of the scale separation in RMS suggests that for shocks propagating in BNS merger ejecta, the anomalous coupling length may exceed the radiation length, allowing a considerable composition change behind the shock via inelastic collisions of \(\alpha\) particles with heavy elements at shock velocities \(\beta_{u}\gtrsim 0.2\). Moreover, a sufficient abundance of free neutrons upstream of the shock can also trigger fission through neutron capture reactions downstream. The resultant change in the composition profile may affect the properties of the early kilonova emission. The implications for other exploding systems are also briefly discussed. keywords: Transients - Shock Waves - Plasmas - Nucleosynthesis ## 1 Introduction Radiation mediated shocks (RMS) are an inherent feature of essentially all (strong) stellar explosions (e.g., various types of supernovae, low luminosity GRBs, regular GRBs and binary neutron star mergers). They dictate the properties of the early electromagnetic emission released in the explosion during the breakout of the shock from the opaque envelope enshrouding the source, the detection of which is a primary focus of current and upcoming transient surveys (for recent reviews see Waxman and Katz, 2017; Levinson and Nakar, 2020). The RMS structure and dynamics depend on the progenitor type, the explosion energy and the explosion geometry. In particular, the shock velocity at breakout can range from sub-relativistic to mildly relativistic, and in extreme cases even ultra-relativistic. The gamma-ray flash GRB 170817A that accompanied the gravitational wave signal GW 170817 (for reviews see Nakar, 2020; Margutti and Chornock, 2021) is a plausible example of a shock breakout signal (e.g., Kasliwal et al., 2017; Gottlieb et al., 2018; Beloborodov et al., 2020); the RMS in this source was most likely driven by the interaction of the relativistic jet expelled by the compact remnant and the merger ejecta (Nakar et al., 2018), but a shock with a much wider opening angle is also a viable possibility (Beloborodov et al., 2020). While the origin of GRB 170817A is still debatable, the presence of a relativistic jet in this system has been confirmed by VLBI observations (Mooley et al., 2018). Consequently, if this system represents a prototypical BNs merger evolution, the conclusion that a fast shock must cross through at least part of the ejecta at early times seems unavoidable. 
In what follows we demonstrate that the propagation of the RMS through the merger ejecta can significantly alter the composition profile of r-process material behind the shock and, potentially, the kilonova emission if the change in composition affects the opacity and/or the radioactive energy deposition in the ejecta. A key issue in RMS theory is how the different plasma constituents are coupled. Since the radiation force acts solely on electrons and positrons, it must be mediated to the ions by some other means. In a sub-relativistic, single-ion RMS this is accomplished through the generation of an electrostatic field, owing to a tiny charge separation of electrons and ions. However, electrostatic coupling fails in relativistic RMS (RRMS), in which \(e^{\pm}\) pairs are overabundant (Levinson, 2020; Vanthieghem et al., 2022), and in RMS composed of multi-ion species with different charge-to-mass ratios (Derishev, 2018). It has been shown recently (Vanthieghem et al., 2022) that in an unmagnetized single-ion RRMS, the dominant coupling mechanism is plasma microturbulence, generated by a current filamentation instability driven by the relative drift between the ions and the pairs. The presence of a strong enough (transverse) magnetic field in the upstream flow can give rise to magnetic coupling of pairs and ions which completely suppresses the instability (Mahlmann et al., 2023). The situation can be vastly different in multi-ion RMS. First, the deceleration rate of ions inside the shock depends on the charge-to mass ratio, and since the charge conservation condition in a multi-ion plasma is degenerate a large velocity separation between the different ions in the post-deceleration zone is, in principle, allowed (even in sub-relativistic RMS). Second, since the gyroradius of an ion of mass \(Am_{p}\) is larger by a factor \(Am_{p}/Zm_{e}\) than that of an electron (or positron), much stronger magnetic fields may be needed to couple all ions. Such strong fields may not be present in most relevant systems. Third, the coupling of different ions by plasma microturbulence would require generation of wave modes different than in a single-ion RRMS. In this paper, we construct a semi-analytic, multi-ion RMS model and use it to compute the structure of a shock propagating in the expanding ejecta of a BNS merger. We find a significant velocity separation of different r-process isotopes just downstream of the shock transition layer, and show that collisions of the different ion beams can induce nuclear transmutations in regions where the shock velocity exceeds the corresponding activation barriers, provided that anomalous friction is not too effective. For an isotope of mass number \(A\) and velocity separation \(\Delta\beta\), the center-of-mass collision energy is approximately \(E_{COM}\approx Am_{p}c^{2}\Delta\beta^{2}/2\sim 1(A/100)(\Delta\beta/0.1)^{2}\) GeV (see Eq. 12). The Coulomb barrier of colliding nuclei with mass numbers \(A_{1},A_{2}\) and atomic numbers \(Z_{1},Z_{2}\) is approximately \(E_{b}\approx 0.92Z_{1}Z_{2}/(A_{1}^{1/3}+A_{2}^{1/3})\) MeV. For the r-process elements contained in the ejecta (\(A\sim 100\)) it lies in the range \(100-300\) MeV, and is considerably lower for collision of \(\alpha\) particles with the heavy nuclei. Thus, even a separation of \(\Delta\beta\approx 0.03\) is likely sufficient to induce nuclear transmutations. 
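These introductory estimates are simple enough to check directly; the snippet below evaluates the two formulas quoted above for a few illustrative ion pairs (the specific nuclei chosen are examples, not cases analyzed in the paper).

```python
# Quick check of the centre-of-mass energy and Coulomb-barrier estimates quoted above.
def E_com_MeV(A, dbeta):
    """E_COM ~ A m_p c^2 (dbeta)^2 / 2 for two comparable nuclei of mass number A."""
    return 0.5 * A * 938.272 * dbeta**2

def coulomb_barrier_MeV(Z1, A1, Z2, A2):
    """E_b ~ 0.92 Z1 Z2 / (A1^(1/3) + A2^(1/3)) MeV."""
    return 0.92 * Z1 * Z2 / (A1**(1/3) + A2**(1/3))

print(E_com_MeV(100, 0.1))                    # a few hundred MeV for A~100, dbeta~0.1
print(coulomb_barrier_MeV(44, 100, 44, 100))  # ~190 MeV for two A~100 nuclei
print(coulomb_barrier_MeV(2, 4, 60, 150))     # much lower for an alpha on a heavy nucleus
```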
Moreover, the presence of sufficiently abundant free neutrons in the shock upstream should lead to fission through the capture of free neutrons, which are not decelerated by the shock, by neutron-rich isotopes downstream of the shock. The activation energy for neutron-induced fission ranges from practically zero for \({}^{235}U\) to a peak of about \(40\) MeV for elements of mass number \(A\approx 100\). Consequently, a considerable impact on the abundance evolution of r-process elements, by neutron capture and/or inelastic collisions of heavy nuclei, is anticipated prior to the kilonova emission in regions where the shock velocity \(\gtrsim 0.1c\). These gross estimates are supported by detailed calculations presented below. The plan of the paper is as follows: In Sec. 2 we introduce the Multi-ion RMS model. In Sec. 3 we provide analytic estimates of ion-ion collision energies and collision frequencies. In Sec. 4 we present numerical solutions of the shock equations for different compositions of the upstream plasma. In Sec. 5 we modify the model to include a simplified interionic friction force. In Sec. 6 we discuss the applications to BNS mergers. A detailed derivation of the shock equations can be found in the appendix. ## 2 The multi-ion shock model We construct a semi-analytic model for a Newtonian RMS that propagates in a neutral plasma consisting of electrons and a collection of ions having different charge-to-mass ratios. For simplicity, the shock is assumed to be infinite, planar and in a steady state. Adopting the approach of Blandford and Payne (1981a); Blandford and Payne (1981b), we employ the diffusion approximation to compute the transfer of radiation through the shock. A comparison of analytic solutions with full Monte-Carlo simulations (Ito et al., 2020) indicates that the diffusion approximation is quite accurate even at shock velocities \(\beta_{u}\lesssim 0.3\) or so, justifying this treatment. In contrast to the single-fluid approach invoked in all previous Newtonian RMS models, in which all plasma constituents are assumed to be tightly coupled, here we self-consistently compute the electrostatic coupling between the electrons and ions, imposed by the charge separation induced by the relative drift between the different multi-fluid components. This is accomplished by solving the energy and momentum equations of the radiation and the multi-fluid plasma, together with Maxwell's equations, taking into account the electrostatic force acting on the charged fluids. Our treatment is similar to that adopted in Levinson (2020) for the single-ion relativistic RMS, but with modifications necessary for the inclusion of multi-ion species, and with the beaming approximation for the radiative transfer replaced by the diffusion approximation. To simplify the analysis, we neglect pair creation and nuclear transmutations, although as shown below fission and fusion might be important in the immediate downstream of a fast shock. We emphasize that our main goal here is not to present detailed calculations of the composition profile behind the shock, but merely to demonstrate that a substantial change in composition is likely to occur. The neglect of pair production is justified for most exploding systems, but not for the extreme densities anticipated in BNS merger ejecta (see Sec. 6 for a detailed discussion). 
We tend to believe that inclusion of pair creation, while considerably complicating the solutions, will not alter significantly the essence of the results obtained with the simplified model constructed below. We intend to incorporate pair production into our model in a future publication. Our analysis also ignores potential generation of plasma waves by ion beam instabilities. The resultant plasma microturbulence can provide an additional coupling mechanism on microscopic scales, that might alter the ion velocity distribution in the immediate downstream. To gain some insight into the effect of such anomalous coupling, we incorporate, in Sec. 5, a phenomenological interionic friction model into the shock equations. In Sec. 6 we estimate the ratio between the radiation and kinetic scales, and discuss the implications for the anomalous coupling length. Based on this estimate, we speculate that for RMS in BNS mergers, anomalous coupling may not alter significantly the results obtained in Sec. 4.3 in the absence of anomalous friction. For other exploding systems the effect of plasma instabilities is unclear. ### Governing equations We solve the shock equations in the rest frame of the shock, in which the flow is steady. We choose a coordinate system such that the upstream flow moves in the positive \(\hat{x}\) direction. All fluid quantities are henceforth measured in the shock frame, unless otherwise stated. Let \(n_{e}\) denote the electron density and \(\beta_{e}\) its velocity (in units of \(c\)), and \(n_{\alpha}\), \(\beta_{\alpha}\), \(Z_{\alpha}\), \(A_{\alpha}\) the density, velocity, atomic number and mass number, respectively, of ion species \(\alpha\) (here, \(\alpha=\) He, Au, etc.). The charge-to-mass ratio of the \(\alpha\) ion is then \(Z_{\alpha}/A_{\alpha}\). Since our analysis ignores pair production and nuclear transmutation (although, as explained below, they might be important downstream), all particle fluxes must be conserved across the shock. In particular, \(j_{e}\equiv n_{e}\beta_{e}=n_{e,u}\beta_{u}\) and \(j_{\alpha}\equiv n_{\alpha}\beta_{\alpha}=n_{\alpha,u}\beta_{u}\), where subscript \(u\) labels fluid quantities far upstream of the shock. For the upstream conditions anticipated (at early times) in BNS merger ejecta all ions are likely to be fully ionized. Hence, \(Z_{\alpha}\) is a constant for all \(\alpha\). Charge neutrality of the upstream plasma implies \(\sum_{\alpha}Z_{\alpha}n_{\alpha,u}-n_{e,u}=0\), from which one obtains the relation \[\sum_{\alpha}Z_{\alpha}j_{\alpha}-j_{e}=0, \tag{1}\] that holds everywhere. The radiation force inside the shock acts solely on the electrons. This leads to a relative drift between the electrons and the different ion species which, in turn, induces charge separation, \(\rho_{e}=e(\sum_{\alpha}Z_{\alpha}n_{\alpha}-n_{e})\neq 0\), and the consequent generation of an electrostatic field, \(\mathbf{E}=E(x)\hat{x}\). The change of this electric field across the shock is governed by Gauss' law: \[\nabla\cdot\mathbf{E}=\frac{dE}{dx}=4\pi\rho_{e}. \tag{2}\] The last equation can be rendered dimensionless by transforming to the coordinate \(d\tau=\sigma_{T}n_{e}dx\), here \(\sigma_{T}\) is the Thomson cross-section, and normalizing the electric field \(E\) by the fiducial field \(E_{0}=m_{e}c^{2}j_{e}\sigma_{T}/e\), viz., \(\tilde{E}=E/E_{0}=eE/m_{e}c^{2}j_{e}\sigma_{T}\).
This yields \[\frac{d\tilde{E}}{d\tau}=\chi\left(\sum_{\alpha}\frac{Z_{\alpha}j_{\alpha}\beta_{e}}{j_{e}\beta_{\alpha}}-1\right), \tag{3}\] with \[\chi=\frac{4\pi e}{\sigma_{T}E_{0}}\approx 10^{12}\left(\frac{n_{e,u}}{10^{25}\,\text{cm}^{-3}}\right)^{-1}\beta_{u}^{-1}. \tag{4}\] The density is normalized here to the fiducial density anticipated in BNS merger ejecta about a second after the merger (Eq. 19). In SNe and GRBs the anticipated density is much smaller, \(n_{e,u}\lesssim 10^{15}\,\text{cm}^{-3}\). The energy and momentum equations of the multi-fluid system are derived in appendix A under the assumption that the electron and ion fluids are cold. This simplifying assumption is justified by virtue of the fact that the pressure inside the shock is completely dominated by the diffusing radiation. In the dimensionless form, the reduced set of equations read: \[\frac{d}{d\tau}(\beta_{e}+\pi_{\gamma})=-\tilde{E}, \tag{5}\] \[\frac{d}{d\tau}\beta_{\alpha}=\mu\frac{Z_{\alpha}\beta_{e}}{A_{\alpha}\beta_{\alpha}}\tilde{E}, \tag{6}\] \[\frac{d}{d\tau}\pi_{\gamma}=\frac{1}{\beta_{e}}\frac{d}{d\tau}(4\pi_{\gamma}\beta_{e}-\frac{d}{d\tau}\pi_{\gamma}), \tag{7}\] where \(\pi_{\gamma}\) is the normalized radiation pressure, \(\pi_{\gamma}=p_{\gamma}/m_{e}c^{2}j_{e}\), and \(\mu=m_{e}/m_{p}\) is the electron-to-proton mass ratio. Equations (3)-(7) form a closed set that can be solved once the values of \(\beta_{u}\), \(j_{e}\), \(j_{\alpha}\) and \(Z_{\alpha}/A_{\alpha}\) are specified. The far upstream value of the electric field is \(\tilde{E}(x\rightarrow-\infty)=0\). ### Numerical integration Solutions for the multi-fluid RMS structure are obtained in section 4 by numerically solving Eqs. (3)-(7), assuming a cold, neutral plasma in the far upstream. We note that the shock equations, Eqs. (3)-(7), are invariant under translations, hence, the location of the origin can be chosen arbitrarily. For convenience, we start the integration in all runs at \(\tau=0\), with \(\beta_{e,u}=\beta_{\alpha,u}=\beta_{u}\), \(\pi_{\gamma,u}=0.001\), \(\tilde{E}=0\), and \(\tilde{j}_{\gamma,u}=0\) as initial values (at \(\tau=0\)), where \(\tilde{j}_{\gamma}\) is the normalized radiation energy flux defined in appendix A. With this choice the radiation force at \(\tau=0\) is given by \[\left.\frac{d\pi_{\gamma}}{d\tau}\right|_{\tau=0}=4\pi_{\gamma,u}\beta_{u}=0.004\beta_{u}. \tag{8}\] We verified that as long as \(\pi_{\gamma,u}\) is small enough, the solution converges, except for a shift in the location of the shock which, as explained above, is arbitrary. Since only the ratio \(j_{\alpha}/j_{e}=n_{\alpha,u}/n_{e,u}\) appears in the above equations, we find it convenient to normalize the upstream densities to the electron density \(n_{e,u}\), and define \(\tilde{j}_{\alpha}=j_{\alpha}/j_{e}\). To save computing time, we use \(\chi=10^{6}\) instead of the more realistic value given in Eq. (4). We checked that the solution is highly insensitive to the value of \(\chi\), provided it is not too small (\(>10^{3}\) roughly).
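For concreteness, the closed set (3)-(7) can be integrated with a standard stiff ODE solver once the radiation energy flux is carried as a separate variable. The following is a minimal illustrative sketch (not the code used in the paper): it assumes the auxiliary variable \(\tilde{j}_{\gamma}=4\pi_{\gamma}\beta_{e}-d\pi_{\gamma}/d\tau\), an \(H-He\) composition, and the reduced \(\chi=10^{6}\); the tolerances and integration range may need tuning.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of integrating Eqs. (3)-(7) for an H-He mixture.  The flux variable
# jg = 4*pi_g*beta_e - d(pi_g)/dtau turns Eq. (7) into two first-order ODEs.
mu, chi, beta_u = 1.0/1836.0, 1e6, 0.3
Z  = np.array([1.0, 2.0]);  A = np.array([1.0, 4.0])     # H, He
jt = np.array([6/7, 1/14])                                # normalized fluxes j_alpha/j_e

def rhs(tau, y):
    E, beta_e, pi_g, jg = y[0], y[1], y[2], y[3]
    beta_i  = y[4:]
    dpi_g   = 4.0 * pi_g * beta_e - jg                    # definition of jg
    dE      = chi * (np.sum(Z * jt * beta_e / beta_i) - 1.0)   # Eq. (3)
    dbeta_e = -E - dpi_g                                  # Eq. (5)
    djg     = beta_e * dpi_g                              # Eq. (7)
    dbeta_i = mu * Z * beta_e / (A * beta_i) * E          # Eq. (6)
    return np.concatenate(([dE, dbeta_e, dpi_g, djg], dbeta_i))

# far-upstream initial values of Sec. 2.2
y0  = np.concatenate(([0.0, beta_u, 1e-3, 0.0], beta_u * np.ones(2)))
sol = solve_ivp(rhs, (0.0, 40.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
print("downstream velocities (e, H, He):", sol.y[1, -1], sol.y[4, -1], sol.y[5, -1])
```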
## 3 Estimation of ion-ion collision energy and collision frequency A qualitative description of the multi-flow dynamics inside the shock is as follows: At the outset of the shock transition layer, the incoming electron fluid is being decelerated, and thus compressed, by the radiation force (the gradient of the radiation pressure in the diffusion limit). This creates charge separation which, in turn, induces an electrostatic field inside the shock. This electric field tends to decelerate the ions and accelerate the electrons, thereby providing coupling of the various plasma components. Charge neutrality is regained once the radiation pressure saturates (and the radiation force vanishes), completely suppressing the electric field. In the absence of any other coupling mechanism (e.g., plasma turbulence, see section 5 for further discussion), relative drifts between ions with different charge-to-mass ratio ensue in the post-deceleration zone (immediate downstream), depending on the value of \(Z_{\alpha}/A_{\alpha}\). The slowest ion in the immediate downstream, which we henceforth term 'agile', is the one with the largest \(Z/A\) ratio among all ions present in the upstream flow. While the full shock structure can only be found upon numerical integration of Eqs. (3)-(7), an expression for the immediate downstream velocity of each ion in terms of the electric potential across the shock can be obtained analytically by integrating Eq. (6) alone. This enables reasonably accurate estimates of the relative drift between the ions and the consequent collision energies. In what follows, we provide estimates of the relative drifts, collision energies and collision frequencies of ions in the immediate downstream. ### Ion velocity separation Since ions do not experience the radiation force in our model, the change in their energy across the shock must equal the work done by the electrostatic field. When normalized to the upstream baryon energy, \(m_{p}c^{2}\beta_{u}^{2}/2\), the latter reads: \[w_{E}\equiv\int_{-\infty}^{\infty}\frac{2eE}{m_{p}c^{2}\beta_{u}^{2}}dx. \tag{9}\] Integrating Eq. (6), an expression for the downstream terminal velocity of ion species \(\alpha\) is obtained in terms of \(w_{E}\): \[\beta_{\alpha,d}=\beta_{u}\sqrt{1+w_{E}\frac{Z_{\alpha}}{A_{\alpha}}}. \tag{10}\] The velocity difference between ion species \(\alpha\) and \(\alpha^{\prime}\) now reads: \[\Delta\beta_{\alpha,\alpha^{\prime}}=\beta_{u}\left[\sqrt{1+w_{E}\frac{Z_{\alpha}}{A_{\alpha}}}-\sqrt{1+w_{E}\frac{Z_{\alpha^{\prime}}}{A_{\alpha^{\prime}}}}\right]. \tag{11}\] This relation is exact, but the value of \(w_{E}\) is unknown. To estimate \(w_{E}\) we note that for the agile ion \(\beta_{agile,d}^{2}\ll\beta_{u}^{2}\). Adopting \(\beta_{agile,d}=0\) in Eq. (10) gives \(w_{E}\approx-\frac{A_{agile}}{Z_{agile}}\). Substituting the latter expression into Eq. (11), we can evaluate \(\Delta\beta_{\alpha,\alpha^{\prime}}\) for all ions once the ratio \(A_{agile}/Z_{agile}\) is specified. As an example, consider a mixture of hydrogen and helium only. The agile ion here is \(H\), for which \(A_{H}/Z_{H}=1\) and \(w_{E}\approx-A_{H}/Z_{H}=-1\), giving \(\Delta\beta_{He,H}\approx 0.17\left(\beta_{u}/0.3\right)\), for a shock velocity of \(\beta_{u}=0.3\). This result is in excellent agreement with the numerical solution derived in section 4 (see Fig. 1), for which \(\Delta\beta_{He,H}\approx 0.173\). In the case of heavy r-process composition (section 4.3), we find \(w_{E}\approx-A_{He}/Z_{He}\approx-2\) for the agile ion \(He\). This yields, for example, a velocity separation of \(\Delta\beta_{Zr,He}\approx 0.0998\left(\beta_{u}/0.3\right)\) between \(Zr\) (with \(A_{Zr}=91\)) and \(He\), again in good agreement with the numerical solution (\(\Delta\beta_{Zr,He}\approx 0.104\)) presented in Fig. 4.
We find good agreement in all other cases tested and conclude that Eq. (11) can be safely used to calculate relative drifts between ions. The above derivation suggests that the downstream conditions are highly insensitive to the upstream density of the agile ion. As will be shown in Sec. 4, this may be of utmost importance for astrophysical applications. To see this, note first that Eq. (10) yields the exact result \(w_{E}=-(A_{agile}/Z_{agile})\left[1-(\beta_{agile,d}/\beta_{u})^{2}\right]\), which depends on \(\beta_{agile,d}\) to second order. From Eq. (10) we also deduce that the downstream velocity of the non-agile ions, for which \(\beta_{\alpha,d}\gg\beta_{agile,d}\), depends on \(\beta_{agile,d}\) to the same order, and since the flux \(j_{\alpha}\) is conserved across the shock, the downstream density, \(n_{\alpha,d}=j_{\alpha}/\beta_{\alpha,d}\), also depends on \(\beta_{agile,d}\) to second order. From Eqs. (5) and (7) we find, upon neglecting the electron contribution to the energy budget: \[m_{p}c^{2}\beta_{\alpha}^{2}w_{E}=8e\int_{-\infty}^{\infty}E(x)\,\frac{\beta_{\alpha,d}}{\beta_{e}(x)}dx.\] Since to order \(O(\beta_{agile,d}/\beta_{u})^{2}\) the left hand side of this equation is constant, it implies that \(\beta_{e}(x)\) and, hence, \(\beta_{\alpha,d}\) and \(n_{e,d}\), must be preserved to the same order. The charge neutrality condition downstream, \(Z_{agile}n_{agile,d}=n_{e,d}-\sum_{\alpha\neq agile}Z_{\alpha}n_{\alpha,d}\), then implies that the downstream density of the agile ion, \(n_{agile,d}\), must also be preserved to this order and is, therefore, highly insensitive to the upstream density \(n_{agile,u}\). In other words, the compression ratio of the agile ion increases as its mass fraction upstream decreases in order to maintain \(n_{agile,d}\) at the level required for charge neutrality downstream. This argument applies provided the diffusion approximation remains valid. ### Collision energy In order to compute the collision energy in \(\alpha-\alpha^{\prime}\) collisions, it is convenient to transform to the center of momentum frame (COM). One then finds \[E_{COM}=\frac{1}{2}\frac{A_{\alpha}A_{\alpha^{\prime}}}{A_{\alpha}+A_{\alpha^{\prime}}}m_{p}c^{2}\Delta\beta_{\alpha,\alpha^{\prime}}^{2}, \tag{12}\] where \(\Delta\beta_{\alpha,\alpha^{\prime}}\) is given by Eq. (11). As an example, consider the \(H-He\) collision. With \(A_{H}=1,A_{He}=4\), Eq. (11) yields \(\Delta\beta_{H,He}=0.17(\beta_{u}/0.3)\), and from Eq. (12) we have \(E_{COM}=11.5(\beta_{u}/0.3)^{2}\) MeV. In general, we find that for light elements \(E_{COM}\sim(1-20)(\beta_{u}/0.3)^{2}\) MeV and for heavy elements \(E_{COM}\sim(10-300)(\beta_{u}/0.3)^{2}\) MeV. Thus, at shock velocities \(\beta_{u}\gtrsim 0.5\), the collision energy can exceed \(\sim 1\) GeV for certain combinations of ions. Coulomb barriers range between about 1 and 300 MeV for most elements. Therefore, ion-ion collisions behind a fast enough shock may conceivably cause significant composition change.
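The estimates of Eqs. (10)-(12) are straightforward to script; the sketch below reproduces the numbers quoted above, with \(w_{E}\approx-A_{agile}/Z_{agile}\) supplied by hand for each composition (the choice of example ion pairs is illustrative).

```python
import numpy as np

# Sketch of the velocity-separation and collision-energy estimates of Eqs. (10)-(12).
m_p_MeV = 938.272

def beta_d(beta_u, Z, A, w_E):
    """Eq. (10): downstream terminal velocity of an ion with charge Z, mass number A."""
    return beta_u * np.sqrt(max(1.0 + w_E * Z / A, 0.0))

def E_com_MeV(beta_u, Z1, A1, Z2, A2, w_E):
    """Eqs. (11)-(12): centre-of-momentum energy of the two colliding ion beams."""
    dbeta = beta_d(beta_u, Z1, A1, w_E) - beta_d(beta_u, Z2, A2, w_E)
    return 0.5 * A1 * A2 / (A1 + A2) * m_p_MeV * dbeta**2

beta_u = 0.3
# H-He mixture: the agile ion is H (Z/A = 1), so w_E ~ -1
print(E_com_MeV(beta_u, 2, 4, 1, 1, w_E=-1.0))    # ~11 MeV, cf. the text
# r-process ejecta: the agile ion is He (Z/A = 1/2), so w_E ~ -2
print(E_com_MeV(beta_u, 40, 91, 2, 4, w_E=-2.0))  # Zr on He, a few tens of MeV
```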
### Collision rate The cross-section for inelastic collisions of ion species \(\alpha\) and \(\alpha^{\prime}\) is given approximately by \[\sigma_{\alpha,\alpha^{\prime}}=\frac{8\pi}{3}\left[(A_{\alpha}^{1/3}+A_{\alpha^{\prime}}^{1/3})\,r_{p}\right]^{2}\approx 0.18\sigma_{T}(A_{\alpha}^{1/3}+A_{\alpha^{\prime}}^{1/3})^{2}, \tag{13}\] where \(r_{p}\sim 0.43\,r_{e}\sim 1.2\,\)fm, with \(r_{e}=\frac{e^{2}}{m_{e}c^{2}}\) being the classical electron radius, and \(\sigma_{T}=\frac{8\pi}{3}r_{e}^{2}\) the Thomson cross section (e.g., Bass, 1973; Kox et al., 1987). The corresponding collision frequency is given by \[\nu_{\alpha\alpha^{\prime}}=n_{\alpha^{\prime},d}\sigma_{\alpha,\alpha^{\prime}}c|\Delta\beta_{\alpha,\alpha^{\prime}}|. \tag{14}\] To obtain the total collision frequency, we sum over all \(\alpha-\alpha^{\prime}\) interactions with minimal energy \(E_{min}\). We take the Coulomb barrier between any two nuclei as the minimal value. \(E_{min}\) is obtained from Eq. (1) in Royer et al. (2021). The mean number of collisions over one Thomson length, \(l_{T}=(\sigma_{T}n_{e})^{-1}\), is: \[\Delta\tau_{\alpha}=\sum_{\alpha^{\prime},>E_{min}}\frac{\nu_{\alpha\alpha^{\prime}}l_{T}}{\beta_{\alpha,d}\,c}=\sum_{\alpha^{\prime},>E_{min}}\frac{n_{\alpha^{\prime},d}\,0.18\,\sigma_{T}(A_{\alpha}^{\frac{1}{3}}+A_{\alpha^{\prime}}^{\frac{1}{3}})^{2}}{n_{e}\sigma_{T}}\cdot\frac{|\Delta\beta_{\alpha,\alpha^{\prime}}|}{\beta_{\alpha,d}}\] \[=0.18\sum_{\alpha^{\prime},>E_{min}}\tilde{j}_{\alpha^{\prime}}\frac{\beta_{u}}{\beta_{\alpha^{\prime},d}}\cdot\frac{|\Delta\beta_{\alpha,\alpha^{\prime}}|}{\beta_{\alpha,d}}(A_{\alpha}^{\frac{1}{3}}+A_{\alpha^{\prime}}^{\frac{1}{3}})^{2}\] \[=0.18\,\beta_{u}\sum_{\alpha^{\prime},>E_{min}}\tilde{j}_{\alpha^{\prime}}\bigg{|}\frac{1}{\beta_{\alpha^{\prime},d}}-\frac{1}{\beta_{\alpha,d}}\bigg{|}\cdot(A_{\alpha}^{\frac{1}{3}}+A_{\alpha^{\prime}}^{\frac{1}{3}})^{2}. \tag{15}\] For the example presented in Fig. 1 of pure \(H-He\) composition (see Sec. 4.1 for details), Eq. (15) gives \(\tau_{He}\approx 6.4\,\big{(}\frac{\tilde{j}_{H}}{6/7}\big{)}\), implying all of the helium ions would interact with hydrogen ions over a length scale smaller than the shock width. On the other hand, for the hydrogen ions, we find \(\tau_{H}\approx 0.53\,\big{(}\frac{\tilde{j}_{He}}{1/14}\big{)}\). We emphasize that in practice the plasma downstream is expected to be unstable by virtue of the relative drift velocity between the various ion beams. This will likely lead to anomalous coupling by plasma turbulence, and the question is what is the ratio between the anomalous coupling length and the ion collision length? We discuss this issue in greater detail in section 5. ## 4 Numerical results In this section we present solutions of the shock equations, Eqs. (3)-(7), for different compositions of the upstream gas. To illustrate the basic properties of the solution we begin with a simple example of a pure hydrogen-helium (\(H-He\)) mixture. We then present solutions for solar composition (full and \(H\)-stripped), and finally for r-process elemental abundances anticipated in BNS merger ejecta. In all of these examples, the shock velocity is taken to be \(\beta_{u}=0.3\). The ion-ion collision energies and collision rates are calculated in the immediate downstream by employing the equations derived in Sec. 3.2.
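The collision depths quoted below can be estimated with a few lines implementing Eq. (15). The sketch assumes that the agile ion (H in the \(H-He\) case) remains locked to the electrons with a downstream velocity of roughly \(\beta_{u}/7\); that compression factor is an assumption of the sketch, not a value quoted in the paper, and the \(E_{min}\) cut of Eq. (15) is omitted since the single H-He pair is above its Coulomb barrier.

```python
import numpy as np

# Sketch of the collision-depth estimate, Eq. (15), for the H-He example.
beta_u, w_E = 0.3, -1.0                     # agile ion H, A/Z = 1
Z  = np.array([1.0, 2.0])                   # H, He
A  = np.array([1.0, 4.0])
jt = np.array([6/7, 1/14])                  # normalized upstream fluxes
beta_d = beta_u * np.sqrt(np.maximum(1.0 + w_E * Z / A, 0.0))   # Eq. (10)
beta_d[0] = beta_u / 7.0                    # assumed: H stays coupled to the electrons

def collision_depth(i):
    """Mean number of collisions of species i per Thomson length, Eq. (15)."""
    tau = 0.0
    for j in range(len(A)):
        if j != i:
            tau += 0.18 * beta_u * jt[j] * abs(1/beta_d[j] - 1/beta_d[i]) \
                   * (A[i]**(1/3) + A[j]**(1/3))**2
    return tau

print("tau_He ~", collision_depth(1))       # compare with ~6.4 in the text
print("tau_H  ~", collision_depth(0))       # compare with ~0.53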
### \(H-He\) mixture In our first example we compute the structure of a shock propagating at a velocity \(\beta_{u}=0.3\) in a medium composed of an \(H-He\) mixture, with relative abundances of \(X=0.75\), \(Y=0.25\). With our normalization, this translates to the far upstream values \(\tilde{j}_{H,u}=\frac{6}{7}\) and \(\tilde{j}_{He,u}=\frac{1}{14}\) of \(H\) and \(He\) fluxes, respectively. The resultant shock structure is depicted in Fig. 1. As seen, the small radiation force invoked at \(\tau=0\), Eq. (8), leads initially to the deceleration of the electrons, and the consequent generation of a negative electric field (magenta line in the inset) by the resultant charge separation. This electric field, in turn, decelerates the ions (Eq. 6), and counteracts the radiation force acting on the electrons (Eq. 5). As a result, the \(H\) ions remain tightly coupled to the electrons, owing to their larger charge-to-mass ratio, whereas the \(He\) ions quickly decouple. Charge neutrality is eventually regained (at \(\tau\approx 17\)), whereupon the net force acting on the system (the sum of electric and radiation forces) vanishes, and the plasma continues to stream undisturbed. The relative drift velocity (marked by the dotted black line) approaches \(\Delta\beta_{H,He}=0.173\) in the post deceleration zone, and the corresponding collision energy is \(\sim 12.16\) MeV. ### Solar abundances In our second example we use the solar elemental abundances given in Lodders (2019) (see Table 4 there). Figure 2 shows the velocity profiles of the electrons and the different ions (a), a color chart of ion-ion collision energies (b), and the collision depth of all ions (c). In similarity to the previous example, the hydrogen fluid remains tightly coupled to the electron fluid throughout the shock, but all other ions decouple and form a separate component, with a velocity spread considerably smaller than the relative drift between the \(H\) ions and all other ions. The collision energies range between practically zero and 20 MeV, for the shock velocity adopted, and are dominated by collisions of hydrogen with heavy ions. Above roughly 5-10 MeV, proton capture reactions for many elements (e.g. C, N and O) are quite rapid, with cross sections in excess of 100 mb (Soppera et al., 2012). We thus anticipate such reactions to ensue at shock velocities \(\beta_{u}\gtrsim 0.2\), provided that the relative velocity drifts are maintained for distances comparable to the shock width. While there is no detectable signature of H in the spectra of type Ib/c SNe, the progenitors of these SNe might nonetheless contain a small fraction of H (Hachinger et al., 2012). In fact, a study of the single confirmed type Ib progenitor that was detected in pre-explosion images (SN iPTF13bvn) suggests that H constitutes almost 1% of the progenitor's mass (Gilkis & Arcavi, 2022). Most of the H mass resides in the outer part of the envelope, where its fraction can reach tens of percent. We therefore considered also cases with relative H abundance substantially smaller than solar, specifically, in the range \(10^{-5}-1\) solar. We find that over this range the ratio between the downstream densities of H and the remaining elements is practically independent of the mass fraction of H ahead of the shock. As explained at the end of Sec. 3.1, this is a consequence of the charge neutrality condition and the fact that the agile ion is most affected by the electrostatic field inside the shock.
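For reference, the normalized fluxes used throughout this section follow from the upstream mass fractions by a standard conversion (assumed here for illustration, and reproducing the \(H-He\) values quoted above): the ion number per unit mass is \(X_{\alpha}/A_{\alpha}\), and dividing by the electron number fixes \(\sum_{\alpha}Z_{\alpha}\tilde{j}_{\alpha}=1\), as required by Eq. (1).

```python
import numpy as np

# Convert upstream mass fractions to normalized fluxes tilde_j = j_alpha / j_e.
def normalized_fluxes(X, Z, A):
    n = np.asarray(X, dtype=float) / np.asarray(A, dtype=float)  # ions per unit mass
    return n / np.sum(np.asarray(Z) * n)                         # divide by electron number

# H-He mixture with X = 0.75, Y = 0.25  ->  6/7 and 1/14
print(normalized_fluxes([0.75, 0.25], [1, 2], [1, 4]))
```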
In type IIb SNe, velocities of \(0.1-0.15c\) may be obtained at the ejecta front (Chevalier & Soderberg, 2010), which is marginal for proton capture. However, considerably higher shock velocities are conceivable in type Ib/c SNe, and if these indeed contain H ions, proton capture reactions might be important in those systems, and can alter the composition in regions where the shock is fast enough. In the complete absence of hydrogen ions, as may be the case in some \(H-stripped\) massive stars, the charge-to-mass ratio of all ions falls in the range \(0.4\lesssim Z/A\lesssim 0.5\). There is a group of agile ions (those with \(Z/A=0.5\)) that develops a large relative drift with respect to other ions. We find that the absence of hydrogen leads to the generation of a much stronger electric field over a longer length scale compared with the previous case, resulting in a larger velocity separation between all other ion species (see Fig. 3a). The collision energies can exceed 100 MeV for certain channels that involve collisions between light and heavy ions (Fig. 3b). For lighter ions the collision energies are modest, between a few and a few tens of MeV. However, for the adopted shock velocity the collision energy exceeds the Coulomb barrier in only a few cases (Fig. 3c). Hence, a considerable effect on the composition seems less likely. ### r-process composition in BNS mergers As a final (and most interesting) example we consider a shock propagating in BNS merger ejecta consisting of r-process material. The composition of various fluid elements in the ejecta of a BNS merger depends on their initial conditions, mostly the initial electron fraction and the expansion velocity. However, all fluid elements have in common a dominant fraction of r-process material and a non-negligible fraction of He. Since He is the most agile element expected in the ejecta, its presence has a significant effect on the velocity spread. Typical He mass fractions found in various simulations range between \(10^{-4}\) and \(0.1\) (Tarumi et al., 2023 and references therein). We calculated the velocity spread for far-upstream compositions with these two extreme values, whereas for the r-process elements we adopt the abundances given by Goriely (1999). The results are summarized in Fig. 4, where we use the mass number \(A\) instead of \(Z\) to label the different ions, since different isotopes of the same elements are present in the system. The He mass fraction in this example is \(X_{He}=10^{-2}\). As in the case of solar abundance with varying \(X_{H}\), we find that in the range \(10^{-4}<X_{He}<0.1\) that we explored, the ratio between the density of He and the remaining elements in the immediate downstream is independent of the mass fraction of He ahead of the shock. Nearly all elements would experience collisions with He with COM energies well above the Coulomb barrier. This should give rise to nuclear transmutations of many isotopes at shock velocities \(\beta_{u}\gtrsim 0.2\), although the inelastic cross section for nuclear reactions might be somewhat smaller than the total cross section adopted to estimate the collision depth in Fig. 4c. ## 5 Inclusion of interionic friction The velocity distribution of the multi-ion plasma established in the post deceleration zone (Figs. 1-4) is expected to be prone to various beam instabilities.
This would result in the generation of plasma microturbulence that can transfer momentum between different ion beams via ion scattering off plasma waves, thereby driving the downstream plasma towards equipartition. The anomalous coupling length is expected to depend on the threshold drift velocity for the onset of the instability, on the saturation level of the turbulence, and conceivably other factors. Moreover, the continuous decrease of the relative drift velocity between different ion species will induce charge separation that will affect the electrostatic field. We note that conversion of the dissipation energy to radiation can only occur over the radiation length scale. Consequently, if the anomalous coupling length is much smaller than the Thomson length, dissipation via anomalous friction should first lead to ion heating before the energy will be converted to radiation. (On a global scale this would appear as formation of collisionless subshocks.) Furthermore, if the growth of the instability occurs well after decoupling, the ion temperature should be high enough to allow inelastic ion-ion collisions by random motions. On the other hand, the electrostatic field induced by the interionic friction may prevent the early decoupling of ions with small \(Z/A\) seen in the solutions presented in Sec. 4, thereby suppressing the rate of inelastic ion-ion collisions. What would actually happen depends, ultimately, on the threshold velocity of the instability and the scale separation. In Sec. 6 we provide a rough estimate of the scale separation in RMS, and conclude that in BNS mergers the anomalous coupling length is likely to be comparable to, or even larger than, the radiation length, whereas in other systems it is most likely much smaller. Thus, the situation might be different in different types of systems. To gain insight into the effect of interionic friction, we constructed a (simplified) phenomenological friction model. The model ignores any threshold effects, and doesn't take into account ion heating, thus, it is rather limited. Nonetheless, it elucidates some basic features that can guide future analysis. Figure 1: Shock structure solution obtained for a pure hydrogen-helium mixture. The solid blue and red lines represent the velocity profiles of \(H\) and \(He\), respectively, as indicated, and the black dashed line the velocity profile of the electrons. The black dotted line delineates the relative drift velocity \(\Delta\beta_{He,H}=\beta_{He}-\beta_{H}\). The inset shows the electric field (red line), radiation force (blue dashed line) and radiation pressure (blue solid line). As seen, the relative drift velocity approaches \(\Delta\beta_{He,H}=0.173\) in the post-deceleration zone, at \(\tau=17\) roughly, where the electric field and radiation force vanish. Figure 3: Same as Fig. 2 but for \(H-stripped\) solar abundances. Figure 2: Solutions for a shock with a full solar composition. Shown are: (a) Velocity profiles of electrons and ions. The large difference in \(Z/A\) between the hydrogen and all other ions creates a clear distinction between the \(H\) fluid and the cluster of all other ions. For clarity, the sidebar on the right gives a zoom-in into the terminal velocities of the different ions in the cluster. (b) A color chart showing the ion-ion collision energies computed in the immediate shock downstream (the post deceleration zone at \(\tau\gtrsim 17\)).
To obtain the collision energies at other shock velocities, the values in this chart should be scaled by the factor \((\beta_{u}/0.3)^{2}\). (c) Collision depth of different ions, defined as the mean number of ion collisions within one Thomson length (Eq. 15). Only collisions above the Coulomb barrier are shown. The plot indicates that essentially all ions will collide over a length scale considerably smaller than the shock width. ### A phenomenological friction model Since we only consider interionic friction, the only equation altered in the model is Eq. (6). The other equations remain unchanged. We suppose that the change in momentum of ion \(\alpha\) due to momentum exchange with ion beam \(\alpha^{\prime}\) is proportional to the relative velocity between ion species \(\alpha\) and \(\alpha^{\prime}\), and the density of the \(\alpha^{\prime}\) beam: \(\mathcal{F}_{\alpha\alpha^{\prime}}=g_{\alpha,\alpha^{\prime}}n_{\alpha^{\prime}}(\beta_{\alpha}-\beta_{\alpha^{\prime}})\), where \(g_{\alpha,\alpha^{\prime}}\) is a characteristic friction coefficient. The total friction force exerted on ion \(\alpha\) is the sum over all ion beams. In terms of the normalized fluxes \(\tilde{j}_{\alpha^{\prime}}\) and the coordinate \(\tau\), the modified Eq. (6) reads: \[\frac{d}{d\tau}\beta_{\alpha}=\frac{\beta_{e}}{\beta_{\alpha}A_{\alpha}}\left[\mu\tilde{E}Z_{\alpha}-\sum_{\alpha^{\prime}}\tilde{g}_{\alpha\alpha^{\prime}}\frac{\tilde{j}_{\alpha^{\prime}}}{\beta_{\alpha^{\prime}}}(\beta_{\alpha}-\beta_{\alpha^{\prime}})\right], \tag{16}\] here \(\tilde{g}_{\alpha\alpha^{\prime}}\equiv\frac{g_{\alpha,\alpha^{\prime}}}{m_{p}c^{2}\sigma_{T}}\) defines a dimensionless friction coefficient. In what follows we assume for simplicity that all the coefficients are equal: \(\tilde{g}_{\alpha\alpha^{\prime}}=\tilde{g}\). This, of course, may not be realistic in practice. Nonetheless, it allows us to estimate the strength of the friction force required to tightly couple all ions. We can derive a critical value for \(\tilde{g}\) by comparing the length scale for ion deceleration by friction, \(l_{fric}\), with \(l_{T}\). The deceleration length can be approximated from Eq. (16): \[l_{fric}\approx\frac{<A_{\alpha}>\beta_{\alpha}^{2}}{\sigma_{T}\,\tilde{g}\sum_{\alpha^{\prime}}j_{\alpha^{\prime}}}. \tag{17}\] Requiring \(l_{T}=l_{fric}\), and using Eq. (1) to estimate \(\sum_{\alpha^{\prime}}j_{\alpha^{\prime}}\approx\frac{j_{e}}{<Z_{\alpha}>}\approx\frac{2j_{e}}{<A_{\alpha}>}\), we get: \[\tilde{g}_{crit}\approx<A>^{2}\beta_{u}. \tag{18}\] For a pure \(He-H\) mixture with \(\beta_{\mathrm{u}}=0.3\) and \(<A>\sim 1\), we find \(\tilde{g}_{crit}\approx 0.3\). For heavy \(\mathrm{r}\)-process elements with \(\beta_{\mathrm{u}}=0.3\) and \(<A>\sim 100\), we find \(\tilde{g}_{crit}\approx 3\cdot 10^{3}\). These results give a ballpark estimate in good agreement with the numerical results exhibited in Figs 5 and 6. ### Shock solutions Replacing Eq. (6) with Eq. (16), we obtained new solutions for the shock structure. Results for the \(H-He\) and \(\mathrm{r}\)-process compositions are shown in Fig. 5 and Fig. 6 respectively. We find no effect on the shock structure up to \(\tilde{g}\lesssim 10^{-2}\) (\(10^{1}\) for the \(\mathrm{r}\)-process case). At \(\tilde{g}\sim 10^{-1}\) (\(10^{2}\)), the friction force succeeds in coupling all the components together on length scales of a few \(l_{T}\), yet enabling ion-ion collisions.
For \(\tilde{g}>1\) (\(10^{3}\)) no velocity differences are obtained and the shock acts as a single fluid. ## 6 Application to BNS mergers The nucleosynthesis of \(\mathrm{r}\)-process elements in the outflow of BNS mergers takes place during the first second or so following its ejection, as the neutron rich material decompresses. The exact evolution of the thermodynamic conditions within each fluid element depends on its initial state and expansion velocity (as can be seen in Korobkin et al. 2012). As long as nuclear reactions (neutron capture, fission, etc.) take place, the temperature of the gas remains roughly 50 keV and the free neutrons are overabundant (dominate the number density). A sharp transition in the thermodynamic evolution takes place when the reactions terminate, roughly a second after the ejection at a density of about \(100\) gr cm\({}^{-3}\). In the first stage, almost all of the free neutrons are captured over a very short time scale (much shorter than a second). Subsequently, the density and temperature evolve as in a freely expanding radiation dominated gas, \(\rho\propto t^{-3}\) and \(T\propto t^{-1}\). The composition during this stage evolves only by radioactive decay towards the valley of stability. As we discuss below, this evolution implies that the properties of a shock that is driven into the ejecta strongly depend on whether it propagates in the ejecta before or after nuclear reactions freeze out. Figure 4: Same as Fig. 2 for a shock propagating in a BNS merger ejecta, containing He ions with a mass fraction of \(X_{He}=10^{-2}\). The color chart indicates that collisions of alpha particles with nearly all elements are above the Coulomb barrier. Figure 5: Shock solutions for a pure \(H-He\) composition, as in Fig. 1, and different values of the friction coefficient \(\tilde{g}\), as indicated. The lower-right panel displays the radiation pressure profile for the three cases presented in the other panels (\(\tilde{g}=10^{-2},10^{-1},1\)). As seen, full coupling occurs for \(\tilde{g}\geq 1\). Several different shocks are expected to cross the subrelativistic material ejected from a BNS merger. First, in the commonly accepted scenario, and as supported by observations of GW170817, a jet is launched from the compact remnant formed in the aftermath of BNS coalescence, and propagates through the merger ejecta. The motion of the jet inflates an expanding cocoon that drives an asymmetric RMS in the ejecta. The shock velocity is fastest along the jet axis (can approach the speed of light) and decreases with increasing angle. The angular distribution of the shock velocity depends on the density distribution of the ejecta, as well as the jet luminosity and opening angle, and is not known exactly. However, simulations suggest that it can exceed \(0.1c\) and even approach \(0.3c\) over a significant range of angles (Gottlieb and Nakar, 2022). Overall, a small fraction (of order a few percent) of the ejecta is shocked by the jet and the cocoon. This material is expected to dominate the early KN emission, on time scales of minutes to hours (Gottlieb et al., 2018; Hamidani and Ioka, 2023, 2022). The shock can cross the ejecta either before or after nucleosynthesis freezeout and affect the composition of the cocoon material. A significant change in the composition of this material might affect the properties of the early kilonova signal. A second source of shocks is interaction between various components of the ejecta itself.
There are several different mechanisms that expel matter, each operating on a different time scale after the merger and producing outflow with a different range of velocities (see, e.g., Nakar, 2020 and references within). Since some of these produce relatively fast ejecta over long time scales, shocks are expected to form when different components of the outflow collide. For example, Nedora et al. (2021) find a fast (0.15-0.3 c) spiral-wave driven wind that lasts as long as the central remnant does not collapse to a black-hole. This fast material, which may be ejected for a second or longer, is expected to interact with slower material, driving shocks at a velocity of \(\sim 0.1-0.2\) c. Observations of GW170817 reveal a comparable amount of mass in ejecta having a velocity of 0.2-0.3 c and ejecta with a velocity of \(0.1-0.2\) c. It is also likely that even faster material was ejected. It is plausible that collisions during the ejection of this fast component generated shocks with velocities in the range \(\sim 0.1-0.2\) c, and possibly even faster. These types of shocks form during the main mass ejection episode and, therefore, most likely cross the ejecta before nucleosynthesis freeze-out, although it is conceivable that some shocks form also at later times. Finally, the central remnant may also drive a quasi-spherical, or a wide-angle, shock into the ejecta. For example, a long lasting magnetar can inject enough energy to significantly accelerate the ejected material, thereby driving a fast shock through a large fraction of the ejecta (e.g., Beloborodov et al., 2020). This may even be the source of the fast (\(\sim 0.3\) c) component in GW170817. Such energy injection can ensue before and/or after the termination of nucleosynthesis, and may have a significant effect on the composition of most of the ejecta, and possibly its entirety. According to the results obtained in Sec. 4.3, in regions where the shock is fast enough, \(\beta_{u}\gtrsim 0.2\), collisions between neutron rich isotopes just behind the shock can trigger fission and fusion, leading to a substantial composition change, provided anomalous friction does not prevent full decoupling of the ions inside the shock. In addition, fission can also be induced by neutron capture if conditions are right. Below, we provide an estimate of the scale separation (between radiation and kinetic scales) and discuss the implications for anomalous coupling. We also consider fission by neutron capture. But we begin with a caveat - the neglect of pair production. ### Pair production Estimates based on the observed kilonova emission in GW170817 indicate ejecta mass of \(M_{ej}\approx 10^{-2}\,M_{\odot}\). The corresponding density at a radius \(r=10^{10}r_{10}\) cm is \(\rho\approx 10(M_{ej}/10^{-2}M_{\odot})r_{10}^{-3}\) gr cm\({}^{-3}\). From Fig. 15 in Levinson and Nakar (2020) we estimate a downstream temperature of \(kT_{d}\sim 10^{2}\big{(}\rho/10\,{\rm gr\,cm^{-3}}\big{)}^{1/4}(\beta_{u}/0.3)^{3}\) keV for a single-fluid RMS model. At such high temperatures the pair creation rate is large and one expects a large abundance of positrons inside the shock (Katz et al., 2010; Budnik et al., 2010; Levinson and Nakar, 2020). Thus, our neglect of this component in the shock equations seems unjustified. It is worth noting that like \(H\) ions, positrons have \(Z/A=1\). However, unlike \(H\) ions they are subject to the radiation force.
This implies that they should decouple first, as seen in single-ion relativistic RMS (Levinson, 2020), since the electric force acting on the positrons is in the same direction as the radiation force. How this would affect the velocity separation of the ions is unclear at present. We defer the inclusion of pair creation in the multi-ion RMS model for future work. ### Scale separation and coupling length In the preceding section, we demonstrated that sufficiently strong interionic friction can tightly couple the ions, preventing relative drifts. As explained there, in reality such friction represents anomalous momentum transfer between ions via scattering off plasma turbulence. Such momentum transfer occurs on microscopic scales, but the actual length scale is unclear at present. In Vanthieghem et al. (2022) it has been shown, by means of PIC simulations, that in a single-ion RRMS, where positrons are present, the anomalous coupling length scales as \(l_{c}\sim 10^{5}\,\mathcal{M}^{1/2}(\gamma_{u}/10)^{-1}l_{p}\), where \(\mathcal{M}\) is the pair multiplicity, \(\gamma_{u}\) the shock Lorentz factor and \(l_{p}\) the proton skin depth defined below. For the typical densities expected in many exploding stellar systems (e.g., SNe, LGRB) \(l_{c}\) has been shown to be much smaller than the shock width (by several orders of magnitude). This result is probably irrelevant to the Newtonian, multi-ion RMS studied here, as the plasma modes that dominate the turbulence are most likely different. However, it suggests that the anomalous coupling length may encompass thousands or even tens of thousands of skin depths. At the extreme densities of BNS merger ejecta, this may exceed the shock width, as we now show. Figure 6: Shock solutions for r-process composition taken from Goriely (1999), and \(\tilde{g}=10^{1},10^{2}\), and \(10^{3}\), as indicated. To get a rough measure of the anticipated scale separation in RMS, we compare the Thomson length, \(l_{T}=(\sigma_{T}n_{e,u})^{-1}\), with the proton skin depth at the same density, \(l_{p}=c/\omega_{p}=(m_{p}c^{2}/4\pi e^{2}n_{e,u})^{1/2}\)1. The upstream electron number density can be estimated upon assuming that all elements have \(Z/A=0.5\): Footnote 1: The skin depth of an individual ion beam \(\alpha\) is related to the proton skin depth through: \(l_{\alpha}=\sqrt{n_{e,u}A_{\alpha}/n_{\alpha,u}Z_{\alpha}^{2}}\,l_{p}\simeq l_{p}\). \[n_{e,u}\approx\frac{\rho}{2m_{p}}\approx 10^{25}(M_{ej}/10^{-2}M_{\odot})r_{10}^{-3}\quad\mathrm{cm}^{-3}, \tag{19}\] from which we find \[l_{T}/l_{p}\approx 10^{4}\left(\frac{n_{e,u}}{10^{25}\,\mathrm{cm}^{-3}}\right)^{-1/2}. \tag{20}\] Note that this ratio scales as \(r_{10}^{3/2}\). Comparing the ratio \(l_{T}/l_{p}\) in the last equation with the ion collision depth in Fig. 4c suggests that if indeed \(l_{c}>10^{4}l_{p}\), anomalous coupling may not prevent inelastic ion-ion collisions, and the consequent change of the composition profile. A more firm conclusion requires a quantitative analysis of plasma instabilities in the multi-ion system, which is beyond the scope of this paper. We defer such analysis for a future study. ### Fission by neutron capture reactions As mentioned above, a large abundance of free neutrons is present in the ejecta up to about one second post merger. If the shock propagates in the ejecta during this stage, free neutrons advected with the upstream flow will cross the shock undisturbed and collide with the heavy nuclei downstream.
This can trigger fission, provided that the kinetic energy associated with the relative velocity of the free neutrons with respect to the downstream ions exceeds the fission barrier. Fission barriers of heavy ions range roughly from a few MeV for atomic numbers \(Z>80\) (depending on \(A\)), to about 40 MeV for lighter elements (Moller et al., 2009; Royer et al., 2021). For most ions with \(Z>70\) the barrier is below 25 MeV. This implies that at shock velocities \(\beta_{\mu}\gtrsim 0.2\) most nuclei downstream of the shock will undergo fission if the shock crosses the ejecta early enough. ## 7 Summary We have demonstrated how the passage of a fast radiation-mediated shock through a multi-ion plasma can trigger nuclear reactions in the immediate shock downstream, thereby changing the composition profile behind the shock. The reason is that ions having different charge-to-mass ratios experience different deceleration rates by the electrostatic field generated inside the shock, owing to the charge separation imposed by the radiation force acting solely on the electrons. This leads to a relative velocity drift between different ion beams just downstream of the shock which, if it exceeds the activation barriers for nuclear reactions, can induce fusion or fission of various elements. In the case of BNS mergers, we find that nuclear reactions are triggered at shock velocities \(\beta_{\mu}\gtrsim 0.2\), and are dominated either by neutron capture, if the shock crosses the ejecta before freeze-out of nucleosynthesis in the unshocked ejecta, or by collisions of \(\alpha\) particles with heavy ions in the immediate downstream, if shock crossing occurs after freeze-out. The resultant change in composition behind the shock that the jet drives as it crosses the ejecta might considerably affect the properties of the early kilonova emission (minutes to hours), which is likely dominated by the cocoon. The late kilonova emission can also be affected if fast shocks arise between different components of the ejecta or if the central remnant injects a significant amount of energy into the outflow after its ejection. In SNe, proton capture reactions are found to be important at shock velocities \(\beta_{\nu}\gtrsim 0.2\), even if the abundance of H is smaller than solar by several orders of magnitude. Such high velocities are not expected in type II SNe, but are likely in type Ib/c SNe and in shock breakout from extended stellar envelopes in llGRBs (Nakar, 2015). The absence of hydrogen lines in the spectra of these SNe sets a limit on the H abundance; according to Hachinger et al. (2012) an H mass fraction of even 1% of the total may be allowed, and the H mass fraction in the outer envelope can be considerably higher. What the effect of the composition change on the emitted spectrum in these sources would be is unclear at present, although it would probably affect only spectra at very early times. We hope that future studies might be able to identify observational diagnostics that can test this model. Our analysis ignores the effect of plasma instabilities. An estimate of the ratio between kinetic and radiation scales is given in Sec. 6.2. Based on this estimate, we speculated that in BNS mergers, owing to the extreme density of the ejecta, anomalous coupling may not prevent a complete velocity separation of the different ion beams and, hence, may not affect our results significantly. 
In SNe, where the typical densities are smaller by about ten orders of magnitude, the scale separation is much larger and the effect of beam instabilities on the downstream velocity distribution of the ions is unclear. We pointed out that if the threshold drift velocity required for anomalous coupling is large enough, it may lead to the formation of collisionless subshocks, with random ion velocities in excess of the proton capture barrier. Otherwise, the onset of instabilities might suppress nuclear reactions in SNe. We plan to conduct a detailed kinetic study of instabilities in multi-ion RMS, along similar lines to the study described in Vanthieghem et al. (2022), in the near future. ## Data availability The data underlying this article will be shared on reasonable request to the corresponding author. ## Acknowledgements We thank Israel Mardor and Heinrich Wilsenach for enlightening discussions on nuclear physics. This research was partially supported by an Israel Science Foundation grant 1995/21 and a consolidator ERC grant 818899 (JetNS).
2310.02588
ViT-ReciproCAM: Gradient and Attention-Free Visual Explanations for Vision Transformer
This paper presents a novel approach to address the challenges of understanding the prediction process and debugging prediction errors in Vision Transformers (ViT), which have demonstrated superior performance in various computer vision tasks such as image classification and object detection. While several visual explainability techniques, such as CAM, Grad-CAM, Score-CAM, and Recipro-CAM, have been extensively researched for Convolutional Neural Networks (CNNs), limited research has been conducted on ViT. Current state-of-the-art solutions for ViT rely on class agnostic Attention-Rollout and Relevance techniques. In this work, we propose a new gradient-free visual explanation method for ViT, called ViT-ReciproCAM, which does not require attention matrix and gradient information. ViT-ReciproCAM utilizes token masking and generated new layer outputs from the target layer's input to exploit the correlation between activated tokens and network predictions for target classes. Our proposed method outperforms the state-of-the-art Relevance method in the Average Drop-Coherence-Complexity (ADCC) metric by $4.58\%$ to $5.80\%$ and generates more localized saliency maps. Our experiments demonstrate the effectiveness of ViT-ReciproCAM and showcase its potential for understanding and debugging ViT models. Our proposed method provides an efficient and easy-to-implement alternative for generating visual explanations, without requiring attention and gradient information, which can be beneficial for various applications in the field of computer vision.
Seok-Yong Byun, Wonju Lee
2023-10-04T05:09:50Z
http://arxiv.org/abs/2310.02588v1
# ViT-ReciproCAM: Gradient and Attention-Free Visual Explanations for Vision Transformer ###### Abstract This paper presents a novel approach to address the challenges of understanding the prediction process and debugging prediction errors in Vision Transformers (ViT), which have demonstrated superior performance in various computer vision tasks such as image classification and object detection. While several visual explainability techniques, such as CAM, Grad-CAM, Score-CAM, and Recipro-CAM, have been extensively researched for Convolutional Neural Networks (CNNs), limited research has been conducted on ViT. Current state-of-the-art solutions for ViT rely on class agnostic Attention-Rollout and Relevance techniques. In this work, we propose a new gradient-free visual explanation method for ViT, called ViT-ReciproCAM, which does not require attention matrix and gradient information. ViT-ReciproCAM utilizes token masking and generated new layer outputs from the target layer's input to exploit the correlation between activated tokens and network predictions for target classes. Our proposed method outperforms the state-of-the-art Relevance method in the Average Drop-Coherence-Complexity (ADCC) metric by \(4.58\%\) to \(5.80\%\) and generates more localized saliency maps. Our experiments demonstrate the effectiveness of ViT-ReciproCAM and showcase its potential for understanding and debugging ViT models. Our proposed method provides an efficient and easy-to-implement alternative for generating visual explanations, without requiring attention and gradient information, which can be beneficial for various applications in the field of computer vision. Computer Vision Vision Transformer Class Activation Map Explainable AI Gradient-free CAM White-Box XAI ## 1 Introduction Recently, vision transformer (ViT) has been proposed by Dosovitskiy et al. (2021), which has gained significant attention in the computer vision community due to its promising representation capabilities. Especially, ViT models such as Deit (Touvron et al. (2020)) have even outperformed traditional convolutional neural networks (CNNs) (Krizhevsky et al. (2012), LeCun et al. (1989)) in various benchmarks. However, like CNN models, ViT models are susceptible to failure for edge cases or out-of-distribution inputs, which poses a significant problem for critical applications such as medical diagnosis, security, and autonomous driving. To mitigate this issue, explainable AI (XAI) techniques have been developed for CNN models, such as Class Activation Map (Zhou et al. (2016)), Grad-CAM(Selvaraju et al. (2016)), Recipro-CAM(Byun and Lee (2022)), Ablation-CAM(Desai and Ramaswamy (2020)), and Score-CAM(Wang et al. (2020)), which provide high-quality visual explanations using gradient or gradient-free methods. However, the research on XAI techniques for ViT is still in its early stages. Only a few approaches have been proposed, such as class agnostic visual activation maps via analyzing attention flow, as suggested by Chefer et al. (2021), and the attention-rollout with relevance information to support CAM proposed by Chefer et al. (2021). The attention-rollout (Chefer et al. (2021)) technique considered skip-connection information and pairwise attention relationship between multi-layer attentions so it improved visualization capability than previous single attention layer analysis methods but it cannot give class specific explainability. Chefer et al. 
(2021) improved the visualization quality of attention-rollout and enabled the generation of class-specific activation maps with relevance information, but the method requires gradient information and therefore cannot be used to support explainability at inference time. Moreover, all existing techniques require access to the activations of all attention layers in a model, which entails a deeper dependency on the model internals. To overcome the limitations of existing XAI techniques for ViT, we propose an accurate and efficient reciprocal information-based approach. Our method utilizes the reciprocal relationship between new spatially masked feature inputs (positional token masking) and the network's prediction results. By identifying these relations, we can generate visual explanations that provide insights into the model's decision-making process without using attention-layer information or assuming internal relationships between attention layers, as previous studies do. Our approach not only improves the interpretability of ViT models but also enables users to obtain XAI results in their inference systems without a trainable model. We evaluate our method on a range of ViT classification models and demonstrate its effectiveness in generating high-quality visual explanations that aid in understanding the models' behavior, as well as its accuracy on the Average Drop-Coherence-Complexity (ADCC) metric suggested by Poppi et al. (2021). The main contributions of this paper include: * We present a new, efficient, and gradient-free XAI solution for ViT that utilizes the reciprocal relationship between the spatially masked encoding features and the network's prediction results. * The proposed method, ViT-ReciproCAM, achieves state-of-the-art accuracy on various XAI metrics such as average drop, average increase, and ADCC, while providing approximately \(1.5\times\) faster execution performance than the Relevance method. ## 2 Related Work ### Visual Explainability for CNN Research on the visual explainability of CNNs has been conducted for a number of years due to their black-box nature, with the goal of revealing the learning mechanism of CNNs. Early research efforts used a deconvolution approach ( Zeiler and Fergus (2014); Springenberg et al. (2015)) to analyze the learned features of each layer and extract important pixels in the input image in a class-agnostic way. This research served as motivation for subsequent white-box XAI researchers. From an explainability perspective, layer-wise activation or input sensitivity measures are insufficient to explain the prediction results of a given model for specific inputs. The first work that clarified the relationship between input data and CNN output was CAM (Zhou et al. (2016)). CAM produces a feature map that highlights important regions of an image for a target class by multiplying a global average pooling activation vector with a fully connected weight vector specific to the class. CAM is a useful tool for AI researchers to analyze the neural network architecture and gain an understanding of how the network reacts to specific classes of input data. However, CAM has a limitation in that it requires the presence of a global average or max pooling layer in the architecture. This means that certain neural network architectures may not be compatible with the CAM method, and therefore, alternative explainability methods may need to be considered. To generalize the CAM technique for all network architectures, gradient-based methods ( Selvaraju et al. (2016); Chattopadhay et al. (2018); Fu et al. (2020); Omeiza et al. 
(2019)) have been studied by several researchers. These methods rely on the gradient information of the target class confidence output with respect to the activation map of the convolution layer and these operation can be done with training framework's back propagation information. The gradient-based approaches have emerged as a popular solution, as they can overcome the limitations of CAM and provide high quality visual explainability even though these require a trainable model to utilize gradient information, which restricts their use in post-depployment frameworks like ONNX ( Bai et al. (2019)) or OpenVINO ( Intel (2019)). Moreover, saturation and false confidence issue in gradient-based methods led to the development of gradient-free techniques like Score-CAM ( Wang et al. (2020)), Smooth Score-CAM ( Wang et al. (2020)), Integrated Score-CAM ( Naidu et al. (2020)), Ablation-CAM ( Desai and Ramaswamy (2020)), and Recipro-CAM ( Byun and Lee (2022)). Black-box techniques ( Petsiuk et al. (2018); Kenny et al. (2021); Dabkowski and Gal (2017); Petsiuk et al. (2020)) are suggested to remove architectural dependency in white-box methods. These approaches can generate a saliency map without relying on network architectural information or gradient computability. However, they typically require much more time than white-box methods and may exhibit relatively lower accuracy. ### Visual Explainability for ViT The NLP community has conducted extensive research on explainability for the Transformer architecture ( Vaswani et al. (2017)). Many studies ( Lee et al. (2017); Coenen et al. (2019); Pruthi et al. (2019); Vashishth et al. (2019); Wiegreffe and Pinter (2019); Jain and Wallace (2019)) have focused on analyzing attention activation or relations for single attention layers within the self-attention block. A recent work proposed a novel approach to analyzing the relationship between attention layers and consolidating their weights to provide activation scores for each token ( Abnar and Zuidema (2020)). However, this method's simple linear relationship assumption between attention layers may not be suitable for abnormally highly activated relations or early attention blocks with noisy information. Additionally, this technique is class-agnostic, and therefore, it cannot provide class-specific explainability. Building upon the recent proposal of the ViT architecture by Dosovitskiy et al. (2021), visual explainability research has commenced, addressing the limitations of the attention-rollout method ( Abnar and Zuidema (2020)), which assumes a linear relationship between attention layers and is class-agnostic. The proposed approach employs relevance propagation ( Montavon et al. (2017)) and gradient information to overcome these shortcomings. With a systematic approach, the authors achieved a significant advance in analyzing attention propagation in the transformer architecture, demonstrating clear differences from the attention-rollout method. Furthermore, they showed that this technique can be utilized for vision transformers with various image inputs in qualitative analysis and appendix. However, it is worth noting that this approach focuses on analyzing attention relations for achieving transformer model explainability and requires a thorough understanding of the given model's architecture and trainable model. XAI solutions must be evaluated with standardized metrics in a unified manner to ensure their effectiveness. However, until the recent proposal of the ADCC metric by Poppi et al. 
(2021), there was no universally accepted standard metric for XAI evaluation. Previous studies, such as Chattopadhay et al. (2018); Petsiuk et al. (2018); Fu et al. (2020), had suggested their own metrics such as Avg Drop, Avg Inc, Deletion, and Insertion. Nevertheless, these standalone metrics could lead to incorrect performance evaluations, as demonstrated by Poppi et al. (2021) using the Fake-CAM example. ADCC is a comprehensive metric that considers the average drop, coherence, and complexity of the model explanations and provides a single score as their harmonic mean with the following equation: \[\text{ADCC}(x)=3\Bigg(\frac{1}{\text{Coherency}(x)}+\frac{1}{1-\text{Complexity}(x)}+\frac{1}{1-\text{AverageDrop}(x)}\Bigg)^{-1}. \tag{1}\] Figure 1: ViT-ReciproCAM architecture: The output of the first ’LayerNorm’ layer of the last transformer encoder block is used to generate (#tokens \(-1\)) new spatially masked feature inputs, which are fed to the subsequent layers as a batch; the normalized prediction scores form the target class’s saliency map. To evaluate the precision of our approach, we will use the ADCC metric, which takes into account multiple aspects of model explanations and provides a more accurate and reliable performance evaluation compared to standalone metrics. ## 3 ViT-ReciproCAM We propose a novel method, called ViT-ReciproCAM, that efficiently generates saliency maps in ViT without requiring gradient or attention information, as illustrated in Figure 1. Specifically, ViT-ReciproCAM extracts a feature map with dimensions \((H\times T\times D)\) from the first layer (LayerNorm) of the last transformer encoder block, where \(H\) denotes the number of heads (in the first layer, \(H=1\) since all heads are concatenated), \(T\) denotes the number of tokens, and \(D\) denotes the encoder dimension. From this feature map, ViT-ReciproCAM generates \((T-1)\) spatial masks, each of which corresponds to a new input feature map to be used in the subsequent layer. For each spatial location \((x,y)\) defined by the center of a Gaussian spatial mask, ViT-ReciproCAM measures the prediction scores of a specified class for the corresponding feature token. We note that our explanation ignores the batch dimension for simplicity. The efficacy of ViT-ReciproCAM is evaluated on the ImageNet validation dataset and is shown to outperform the previous state-of-the-art method. ### Spatial Token Mask and Feature Generation The spatial token mask \(M\) has dimensions \((N\times T)\), where \(N=T-1\) and \(T\) is the '[cls]' token plus the number of patches, \(P^{2}\). Each of the \(N\) masks sets only the \(3\times 3\) tokens matching the corresponding spatial positions in the patched input image to Gaussian weights and all other tokens to zero, except for the class token, as depicted in Figure 1. The new input feature maps \((N\times H\times T\times D)\) can be generated via element-wise multiplication of the original feature map \((H\times T\times D)\) with the spatial mask \((N\times T)\), expressed as \[\tilde{F}_{k}^{n}=F_{k}\odot M^{n} \tag{2}\] with the Hadamard product \(\odot\), where \(F_{k}\) is the original feature map of the first layer of the last transformer encoder block and \(\tilde{F}_{k}^{n}\) are the new feature maps generated by the operation. Figure 2-(c) depicts the process of generating feature maps through a spatial mask. Each feature map comprises \(3\times 3\) patched regions of the input image with the original features of these regions rescaled by a \(3\times 3\) Gaussian weight. 
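To make the masking and reciprocal-scoring procedure concrete, a minimal PyTorch-style sketch is given below. It is only an illustration under our own assumptions: the function name and interface are ours, `head` stands for the layers following the first LayerNorm of the last encoder block (the \(\mathcal{H}\) part defined in the next subsection), tokens are assumed to be ordered row-major after the [cls] token, and the \(3\times 3\) Gaussian weights are arbitrary choices since the paper does not specify them.

```python
import torch

def vit_reciprocam_saliency(feats, head, class_idx, P, use_gaussian=True):
    """Illustrative sketch of Eqs. (2)-(4), under the assumptions stated above.

    feats : (T, D) tensor, output of the first LayerNorm of the last encoder
            block for one image (token 0 is the [cls] token, heads concatenated).
    head  : callable mapping a (N, T, D) batch of token features through the
            remaining layers to (N, num_classes) logits.
    class_idx : target class c.
    P     : number of patches per side, so T = 1 + P * P.
    """
    T, D = feats.shape
    N = T - 1  # one mask per spatial token

    # Assumed 3x3 Gaussian weights for the kept tokens (the sigma is our choice).
    g = torch.tensor([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5],
                      [0.25, 0.5, 0.25]])

    masks = torch.zeros(N, T)
    masks[:, 0] = 1.0  # keep the [cls] token in every masked feature map
    for n in range(N):
        cy, cx = divmod(n, P)
        for dy in range(-1, 2):
            for dx in range(-1, 2):
                y, x = cy + dy, cx + dx
                if 0 <= y < P and 0 <= x < P:
                    w = g[dy + 1, dx + 1] if use_gaussian else float(dy == 0 and dx == 0)
                    masks[n, 1 + y * P + x] = w

    # Eq. (2): Hadamard product, broadcast over the token dimension.
    masked_feats = feats.unsqueeze(0) * masks.unsqueeze(-1)      # (N, T, D)

    # Eq. (4): batched forward pass through the remaining layers H.
    with torch.no_grad():
        scores = torch.softmax(head(masked_feats), dim=-1)[:, class_idx]  # (N,)

    # Eq. (3): min-max normalisation and reshape to a P x P saliency map.
    sal = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    return sal.reshape(P, P)
```

Setting `use_gaussian=False` in this sketch reduces each mask to the single-token variant mentioned below.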
The resulting masked input feature maps are utilized to compute the saliency score of the center token based on the network's predicted specific class confidence. While the use of the Gaussian kernel for generating the spatial mask is our default approach, it is noteworthy that the mask can also be generated using a single token masking method. To generate saliency map, ViT-ReciproCAM divides the network into two parts based on the first layer (LayerNorm) of the last transformer encoder block. As depicted in Figure2-(a), the first part is denoted as \(\mathcal{G}\), and the subsequent layers are denoted as \(\mathcal{H}\). By feeding a batch of \(N\) new feature inputs into the last layers (\(\mathcal{H}\)), we can generate prediction scores for a specified class. These scores indicate the relative importance of the masked token for the class prediction, and the class's saliency map can be calculated using Equation 3. Figure 2: (a) feature extraction from the first LayerNorm in the last encoder block (default), (b) feature extraction from block output, (c) masked tokens cover the dashed blue box area in input image. \[S^{c}=\text{reshape}\left[\frac{\mathbf{Y}_{c}-\min(\mathbf{Y}_{c})}{\max(\mathbf{Y}_ {c})-\min(\mathbf{Y}_{c})},(P,P)\right], \tag{3}\] where the \(N\times 1\) prediction scores \(\mathbf{Y}_{c}=\left[y_{c}^{1},\ldots,y_{c}^{N}\right]^{T}\) for a class \(c\) is composed of \[y_{c}^{n}=\text{softmax}\left(\mathcal{H}(\mathcal{G}(I)\odot M^{n})\right)_{c} \tag{4}\] for \(n=1,\ldots,N\), where \(y_{c}\) is a one-dimensional vector of size \(N=(P\times P)\) and has \([0,1]\) scalar value ranges, \(\text{reshape}[P,Q]\) changes given \(P\) dimensional scalar values into \(Q\) dimensional scalar values. ## 4 Experiments In Sections 4.1, 4.2, and 4.3, we will provide a quantitative, qualitative, and performance analysis, respectively, to demonstrate the accuracy and effectiveness of ViT-ReciproCAM. ### Quantitative Analysis To perform quantitative analysis, we utilized the ADCC metric proposed by Poppi et al. (2021)] and conducted experiments under their prescribed conditions. Our evaluation dataset comprised the ILSVRC2012 Russakovsky et al. (2015) validation set, which contains \(50,000\) samples. We compared our results to Chefer et al. (2021) by evaluating the performance of ViT-ReciproCAM on ViT_Deit_Base_Patch16_224 and ViT_Deit_Small_Patch16_224 (Touvron et al. (2020)). Prior to feeding the images into the model, we resized them to \(256\times 256\) and center-cropped them to \(224\times 224\). Furthermore, we normalized the images using a mean of \([0.485,0.456,0.406]\) and a standard deviation of \([0.229,0.224,0.225]\). The outcomes of our analysis are presented in Table 1. ViT-ReciproCAM achieved state-of-the-art ADCC score on two ViT architectures. ### Qualitative Analysis In this study, we employed the ImageNet pretrained ViT_Deit_Base_Patch16_224 model (Touvron et al. (2020)) provided by Facebook Research. We used the shared source provided by Chefer et al. (2021) to evaluate the Relevance method's saliency map. To ensure consistency in the quantitative analysis, we applied identical pre-processing and normalization techniques to the input images. The input images were categorized into three groups for systematic analysis: single-object images, multiple identical objects in the input images, and input images with multiple classes. In Figure 3, we present the results of the initial case comparison. 
The Attention-Rollout method (Abnar and Zuidema (2020)) produces a class-agnostic output with a high level of noise in the saliency map. In contrast, the Relevance (Chefer et al. (2021)), ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\) methods exhibit precise activation of the'mantis', 'peacock', and 'trolleybus' classes. However, the overall quality of Relevance's saliency map is inferior to that of ViT-ReciproCAM and ViT-ReciproCAM \([3\times 3]\) approaches, although ViT-ReciproCAM shows some activated background regions for 'peacock' and 'trolleybus'. The second experiment aims to evaluate the localization performance of the proposed methods. The results are presented in Figure 4. Compared to the Relevance approach, which activates only limited key features of each object through a narrow range of the saliency map, the ViT-ReciproCAM family exhibits wider coverage of the class object regions. Notably, the ViT-ReciproCAM \([3\times 3]\) method yields highly localized saliency maps for the target class objects. In third experiment, we evaluate the resolution capability of different XAI methods across multiple object classes in an image. To achieve this, we deactivate objects from different classes within an image, allowing the resulting saliency map to reflect class-specific results. The comparison of different methods is presented in Figure 5. The first row of Figure 5 shows the results obtained for an image of a 'border collei' containing a 'chihuahua' object, where the network predicted 'border collei' with the highest probability. While Relevance, ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\) successfully separate the different class object(s) from the target class object(s), Attention-Rollout fails to \begin{table} \begin{tabular}{l c c c c c|c c c c} \hline \hline & \multicolumn{3}{c|}{Det.Base\_Patch16\_224} & \multicolumn{3}{c}{Det.Small\_Patch16\_224} \\ \hline Method & **Drop** & Inc & **Chefer** & **Compal** & **ADCC** & **Drop** & Inc & **Chefer** & **Compal** & **ADCC** \\ & (\(\downarrow\)) & (\(\uparrow\)) & (\(\uparrow\)) & (\(\downarrow\)) & (\(\uparrow\)) & (\(\downarrow\)) & (\(\uparrow\)) & (\(\uparrow\)) & (\(\downarrow\)) & (\(\uparrow\)) \\ \hline Relevance[Chefer et al.(2021)] & 55.31 & 9.30 & 79.41 & 13.17 & 64.53 & 55.83 & 10.00 & 80.67 & 12.53 & 64.54 \\ ViT-ReciproCAM & 33.57 & 9.78 & 70.22 & 53.54 & 59.03 & 31.42 & 11.87 & 67.62 & 54.21 & 58.58 \\ **ViT-ReciproCAM**\([3\times 3]\) & 25.82 & 14.12 & 88.61 & 46.35 & **69.11** & 27.69 & 17.12 & 84.58 & 41.16 & **70.34** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of ViT-ReciproCAMs and Relevance using ADCC metric on two ViT architectures. Figure 4: Same multiple-objects results: Yachts, Goldfishs, and Sparrows inputs process with Attention-Rollout, Relevance, ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\) to generate saliency maps. Figure 3: Single-object results: Mantis and Peacock inputs process with Attention-Rollout, Relevance, ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\) to generate saliency maps. activate the target object(s). In the second row, we generate a saliency map for the 'chihuahua' class to evaluate the quality of the saliency maps generated for other classes in a multiple classes input case. The third and fourth rows show another multiple classes case for 'Elephant' and 'Zebra' using ImageNet's image normalization method described in Section 4.1. All three methods except for Attention-Rollout successfully localize each class. 
However, the quality of the saliency map generated by the ViT-ReciproCAM family is superior to Relevance. ### Performance Analysis We conducted performance measurement experiments on Relevance, ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\), using the Deitbasepatch16224 ImageNet pre-trained model. The test dataset consisted of \(1,000\) randomly selected images from the ILSVRC2012 validation dataset, and the image preprocessing and crop size were consistent with the quantitative analysis process. The experimental setup included a single Nvidia Geforce RTX 3090 device and an i9-11900 Intel CPU. As indicated in Table 2, Relevance exhibited a performance slowdown of approximately 1.5x compared to ViT-ReciproCAM Family. ## 5 Ablation Study In this section, we present two experiments aimed at assessing the impact of the class token ([CLS] token) on spatially masked feature maps, and the dependency of feature extraction position in transformer encoder blocks. We evaluated the experiments by measuring ADCC scores for two cases: without the class token (wo_CLS_token), Figure 5: Multiple-class results: Border Collie/Chihuahua and Zebra/Elephant inputs process with Attention-Rollout, Relevance, ViT-ReciproCAM, and ViT-ReciproCAM \([3\times 3]\) to generate each target class’s saliency maps. by extracting the feature map from output of the \((N-1)\) block (block[-2]) using ViT_Deit_Base_Patch16_224, ViT_Deit_Small_Patch16_224, and ViT_Deit_Tiny_Patch16_224 ViT architectures. The experimental conditions were kept the same as those used in Section 4.1. ### Class Token Effect In order to investigate the impact of class token information, we modified the default spatial mask generation logic to zero out the [cls] token feature values, thus reducing the effect of the [cls] token in skip-connected feature maps after the multi-head attention (MHA) layer. We compared the results of the modified model to the default model with the class token included, and summarized the findings in Table 3. Eliminating class token information from spatially masked new feature maps produced different effects depending on the capacity of ViT architectures and the kernel method used (Gaussian [\(3\times 3\)] and Dirac Delta). In the Base model, eliminating the [cls] token information resulted in higher ADCC scores compared to the default cases, but the score gap between the models was smaller for the Gaussian kernel method than the Dirac Delta kernel method. However, in the other two low-capacity architectures, the ADCC scores for the eliminated cases were lower than the default cases. The score gap tendencies between the two cases in the two kernel methods were consistent with the high-capacity architecture. Therefore, the [cls] token information can serve as a helper to maintain the stability of ViT-ReciproCAM in an architecture-agnostic manner. The qualitative comparison is depicted in Figure 6. ### Effect of Feature Extraction Position In our default approach, we extract feature information from the first 'LayerNorm' layer of the last transformer encoder block in the network. This is because utilizing the unmodified skip connected feature information from the previous encoder block can provide more accurate explainability. This method is compared with a block feature extraction method illustrated in Figure 2-(b) to show superiority of it and the result is summarized in Table 4. 
## 6 Conclusion In this paper, we proposed a novel approach called Vit-ReciproCAM to generate visual explainability for vision transformers (ViT) without requiring gradient and attention information. Vit-ReciproCAM masks the feature map of the first layer of the last transformer encoder block to leverage the correlation between the spatially masked feature maps and the network outputs. Our experimental results demonstrate that Vit-ReciproCAM outperforms other saliency map generation methods in terms of both performance and execution time, achieving state-of-the-art results (\(69.11/70.34\)) on the ADCC metric. Remarkably, Vit-ReciproCAM's execution time is approximately 1.5\(\times\) faster than that of the \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Base & (Diff) & Small & (Diff) & Tiny & (Diff) \\ \hline **ViT-ReciproCAM** & **59.03** & - & **58.58** & - & **60.00** & - \\ ViT-ReciproCAM\_block & 52.92 & -6.11 & 58.46 & -0.12 & 59.77 & -0.23 \\ \hline **ViT-ReciproCAM**\([3\times 3]\) & **69.11** & - & **70.34** & - & **70.37** & - \\ ViT-ReciproCAM \([3\times 3]\)\_block & 67.17 & -1.94 & 66.56 & -3.78 & 65.92 & -4.45 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study results: effect of feature extraction position \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{Execution} & \multirow{2}{*}{FPS} & \multirow{2}{*}{Ratio} \\ & Time (ms) & & & \\ \hline ViT-ReciproCAM & 86.4 & 11.57 & - \\ **ViT-ReciproCAM**\([3\times 3]\) & 88.1 & 11.35 & 1.02\(\times\) \\ Relevance & 132.2 & 7.56 & 1.50\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 2: A comparative analysis of the execution times of three distinct methodologies. The measurement of execution time was conducted using \(1,000\) inputs, and the resultant average time was calculated. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Base & (Diff) & Small & (Diff) & Tiny & (Diff) \\ \hline **ViT-ReciproCAM** & 59.03 & - & **58.58** & - & **60.00** & - \\ ViT-ReciproCAM\_w\_cls\_token & **67.05** & 8.02 & 48.03 & -10.55 & 56.82 & -3.18 \\ \hline **ViT-ReciproCAM**\([3\times 3]\) & 69.11 & - & **70.34** & - & **70.37** & - \\ ViT-ReciproCAM \([3\times 3]\)\_w\_cls\_token & **69.58** & 0.47 & 69.51 & -0.83 & 69.78 & -0.59 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study results: effect of class token attention and gradient-based Relevance method. The proposed approach could potentially aid in improving model interpretability and assist practitioners in understanding how vision transformers make decisions.
2303.03976
FRA -- A new Fast, Robust and Automated pipeline for the detection and measurement of solar-like oscillations in time-series photometry of red-giant stars
We developed, tested and validated a new Fast, Robust and Automated (FRA) tool to detect solar-like oscillations. FRA is based on the detection and measurement of the frequency of maximum oscillation power $\nu_{max}$, without relying on the detection of a regular frequency spacing to guide the search. We applied the FRA pipeline to 254 synthetic power spectra representative of TESS red giants, as well as 1689 red giants observed by Kepler and 2344 red giants observed by TESS. We obtain a consistency rate for $\nu_{max}$ compared with existing measurements of $\sim$ 99% for Kepler red giants and of $\sim$ 98% for TESS red giants. We find that using $\nu_{max}$ as an input parameter to guide the search for the large frequency separation $\Delta\nu$ through the existing Envelope AutoCorrelation Function (EACF) method significantly improves the consistency of the measured $\Delta\nu$ in the case of TESS stars, allowing to reach a consistency rate above 99%. Our analysis reveals that we can expect to get consistent $\nu_{max}$ and $\Delta\nu$ measurements while minimizing both the false positive measurements and the non-detections for stars with a minimum of four observed sectors and a maximum G magnitude of 9.5.
C. Gehan, T. L. Campante, M. S. Cunha, F. Pereira
2023-03-07T15:25:34Z
http://arxiv.org/abs/2303.03976v1
FRA - A new Fast, Robust and Automated pipeline for the detection and measurement of solar-like oscillations in time-series photometry of red-giant stars ###### Abstract We developed, tested and validated a new Fast, Robust and Automated (FRA) tool to detect solar-like pulsations. FRA is based on the detection and measurement of the frequency of maximum oscillation power \(\nu_{\rm max}\), independently from the large frequency separation \(\Delta\nu\). We applied the FRA pipeline to 254 artificial power spectra representative of TESS red giants, as well as 1689 red giants observed by _Kepler_ and 2344 red giants observed by TESS. We obtain a consistency rate for \(\nu_{\rm max}\) compared with existing measurements above 99% for _Kepler_ red giants and above 97% for TESS red giants. We find that using \(\nu_{\rm max}\) as an input parameter to guide the search for \(\Delta\nu\) through the existing Envelope AutoCorrelation Function (EACF) method significantly improves the consistency of the measured \(\Delta\nu\) in the case of TESS stars, allowing to reach a consistency rate above 99 %. Our analysis reveals that we can expect to get consistent \(\nu_{\rm max}\) and \(\Delta\nu\) measurements while minimizing both the false positive measurements and the non-detections for stars with a minimum of four observed sectors and a maximum G magnitude of 9.5. Astronomical - Methods: data analysis - Techniques: photometric - Stars: interiors - Stars: low-mass - Stars: solar-like Article Type 1,2]C. Gehan 1,2]T. L. Campante 2,3]M. S. Cunha 2,3]F. Pereira 3]F. ## 1 Introduction Many fields of astrophysics rely on the precise knowledge of the stellar properties, in particular stellar masses, radii, and ages. The characterisation of exoplanets requires the characterisation of their host star (Rauer et al., 2014). Similarly, Galactic archaeology, which aims at tracing the formation and evolution of the Milky Way, is based on the study of the stellar populations of its various structural components (Miglio et al., 2017). The detailed understanding of stellar evolution and its physical mechanisms is therefore fundamental and urgent for these fields, which are undergoing an exponential development. In this context, we need powerful tools for the observational diagnosis of stellar interiors. This is made possible by asteroseismology, which consists in studying stellar oscillations containing information on the physical conditions prevailing in the interior of stars. Pulsating evolved low-mass stars, such as red giants and subgiants, represent ideal laboratories to study the physical mechanisms governing stellar cores, as their oscillation spectrum exhibits so-called mixed modes probing the most inner radiative regions in addition to the external convective envelope (Beck et al., 2012; Bedding et al., 2011). Important breakthroughs in stellar physics have been made in the last 15 years with the advent of the ultra-high precision photometry space missions CoRoT (ESA, 2006-2012) and _Kepler_/K2 (NASA, 2009-2018), which led to the detection of oscillations in tens of thousands of stars. The TESS space mission (NASA) has been launched in 2018, and its observation strategy allows an entire hemisphere to be mapped in a single year. Most of the sky will receive 27 days of continuous observations, while a small area around the poles will get more or less continuous coverage for the entire year. 
TESS completed its 2-year, all-sky nominal survey in July 2020 and is dramatically increasing the number of red giants with detected oscillations, which is expected to be on the order of \(10^{5}\)(Campante et al., 2016; Mackereth et al., 2021; Silva Aguirre et al., 2020). TESS enables in particular the measurement of the two global asteroseismic properties \(\nu_{\rm max}\) and \(\Delta v\)(Mackereth et al., 2021). The frequency of maximum oscillation power \(\nu_{\rm max}\) is proportional to the surface gravity \(g\) and effective temperature \(T_{\rm eff}\) of the star such that (Brown, Gilliland, Noyes, & Ramsey, 1991; Kjeldsen & Bedding, 1995) \[\nu_{\rm max}\propto\frac{g}{\sqrt{T_{\rm eff}}}. \tag{1}\] \(\nu_{\rm max}\) corresponds to the center of the Gaussian envelope characterising solar-like oscillations in the power spectrum. The large frequency separation \(\Delta v\) is related to the mean stellar density \(\rho\) as (Ulrich, 1986) \[\Delta v\propto\sqrt{\rho}. \tag{2}\] \(\Delta v\) characterises the regular frequency spacing of acoustic oscillations modes. The asymptotic expression of the large separation represents the inverse of the time it takes for a pressure wave to travel back and forth from the centre to the surface of the star, such that (Ulrich, 1986) \[\Delta v=\left(2\int\limits_{0}^{R}\frac{{\rm d}r}{c_{\rm s}}\right)^{-1}, \tag{3}\] where \(R\) is the stellar radius and \(c_{\rm s}\) is the internal sound-speed. We can determine accurate seismic masses and radii using \(\nu_{\rm max}\) and \(\Delta v\), independently from modelling, through the scaling relations (Kallinger, Weiss, et al., 2010; Kjeldsen & Bedding, 1995; Mosser et al., 2013) \[\frac{M}{M_{\odot}}=\left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)^{ 3}\left(\frac{\Delta v}{\Delta v_{\odot}}\right)^{-4}\left(\frac{T_{\rm eff} }{T_{\rm eff,\odot}}\right)^{3/2}, \tag{4}\] and \[\frac{R}{R_{\odot}}=\left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right) \left(\frac{\Delta v}{\Delta v_{\odot}}\right)^{-2}\left(\frac{T_{\rm eff}}{ T_{\rm eff,\odot}}\right)^{1/2}, \tag{5}\] where \(\nu_{\rm max,\odot}=3050\,\mu\)Hz, \(\Delta v_{\odot}=135.5\,\mu\)Hz and \(T_{\rm eff,\odot}=5777\) K are the solar values. When spectroscopic measurements of the effective temperature are not available, we can use a proxy for red giants as given by Mosser et al. (2012) \[T_{\rm eff}=4800\,\left(\frac{\nu_{\rm max}}{40}\right)^{0.06}, \tag{6}\] with \(\nu_{\rm max}\) in \(\mu\)Hz. The precision on the exoplanet properties such as the radius, mass and age is directly related to the precision reached on the determination of stellar properties. TESS allows us to reach a relative precision of \(\sim 4\%\) in \(\Delta v\) and \(\sim 3.5\%\) in \(\nu_{\rm max}\)(Silva Aguirre et al., 2020). Radii, masses, and ages can be obtained with uncertainties at the 6%, 14%, and 50% level, and decrease to 3%, 6%, and 20% when parallax information from Gaia DR2 (Gaia Collaboration et al., 2018) is included. These precision levels are similar to those obtained with the 4-year long _Kepler_ datasets, which give rise to relative precisions of 2.9% in radius, 7.8% in mass and 25% in age (Wu et al., 2018; Yu et al., 2018). Additionally, Galactic archaeology also requires precise stellar radii, masses and ages (Miglio et al., 2017). 
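To illustrate how the scaling relations of Eqs. (4)-(6) are applied in practice, a minimal Python sketch is given below; the function name and interface are ours, and the solar reference values are those quoted above.

```python
def seismic_mass_radius(nu_max, delta_nu, t_eff=None):
    """Scaling relations of Eqs. (4)-(6); nu_max and delta_nu in micro-Hz, t_eff in K."""
    NU_MAX_SUN = 3050.0    # micro-Hz
    DELTA_NU_SUN = 135.5   # micro-Hz
    T_EFF_SUN = 5777.0     # K

    # Eq. (6): proxy for the effective temperature of red giants when no
    # spectroscopic value is available.
    if t_eff is None:
        t_eff = 4800.0 * (nu_max / 40.0) ** 0.06

    mass = (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DELTA_NU_SUN) ** -4 \
        * (t_eff / T_EFF_SUN) ** 1.5                      # Eq. (4), in solar masses
    radius = (nu_max / NU_MAX_SUN) * (delta_nu / DELTA_NU_SUN) ** -2 \
        * (t_eff / T_EFF_SUN) ** 0.5                      # Eq. (5), in solar radii
    return mass, radius

# Example with values typical of a low-luminosity red giant.
print(seismic_mass_radius(nu_max=100.0, delta_nu=8.9))
```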
Several methods exist to measure \(\nu_{\rm max}\) and/or \(\Delta v\), in particular: * the COR (Mosser & Appourchaux, 2009), OCT (Hekker et al., 2010) and A2Z (Mathur et al., 2010) methods first search for the signature of \(\Delta v\), using the Envelope AutoCorrelation Function (EACF) method in the case of the COR pipeline and by computing the power spectrum of the power spectrum for the other methods, for several windows inside the spectrum in the case of the OCT pipeline and for the whole spectrum for the A2Z pipeline. The three methods then rely on the \(\Delta v\) measurement to target the search for \(\nu_{\rm max}\) through the local fit of a Gaussian envelope for the oscillations; * the CAN (Kallinger, Mosser, et al., 2010), BAM (Zinn, Stello, Huber, & Sharma, 2019) and BHM (Elsworth, Themessl, Hekker, & Chaplin, 2020) pipelines use a Bayesian Markov-Chain Monte-Carlo (MCMC) algorithm to fit a global model to the observed power density spectrum, including a description of the background signal and a Gaussian envelope for the oscillations. The CAN and BAM methods then fit Lorentzian profiles for the radial, dipole and quadrupole modes, and finally derive \(\Delta v\) from this mode identification. The BHM method then computes the power spectrum of the power spectrum to derive \(\Delta v\); * the approach of Hon, Stello, & Zinn (2018) consists in using supervised deep learning to detect oscillations in red giants and estimate \(\nu_{\rm max}\). However, it is crucial to be able to detect \(\nu_{\rm max}\) independently from \(\Delta v\), which is not the case of COR, OCT and A2Z. Indeed, \(\Delta v\) is not necessarily measurable. A short observation duration and/or a low signal-to-noise ratio results in a resolution of the oscillation modes that is insufficient to reveal a regular spacing of pressure modes, while the Gaussian bump of oscillations can still be visible, making it possible to measure \(\nu_{\rm max}\) and to detect solar-like oscillations. Additionally, CAN, BAM and BHM are not optimized to assess whether or not a given star exhibits oscillations, being rather aimed at deriving an accurate \(\nu_{\rm max}\) measurement for stars that are known to exhibit clear oscillations (Hon et al., 2018). On the other side, the method of Hon et al. (2018) aims at mimicking a visual detection of oscillations by an expert eye and does not rely on a statistical criteria. We thus need a pipeline allowing to statistically assess the existence of solar-like oscillations and to measure \(\nu_{\rm max}\) independently from \(\Delta\nu\). Such a pipeline has to be fast, robust and automated in order to efficiently analyse the harvest of red giants observed by TESS and, in a near future, PLATO. We here present a new Fast, Robust and Automated (FRA) pipeline to detect solar-like oscillations, which can be easily implemented. It relies on the statistical detection of \(\nu_{\rm max}\) to validate the presence of oscillations. FRA presents the advantage of giving a \(\nu_{\rm max}\) measurement which is independent from \(\Delta\nu\). Additionally, FRA uses an innovative approach relying on a local search for \(\nu_{\rm max}\) over the power spectrum, while methods like CAN, BAM and BHM require a MCMC algorithm to fit the stellar background together with the oscillations. We deal with reduced computation times since our approach only requires 5 free parameters, for which we use realistic bounds to guide the search. 
Our pipeline has already been used in several studies (Gaulme et al., 2022; Huber et al., 2022, and Jiang et al. submitted) as well as in several collaborations within the TESS Asteroseismic Science Operations Center (TASOC)1. Section 2 is dedicated to the description of the FRA pipeline and to the measurement of \(\nu_{\rm max}\). In Sect. 3, we test and validate the FRA pipeline on TESS-like synthetic oscillation spectra built with different values of \(\nu_{\rm max}\) typical of low-luminosity red-giant branch (LLRGB) stars and corresponding to different numbers of observed TESS sectors. We also discuss the performance of our pipeline as a function of the stellar magnitude of the number of observed TESS sectors. In Sect. 4, we test and validate our FRA pipeline on _Kepler_ red giants benefiting from exquisite 4 year-long observations, as well as TESS red giants from the Southern Continuous Viewing Zone (SCVZ) having between 2 sectors (54.8 days) and 13 sectors (about 1 year) of observations. In Sect. 5, we use the EACF method (Mosser & Appourchaux, 2009) to obtain \(\Delta\nu\) measurements in addition to \(\nu_{\rm max}\) and consider its performance compared to the FRA pipeline. We provide an independent catalogue of \(\nu_{\rm max}\) and \(\Delta\nu\) measurements for the _Kepler_ and SCVZ TESS targets analyzed in this study2, which can be valuable for future studies. Sect. 6 is devoted to conclusions. Footnote 1: [https://tasoc.dk/](https://tasoc.dk/) Footnote 2: Results are publicly available on ADS. ## 2 The FRA pipeline to detect solar-like oscillations: measurement of \(\nu_{\rm max}\) We developed the Fast, Robust and Automated (FRA) method to measure \(\nu_{\rm max}\) based on the fit of the smoothed power spectrum, with the aim of detecting the bump of oscillations in the power spectrum associated to a Gaussian envelope. The novelty of FRA lies in its ability to assess whether or not a given star exhibits oscillations. To that matter, FRA performs a totally blind search for \(\nu_{\rm max}\), independent from the \(\Delta\nu\) measurement. We describe in the following the different steps we implemented. ### Smoothing the power spectrum The spectrum first needs to be smoothed (Fig. 1 ). To that end, 20 input values \(\nu_{\rm c}\) are tested for \(\nu_{\rm max}\), regularly spaced between 30 and 270 \(\mu\)Hz, since the Nyquist frequency of 30-mins cadence TESS data is 278 \(\mu\)Hz. For each \(\nu_{\rm c}\) value, an input large separation \(\Delta\nu_{\rm c}\) is tested, derived through the scaling relation for red giants (Mosser et al., 2010) \[\Delta\nu_{\rm c}=0.28\,\nu_{\rm c}^{0.75}. \tag{7}\] These \(\Delta\nu_{\rm c}\) values are used to derive 20 different smoothed spectra by convolving the power spectrum with a Gaussian of full width at half maximum (FWHM) (Mosser & Appourchaux, 2009) \[\delta\nu_{\rm env}=3\,\Delta\nu_{\rm c}. \tag{8}\] We highlight here that we are exploring a range of input \(\Delta\nu_{\rm c}\) values for the large separation, hence we do not need to know \(\Delta\nu\) to smooth the spectrum. The scaling relation from Eq. 7 then allows us to target a frequency range to search for \(\nu_{\rm max}\) for each smoothed spectrum. The detection of \(\nu_{\rm max}\) is thus performed without knowing \(\Delta\nu\). Border effects occur when convolving the power spectrum with a Gaussian. They prevent us from accessing the extreme frequencies and, thus, to measure very low and very high \(\nu_{\rm max}\). 
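As an illustration of the trial-smoothing grid described above, a minimal NumPy/SciPy sketch is given below; the function name is ours and a regularly sampled power spectrum is assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_candidates(freq, psd, n_trial=20, nu_lo=30.0, nu_hi=270.0):
    """Build the 20 trial smoothings of Sect. 2.1 (frequencies in micro-Hz).

    freq, psd : regularly sampled power spectrum.
    Returns a list of (nu_c, dnu_c, smoothed_psd) tuples.
    """
    dfreq = np.median(np.diff(freq))              # frequency resolution
    out = []
    for nu_c in np.linspace(nu_lo, nu_hi, n_trial):
        dnu_c = 0.28 * nu_c ** 0.75               # Eq. (7): trial large separation
        fwhm = 3.0 * dnu_c                        # Eq. (8): width of the Gaussian filter
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dfreq  # in frequency bins
        out.append((nu_c, dnu_c, gaussian_filter1d(psd, sigma)))
    return out
```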
In practice, the FRA pipeline is limited to \(\nu_{\rm max}\gtrsim 10\,\mu\)Hz. Using 30-mins cadence data, FRA is in addition limited to \(\nu_{\rm max}\lesssim 270\,\mu\)Hz, but this limitation can be overcome using higher cadence data, for example in the 2-mins cadence. We here distinguish between two slightly different configurations of the \(\nu_{\rm max}\) measurement within the FRA pipeline: * FRA1 applies to each of the 20 smoothed spectra; * FRA2 applies to a unique smoothed spectrum constructed by taking the median of the power spectrum density among all the 20 smoothed spectra. FRA2 presents the advantage to be much faster than FRA1. However, for a given \(\nu_{\rm max}\), the height of the Gaussian envelope of oscillations appears smaller with FRA2 compared to FRA1. This effect is particularly emphasized at high \(\nu_{\rm max}\), since the power spectral density at \(\nu_{\rm max}\) decreases with \(\nu_{\rm max}\). Using FRA2 thus requires data allowing the height of the Gaussian envelope to be large enough in order to ensure that oscillations associated to a high \(\nu_{\rm max}\) value are still detectable. This is the case for long observation durations such as the \(\sim\) 4 years of _Kepler_, which allow the power spectral density of oscillations to remain significantly high even at high \(\nu_{\rm max}\) (see Sect. 4.1). However, the Gaussian envelope of oscillations is too diluted at high \(\nu_{\rm max}\) to be detectable using FRA2 for TESS stars, with observation durations ranging from 1 month to 1 year, for which FRA1 is required (see Sect. 4.2). ### Local fitting of oscillations For each of the 20 individual smoothings performed in Sect. 2.1, we fit a Gaussian envelope \(G(\nu)\) for the oscillations with a local contribution for the background \(B(\nu)\), such that \[P_{\rm G}(\nu)=G(\nu)+B(\nu), \tag{9}\] with (Mosser et al., 2012) \[B(\nu)=\alpha\left(\frac{\nu}{\nu_{\rm c}}\right)^{\beta}, \tag{10}\] where \(\alpha\) and \(\beta\) are free parameters. The Gaussian \(G(\nu)\) is centered around \(\nu_{\rm c}\), and its standard deviation \(\sigma_{\rm G}\) is computed as (Mosser et al., 2010) \[\sigma_{\rm G}=\frac{\delta\nu_{\rm G}}{2\sqrt{2\ln 2}}, \tag{11}\] where \(\delta\nu_{\rm G}\) is the FWHM of the Gaussian estimated as (Mosser et al., 2010) \[\delta\nu_{\rm G}=0.59\,\nu_{\rm c}^{0.90}. \tag{12}\] The shot noise in the raw power spectrum follows a two-degree-of-freedom chi-square statistics (e.g. Garcia & Ballot, 2019). However, the smoothed power spectrum is approximately normally distributed as a consequence of the Central Limit Theorem. We therefore use Gaussian statistics for the minimization to fit the oscillations. We set bounds on the variables and use the Trust Region Reflective optimization algorithm (Branch, Coleman, & Li, 1999). The fit to the power spectrum by Eq. 9 is performed over a frequency range encompassing \(\nu_{\rm c}\)\(\pm\)1.2 \(\delta\nu_{\rm G}\), with the following boundaries for the free parameters: * the center of the Gaussian, \(\nu_{\rm c}\), is explored over the frequency range \([\nu_{\rm c}-\delta\nu_{\rm emv}+\delta\nu_{\rm G}/3\), \(\nu_{\rm c}+\delta\nu_{\rm emv}-\delta\nu_{\rm G}/3]\) with \(\delta\nu_{\rm emv}\) defined by Eq. 8 and \(\delta\nu_{\rm G}\) defined by Eq. 
12; * the amplitude of the Gaussian is explored in the range \([P_{\rm c}/15,\,15\,P_{\rm c}]\) where \(P_{\rm c}\) is the smoothed PSD at \(\nu_{\rm c}\); * the standard deviation of the Gaussian, \(\sigma_{\rm G}\), is allowed to vary by \(\pm\) 10% from Eq. 11; * the background parameter \(\alpha\) in Eq. 10 is explored between 0 and the maximum value of the smoothed PSD present in the smoothed spectrum in the frequency range where we are fitting the bump of oscillations; * the background exponent \(\beta\) in Eq. 10 is explored in the range [\(-5\), 0] (Mosser et al., 2012). Figure 1: Spectrum of the _Kepler_ red giant KIC 1027337. _Left:_ Raw spectrum. The reference \(\nu_{\rm max}\) measurement and our measurement with the FRA pipeline are represented by the vertical red and black lines, respectively, which are superimposed. _Right:_ Optimally smoothed spectrum. The orange line represents the local fit of oscillations from Eq. 9, centered around \(\nu_{\rm max}\). ### Measuring \(\nu_{\rm max}\) and associated uncertainty Our FRA pipeline then keeps the configuration corresponding to the highest relative variation between the smoothed PSD at \(\nu_{\rm max}\) and the local background at \(\nu_{\rm max}\), ensuring that the bump of oscillations has a PSD at \(\nu_{\rm max}\) significantly above the local background. This step provides a first estimate of \(\nu_{\rm max}\). We then obtain an estimate of \(\Delta\nu\) through Eq. 7, and the optimal smoothing is performed with a Gaussian filter of FWHM computed through Eq. 8. The step presented in Sect. 2.2 is then performed again, to refine the measurement of \(\nu_{\rm max}\) (Fig. 1 ). We also fit a model without oscillations, including only a local background contribution (Eq. 10). We then compare the significance of these two models with respect to the data through the likelihood ratio test (Appourchaux, Gizon, & Rabello-Soares, 1998). This hypothesis test compares the goodness-of-fit of two models to determine which offers a better fit to the data. The second model includes the first one, with some additional parameters. The null hypothesis is that the data is better fitted by the first model with fewer parameters. The likelihood ratio statistic writes \[R=-2\,\ln\,\left(\frac{\mathcal{L}_{1}}{\mathcal{L}_{2}}\right), \tag{13}\] where \(\mathcal{L}_{1}\) (resp. \(\mathcal{L}_{2}\)) is the likelihood of the model without oscillations (resp. with oscillations). The likelihood ratio approximately follows a chi-square distribution under the null hypothesis, where the number of degrees of freedom corresponds to the number of additional parameters used in the model including oscillations compared to the model without oscillations (Wilks, 1938). Since the model with oscillations includes 2 additional parameters compared to the model without oscillations, we used a chi-square statistic with 2 degrees of freedom. We then compute the p-value, which is the probability of obtaining the observed results by assuming that the null hypothesis is true. We consider that the measured \(\nu_{\rm max}\) is significant and that oscillations are detected if the p-value is below 1%. We assume that the uncertainty on \(\nu_{\rm max}\) is a fraction of the width of the envelope of the oscillations, i.e., a fraction of the FWHM of the fitted Gaussian. 
We determine the \(1-\sigma\) uncertainty on \(\nu_{\rm max}\) such that \[\sigma_{\nu_{\rm max}}=\delta\nu_{\rm G}/8. \tag{14}\] We checked that dividing the FWHM of the Gaussian by a factor of 8 results in uncertainties that are typical for TESS stars, i.e., a relative precision of \(\sim\) 5-6% (Fig. 2 ). The relative precision is improved as \(\nu_{\rm max}\) increases and, therefore, as \(\Delta\nu\) increases based on Eq. 7 (Fig. 2 ). Figure 2: Relative precision on the \(\nu_{\rm max}\) measurement (blue line) and the \(\Delta\nu\) measurement (orange line) as a function of \(\nu_{\rm max}\). The relative precision on \(\nu_{\rm max}\) has been computed using Eqs. 12 and 14. The relative precision on \(\Delta\nu\) has been computed using Eq. 23 with \(A_{\rm lim}=10\) and \(\delta\nu_{\rm H}\) estimated with Eq. 15. ## 3 Test and validation on artificial TESS oscillation spectra In order to assess the performance of our methods, we applied them to a set of artificial power spectra representative of TESS. We used the artificial lightcurves produced by Campante et al. (2018) for \(\sim\) 30 000 LLRGB stars in 30-mins cadence and introduced gaps every 13.7 days to be consistent with TESS observations. We then derived artificial power spectra through a Lomb-Scargle periodogram, with an oversampling factor of 10 to match the power spectra analysed by Mackereth et al. (2021) and to maximise the detectability of oscillations. We selected the artificial power spectra which have the same number of observed TESS sectors, i.e. in the range [2, 13], as the sample of TESS stars analysed by Mackereth et al. (2021). The selected sample includes a much higher number of targets with a low number of observed sectors compared to TESS stars from Mackereth et al. (2021) (Fig. 3 ). We additionally selected the artificial power spectra corresponding to stellar G magnitudes below 11, as in the sample analysed by Mackereth et al. (2021), since we do not expect to reliably detect oscillations for stars with larger stellar magnitudes. The selected sample has a G magnitude in the range [7, 11] (Fig. 3 ). In total, we used a set of 254 artificial power spectra, of which 192 (75.6 %) have injected oscillations and 62 (24.4 %) have oscillations with \(\nu_{\rm max}\) above the Nyquist frequency, and are thus considered as not having injected oscillations. ### Measurement of \(\nu_{\rm max}\) We used the FRA1 method to analyse artificial power spectra representative of TESS red giants, as described in Sect. 2. We detect oscillations and measure \(\nu_{\rm max}\) for 88 stars (Fig. 4 ). These include 5 stars which have no injected oscillations, representing a false positive detection rate of 8.1 % (among the 62 stars with no injected oscillations, Table 1 ). We also detect oscillations and derive a \(\nu_{\rm max}\) measurement for 83 stars with injected oscillations, representing a detection rate of 43.2 % (among the 192 stars with injected oscillations). For 19 stars with injected oscillations, we derived a \(\nu_{\rm max}\) value with a relative deviation of at least 10% compared to the input value, giving a consistency rate of 77.1 %. We discuss in Sect. 3.2 the reasons for this rather high false positive rate and low consistency rate, and we demonstrate in Sect. 3.3 that we obtain a 100 % consistency rate and a false positive detection rate of
Red dots correspond to stars for which the relative deviation between measurements is at least 10%. The red line represents a 1:1 comparison while the red dashed lines correspond to a deviation of \(\pm\) 10 % from the 1:1 comparison. The horizontal dashed lines indicate an input \(\nu_{\rm max}\) value of 0 \(\mu\)Hz, meaning that no oscillations were injected in the power spectrum. _Right:_ same as the left panel, but for \(N_{\rm sectors}>3\) and ensuring that the associated \(\Delta\nu\) measurement is consistent for G > 9.5. Figure 3: Characteristics of the sample of 254 artificial red giants analyzed in this study. _Left:_ Distribution of the number of observed TESS sectors. Vertical dashed lines indicate sector numbers between 2 and 13. _Left:_ Distribution of the G magnitude. 0 % by simply adding an extra validation step when analyzing stars with a G magnitude above 9.5 (Table 1 ). We obtain a median relative precision of 6.2 % and a median computation time spent for each star of 1.7 s. Impact of the stellar magnitude and the number of TESS sectors on the detectability and consistency of \(\nu_{\rm max}\) We here discuss the impact of the stellar magnitude and the number of observed TESS sectors on the detectability of \(\nu_{\rm max}\) and as well as on the consistency of our measurements. We consider that inconsistent measurements for \(\nu_{\rm max}\) are associated to a relative deviation of at least 10% compared to the input values. To that end, we computed the fraction of stars with inconsistent measurements falling in each stellar magnitude bin (resp. having a given number of observed TESS sectors). We then compared this distribution to the one obtained for stars for which we get consistent measurements in order to highlight any trend difference between these two populations. We find that the stellar magnitude and the number of observed TESS sectors impact the consistency, the detectability and the false detection rate of \(\nu_{\rm max}\) (Fig. 5 ). We note that the proportions of stars with inconsistent measurements, of non-oscillating stars with false positive measurements, and of oscillating stars with no measurement, exceed 10 % for \(N_{\rm sectors}\leq 3\) and increase with stellar magnitude. We note that the proportions of stars with inconsistent \(\nu_{\rm max}\) measurements and of non-oscillating stars with false positive \(\nu_{\rm max}\) measurements exceed 10 % for a stellar magnitude above 9, while the proportion of oscillating stars with no \(\nu_{\rm max}\) measurement exceeds 10 % for a stellar magnitude above 9. So far, Mackereth et al. (2021) only assessed the yield of seismic detection as a function of stellar magnitude and of the number of TESS sectors, while not distinguishing between the detection yield obtained from the \(\nu_{\rm max}\) measurement and the detection yield obtained from the \(\Delta\nu\) measurement. Moreover, \begin{table} \begin{tabular}{c c c c c c} \hline \hline Parameter & Detection & Consistency rate & Median relative & Median time & False positive \\ & rate & with input values & precision & per star & rate \\ \hline \(\nu_{\rm max}\) & 43.2 \% & 77.1 \% & 6.2 \% & 1.7 s & 8.1 \% \\ \(\nu_{\rm max}\) extra step for mag \(>\) 9.5 & 28.0 \% & 100 \% & – & 11.4 s & 0 \% \\ \hline \end{tabular} \end{table} Table 1: Characteristics of the \(\nu_{\rm max}\) measurement for 254 synthetic power spectra representative of TESS red giants with a G magnitude below 11. 
Figure 5: Proportion of artificial red giants as a function of the number of TESS sectors (left panel) and the mean stellar magnitude (right panel). The blue curve corresponds to stars for which we derive inconsistent \(\nu_{\rm max}\) measurements, the orange curve to non-oscillating stars for which we obtain false positive \(\nu_{\rm max}\) measurements, and the green curve to oscillating stars for which we do not derive a \(\nu_{\rm max}\) measurement. Horizontal dashed lines indicate a proportion of stars of 0 %, while the horizontal continuous line represents a proportion of stars of 10 %. Mackereth et al. (2021) did not apply their pipeline to artificial power spectra. They thus did not address the false positive detection rate, neither as the consistency rate of their measurements. They also did not address the impact of the number of TESS sectors and of the stellar magnitude on these rates. ### Optimized \(\nu_{\rm max}\) measurement Based on the above, we selected only the artificial and observed power spectra associated to a number of observed TESS sectors \(N_{\rm{sectors}}>3\). Additionally, we added another criterium to validate the \(\nu_{\rm max}\) values measured for a stellar magnitude above 9.5: our \(\nu_{\rm max}\) measurement for a given star is validated only if we also obtain a \(\Delta\nu\) measurement, and this \(\Delta\nu\) value has to be close to the \(\Delta\nu\) estimated using \(\nu_{\rm max}\) from Eq. 25, i.e. the relative deviation between these two \(\Delta\nu\) values has to be below 10 %. Otherwise, the \(\nu_{\rm max}\) we measure is not validated, and we consider that we do not have a \(\nu_{\rm max}\) detection. Our implementation of the EACF method to measure \(\Delta\nu\) is described in Sect. 5.1. We now obtain a lower detection rate of 28.0 % (i.e. for 14 stars out of 50 with injected oscillations), which is counterbalanced by a much higher consistency rate of 100 % and a much lower false positive detection rate of 0 % (Table 1 and right panel of Fig. 4 ). Hence, we infer that our pipeline provides consistent measurements while minimizing the false positive measurements and the non-detections for \(\nu_{\rm max}\), provided that the number of observed TESS sectors is \(N_{\rm{sectors}}>3\). The median computation time per run is 11.4 s, which is much longer since we are now left with targets with higher number of observed TESS sectors. ## 4 Results: Analysing _Kepler_ and TESS Red Giants We tested our FRA pipeline to measure \(\nu_{\rm max}\) for 3933 red giant stars observed by the space missions _Kepler_ and TESS. ### Analysis of 1589 _Kepler_ red giants We selected a sample of 1589 _Kepler_ stars with 4 year-long observations, without oversampling, belonging to the sample which Gehan, Mosser, Michel, Samadi, & Kallinger (2018) and Gehan, Mosser, Michel, & Cunha (2021) analysed to measure the mean core rotation rate and the inclination angle. We compare our results for these _Kepler_ red giants with \(\nu_{\rm max}\) and \(\Delta\nu\) values measured using the COR method (Mosser & Appourchaux, 2009). We used the FRA2 method to analyse _Kepler_ red giants, as described in Sect. 2. We were able to derive a \(\nu_{\rm max}\) measurement for all the 1589 Kepler red giants analysed in this study (Table 2 ). We have a median relative precision of 4.6 %, which is better than for artificial TESS targets (Table 1 ) because the _Kepler_ stars analyzed have higher \(\nu_{\rm max}\) values, which are more precise (Fig. 2 ). 
The computation time spent for each star ranges between 1.9 s and 8.3 s. This variation in the computation time comes from the fact that the number of local maxima to test in the oscillation spectrum to fit the oscillations envelope is star-dependent. The median computation time per run is 3.4 s, slightly longer compared to TESS artificial targets. Indeed, although TESS artificial targets have much shorter observation durations ranging between 1 month and 1 year, their power spectrum is Figure 6: Comparison between our \(\nu_{\rm max}\) measurements and existing ones for _Kepler_ stars. The color code is the same as in Fig. 4. The horizontal dashed line indicates \(\nu_{\rm max}=205\)\(\mu\)Hz. Dotted lines and dot-dashed lines represent \(\nu_{\rm max}=0\)\(\mu\)Hz and the Nyquist frequency of 283 \(\mu\)Hz, respectively. oversampled with a factor of 10, increasing the computation time. For 8 stars, the relative deviation between \(\nu_{\rm max}\) obtained through the EACF method and our measurements is of at least 10% (Fig. 6 ). We note that the EACF fails to propose a value for \(\nu_{\rm max}\) for 7 of these stars, appearing in the figure as having \(\nu_{\rm max}=0\)\(\mu\)Hz. The EACF gives a \(\nu_{\rm max}\) value above the Nyquist frequency (for the 30-mins cadence used) for the remaining star. Our FRA pipeline successfully measures \(\nu_{\rm max}\) in these 8 cases (see APPENDIX A:). We note that the \(\nu_{\rm max}\) measurements obtained through the EACF method tend to be underestimated above 205 \(\mu\)Hz when compared to our \(\nu_{\rm max}\) values derived through the FRA pipeline, however the relative deviation remains below 10%. This deviation seems to be due to a sharper cutoff of the edges of the spectrum when smoothing the spectrum with the EACF method compared to our method, Indeed, our \(\nu_{\rm max}\) actually targets the maximum of the PSD within the Gaussian envelope of oscillations, which is not the case for \(\nu_{\rm max}\) measured with the EACF method (see APPENDIX B:). ### Analysis of 2344 TESS red giants We also considered a sample of 2344 Southern Continuous Viewing Zone (SCVZ) TESS red giants for which Mackereth et al. (2021) indicated that COR, A2Z and BHM obtain consistent results accross the three methods using 30-mins cadence TESS data. Their power spectra are obtained using an oversampling factor of 10. This TESS sample has between 2 and 13 TESS sectors-long lightcurves and a G magnitude below 11 in order to select only stars bright enough to maximise the probability of oscillations being detectable (Fig. 7 ). We compare our results for these TESS red giants with \(\nu_{\rm max}\) and \(\Delta\nu\) values published in Mackereth et al. (2021), which result from the mean of the values obtained by applying the COR (Mosser & Appourchaux, 2009), A2Z (Mathur et al., 2010) and BHM (Elsworth et al., 2020) pipelines. We used the FRA1 method to analyse TESS red giants, as described in Sect. 2. We checked that the FRA1 method maximises the consistent detection of \(\nu_{\rm max}\) for TESS stars. We derived a \(\nu_{\rm max}\) measurement for 1824 stars, i.e. 77.8 % of the analysed sample (upper left panel of Fig. 8 ). The majority of the 520 stars for which we did not detect oscillations based on the measurement of \(\nu_{\rm max}\) have low reference \(\nu_{\rm max}\) measurements, below 17 \(\mu\)Hz (upper right panel of Fig. 8 ). 
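Throughout this work, measurements are compared to reference or input values through the same 10% relative-deviation criterion. As an indication of how the summary statistics of Tables 1 and 2 can be obtained, here is a minimal sketch; the array names are hypothetical, non-detections are assumed to be encoded as NaN, and a reference value of 0 marks a spectrum with no injected oscillations.

```python
import numpy as np

def summary_rates(nu_max_measured, nu_max_reference, rel_threshold=0.10):
    """Detection, consistency and false-positive rates based on the 10%
    relative-deviation criterion."""
    measured = np.asarray(nu_max_measured, dtype=float)
    reference = np.asarray(nu_max_reference, dtype=float)

    has_oscillations = reference > 0          # 0 means no injected oscillations
    detected = ~np.isnan(measured)            # NaN means no detection

    detection_rate = np.mean(detected[has_oscillations])
    false_positive_rate = (np.mean(detected[~has_oscillations])
                           if (~has_oscillations).any() else 0.0)

    both = has_oscillations & detected
    rel_dev = np.abs(measured[both] - reference[both]) / reference[both]
    consistency_rate = np.mean(rel_dev < rel_threshold)
    return detection_rate, consistency_rate, false_positive_rate
```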
We additionally note that the minimum \(\nu_{\rm max}\) we detect is 13 \(\mu\)Hz, while the minimum \(\nu_{\rm max}\) measured by Mackereth et al. (2021) is 7 \(\mu\)Hz. Indeed, as stated in Sect. 2.1, we cannot use the extreme frequency edges of the smoothed spectrum that are affected by border effects, which makes it not possible to measure low \(\nu_{\rm max}\) with our method. For 50 stars, the relative deviation between \(\nu_{\rm max}\) from Mackereth et al. (2021) and our measurements is of at least 10%, among which 40 have \(\nu_{\rm max}<20\)\(\mu\)Hz as measured by Mackereth et al. (2021) (upper left panel of Fig. 8 ). Since our method cannot measure low \(\nu_{\rm max}\) in many cases, it is much more probable to obtain inaccurate measurements for low \(\nu_{\rm max}\) values, mainly when the signal is noisy and mimicks bumps in the power spectrum that are compatible with a fit with Eq. 9 including oscillations (see Sect. 4.4). We provide such examples in APPENDIX C:. We obtain a consistency rate of 97.3 % with existing measurements (Table 2 ). It is not surprising that we obtain a higher consistency rate compared to artificial stars whithout requiring an extra validation for stellar magnitudes G \(>\) 9.5, since the TESS red giants we analyzed here are bona fide oscillating stars, for which oscillations have been detected Figure 7: Same Figure as Fig. 3 for 2344 SCVZ red giants from Mackereth et al. (2021). through three different methods, which is not the case of the artificial sample analyzed. We have a median relative precision of 5.3 % (Table 2 ). This is worse compared to _Kepler_ red giants, which is consistent with the overall lower \(\nu_{\rm max}\) of the analysed TESS sample (Fig. 2 ). The median relative precision is however better than \begin{table} \begin{tabular}{c c c c} \hline \hline Sample & Consistency rate with & Median relative & Median time \\ & existing measurement & precision & per star \\ \hline _Kepler_ & 99.5 \% & 4.6 \% & 3.4 s \\ \hline TESS & 97.3 \% & 5.3 \% & 11.6 s \\ TESS extra step for mag \(>\) 9.5 & 99.1 \% & – & 12.3 s \\ \hline \hline \end{tabular} Our measurements are considered as consistent when the relative deviation with the existing ones is below 10%. \end{table} Table 2: Characteristics of the \(\nu_{\rm max}\) measurement for the 1589 _Kepler_ and 2344 TESS stars analysed. Figure 8: Characteristics of the \(\nu_{\rm max}\) measurement for TESS stars. _Upper left:_ Same as the left panel of Fig. 4, but for TESS red giants. _Upper right:_ Same as the right panel of Fig. 4, but for TESS red giants. _Upper right:_\(\nu_{\rm max}\) histogram of the non-detections in the case of the upper left panel. The vertical red line represents \(\nu_{\rm max}=17\;\mu\)Hz. _Bottom:_ Computation time spent for each star as a function of the number of observed TESS sectors in the case of the upper left panel. The blue dots represent the median computation time. Horizontal lines indicate times with 2 s spacings. what we obtain for artificial stars (Table 1 ), because we inconsistently retrieve many low \(\nu_{\rm max}\) values in the case of artificial stars, which are less precise (Fig. 2 ). The computation time spent for each star is between 1.3 s and 29.9 s, with a median time of 11.6 s per run. The computation time is highly dependent on the number of observed TESS sectors (bottom panel of Fig. 8 ). 
The median computation time is significantly higher for TESS red giants compared to _Kepler_ ones with 4-year long datasets (Table 2 ); this comes from the fact that TESS power spectra were obtained with an oversampling factor of 10, while no oversampling was applied on _Kepler_ lightcurves. The median computation time is also much longer compared to artificial TESS targets (Table 1 ), since there is a greater proportion of real TESS stars with a higher number of observed TESS sectors. Impact of the stellar magnitude and the number of TESS sectors on the consistency of \(\nu_{\rm max}\) We led the same analysis as for the artificial sample in Sect. 3.2 to assess the impact of the stellar magnitude and the number of observed TESS sectors on the consistency of our \(\nu_{\rm max}\) measurements compared either to Mackereth et al. (2021) measurements. As for the artificial targets analyzed in this study, we find that both the stellar magnitude and the number of observed TESS sectors \(N_{\rm sectors}\) impact the consistency of our \(\nu_{\rm max}\) measurements for the TESS stars analyzed by Mackereth et al. (2021) (Fig. 9 ). Indeed, we note that the proportion of stars with inconsistent measurements is above the proportion of stars with consistent measurements for \(N_{\rm sectors}\leq 3\), representing 42 % of the stars with inconsistent measurements. We note that 6 % (resp. 10 %) of the stars with inconsistent measurements have \(N_{\rm sectors}\) = 5 (resp. \(N_{\rm sectors}\) = 7), and this proportion is higher than for stars with consistent measurements. We checked that there are only 3 stars (resp. 5 stars) with inconsistent measurements and \(N_{\rm sectors}\) = 5 (resp. \(N_{\rm sectors}\) = 7), which all have stellar magnitudes G > 9.5, except one star with \(N_{\rm sectors}\) = 5 and G = 8.85. We additionally note that the proportion of stars with inconsistent measurements is above the proportion of stars with consistent measurements for a magnitude \(G\geq 10\), representing 60 % of stars with inconsistent measurements. These trends are not surprising since a low number of TESS sectors and/or a large stellar magnitude result in a lower signal-to-noise ratio in the power spectrum, making more challenging to unambiguously identify the bump of oscillations. ### Optimized \(\nu_{\rm max}\) measurement As for artificial targets (Sect. 3.3), we optimized the measurement of \(\nu_{\rm max}\) for TESS stars. We did not make a cut for \(N_{\rm sectors}>3\) as for the artificial sample, since these are bona fide oscillating stars, for which \(\nu_{\rm max}\) has been consistently detected through three different methods. We only applied the same extra validation step as for the artificial sample for G magnitudes above 9.5. Similarly as for the artificial sample, we now obtain a \(\nu_{\rm max}\) detection for only 45.1 % of the sample (i.e. for 1055 stars out of 2344, against 77.8 % initially), together with a higher consistency rate of 99.1 % (Table 2 and upper right panel of Fig. 8 ). We provide examples for 5 of the 9 stars Figure 9: Proportion of TESS red giants for which we derive consistent \(\nu_{\rm max}\) measurements (blue curve) and inconsistent measurements (red curve) as a function of the number of TESS sectors (left panel) and the mean stellar magnitude (right panel). Horizontal dashed lines indicate a proportion of stars of 0 %. for which we still measure an inconsistent \(\nu_{\rm max}\) in APPENDIX D:. 
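A minimal sketch of this extra validation step, as applied to stars with a G magnitude above 9.5, is given below; the function and variable names are hypothetical, and the expected \(\Delta\nu\) is obtained by inverting the scaling relation of Eq. 25.

```python
def validate_numax(nu_max, delta_nu_measured, g_mag, rel_threshold=0.10):
    """Extra validation step for stars with G > 9.5: the measured nu_max (muHz)
    is kept only if an independent Delta_nu measurement exists and agrees,
    to within 10%, with the Delta_nu expected from nu_max through Eq. 25."""
    if g_mag <= 9.5:
        return True                           # no extra check for brighter stars
    if delta_nu_measured is None:
        return False                          # no Delta_nu detection: nu_max rejected
    delta_nu_expected = 0.28 * nu_max ** 0.75   # inverse of Eq. 25 (Mosser et al. 2010)
    rel_dev = abs(delta_nu_measured - delta_nu_expected) / delta_nu_expected
    return rel_dev < rel_threshold
```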
The median computation time per run is 12.3 s, which is slightly longer since this extra validation step favors stars with a larger number of observed TESS sectors for G \(>\) 9.5. ## 5 Measurement of \(\Delta\nu\) We implemented our own version of the EACF method to measure \(\Delta\nu\), based on the Fourier transform of the raw power spectrum windowed by a Hanning filter (Mosser & Appourchaux, 2009). The EACF methods presents the advantage of not requiring the knowledge of \(\nu_{\rm max}\), allowing us to derive \(\Delta\nu\) measurements that are completely independent from the \(\nu_{\rm max}\) ones derived with our FRA pipeline. ### The EACF method We describe in the following the different steps we implemented, which include some adjustments compared to the procedure of Mosser & Appourchaux (2009) in order to optimize the EACF method for red giant stars. #### 5.1.1 Windowing the spectrum by a Hanning filter We first select a portion of the raw power spectrum using a Hanning filter. The EACF signal is the cleanest and the highest when the windowed spectrum is centered around \(\nu_{\rm max}\). In order to perform a blind search, independent from the measured \(\nu_{\rm max}\) value, we test 18 different frequencies \(\nu_{\rm c}\) to center the Hanning filter, which are regularly spaced between 10 and 270 \(\mu\)Hz. The FWHM of the Hanning filter is set to (Mosser & Appourchaux, 2009) \[\delta\nu_{\rm H}=\alpha\,\gamma\,\Delta\nu_{\rm c}, \tag{15}\] with \(\alpha\) = 1.05 (Mosser & Appourchaux, 2009) and, for red giants (Mosser et al., 2010), \[\gamma=2.08\,\nu_{\rm c}^{0.15}. \tag{16}\] In cases where \(\delta\nu_{\rm H}\) is too large and one edge of the Hanning filter falls outside the observed frequency range, \(\delta\nu_{\rm H}\) in Eq. 15 is reduced accordingly. The resulting windowed spectrum is then zero-padded at high frequencies so that the resolution of the EACF signal is high enough to give a clean signal. We chose to set the number of data points in the zero-padded spectrum to \[N_{\rm H}=10\,N, \tag{17}\] where \(N\) is the number of data points initially existing in the windowed spectrum. #### 5.1.2 Measuring \(\Delta\nu\) The next step consists in performing the Fourier transform of the zero-padded windowed spectrum. The signature of the large separation corresponds to the first peak in the autocorrelation signal (Fig. 10 ). For each of the 18 zero-padded spectra windowed around a given \(\nu_{\rm c}\) value, an input large separation \(\Delta\nu_{\rm c}\) is tested, derived through Eq. 7. In the Fourier space, we explore possible values for \(\Delta\nu\) in the range \(\left[\Delta\nu_{\rm c}/G,\ G\Delta\nu_{\rm c}\right]\), with \(G=1.1\). For a given Figure 10: Application of the EACF method to the _Kepler_ red giant KIC 1027337. _Left:_ Optimal local EACF signal as a function of the time lag in the autocorrelation space. _Right:_ EACF signals for the different \(\Delta\nu\) measured for each tested interval. The reference \(\Delta\nu\) measurement and our final measurement are represented by the vertical red and black lines, respectively, which are superimposed. The horizontal dashed line represents the limit above which the null hypothesis is rejected to the 0.4% level. value of the large separation, the corresponding time shift in the autocorrelation space is \[\tau_{\Delta\nu}=\frac{2}{\Delta\nu}. \tag{18}\] Given Eq. 
18, we explore the range \(\left[\frac{2}{G\Delta\nu_{\mathrm{s}}},\frac{2G}{\Delta\nu_{\mathrm{s}}}\right]\) in the autocorrelation space to search for \(\Delta\nu\) (Fig. 10 ). We then normalize the autocorrelation signal \(C(\tau)\) such that \[A^{\star}=\frac{\left|C(\tau)^{2}\right|}{\left|C(0)^{2}\right|}. \tag{19}\] The EACF signal \(A\) is finally obtained by normalizing the autocorrelation signal to the mean noise level in the autocorrelation, \(\sigma_{\mathrm{H}}\), in order to accurately compare the strength of the EACF signal for each windowed spectrum (Fig. 10 ), such that \[A=\frac{A^{\star}}{\sigma_{\mathrm{H}}}, \tag{20}\] with \[\sigma_{\mathrm{H}}=\frac{3}{2(\frac{N_{\mathrm{H}}}{N_{\mathrm{GS}}}-1)}, \tag{21}\] where \(N_{\mathrm{OS}}\) is the oversampling factor, which equates to \(2\ N_{\mathrm{f}}/N_{\mathrm{t}}\), with \(N_{\mathrm{f}}\) the number of data points in the power spectrum and \(N_{\mathrm{t}}\) the number of data points in the time series. The final \(\Delta\nu\) value corresponds to the \(\Delta\nu\) value with highest EACF signal (Fig. 10 ). The measurement of \(\Delta\nu\) is considered as significant if its associated EACF signal is above or equal to a threshold value, \(A_{\mathrm{lim}}\), defined below. Otherwise, no detection of \(\Delta\nu\) is achieved. #### 5.1.3 Reliability of the \(\Delta\nu\) measurement For each of the 18 zero-padded spectra, the \(\Delta\nu\) measurement is validated if the null hypothesis, i.e. that the detected signal can be explained by noise only, is rejected to a low level \(p\). Eqs. (10) and (11) of Mosser & Appourchaux (2009) define the threshold value above which the EACF signal rejects the null hypothesis to a given level \(p\), as a function of the number of independent bins in the time interval over which the large separation is searched for. In the case of a zero-padded spectrum, Gabriel et al. (2002) show that the number of independent bins has to be multiplied by a correction factor that depends on the padding factor. In this study, we are using a padding factor of 10 (Eq. 17), which corresponds to a correction factor equal to 3 (Gabriel et al., 2002). We thus revisit Eqs. (10) and (11) of Mosser & Appourchaux (2009) in the case of a zero-padded Figure 11: Same as Fig. 4, but for \(\Delta\nu\). _Left:_ Comparison for \(\Delta\nu\) measured through a blind search. The horizontal dashed lines indicate input \(\Delta\nu\) values of 0 \(\mu\)Hz, meaning that no oscillations were injected in the oscillation spectrum. _Bottom:_ Comparison for \(\Delta\nu\) measured through a guided search using \(\nu_{\mathrm{max}}\) as an input parameter. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Parameter & Detection & Consistency rate & Median relative & Median time & False positive \\ & rate & with input values & precision & per star & rate \\ \hline \(\Delta\nu\) blind & 18.5 \% & 89.6 \% & 2.7 \% & 2.3 s & 1.6 \% \\ \(\Delta\nu\) guided & 18.1 \% & 97.8 \% & 2.8 \% & 1.5 s & 0.0 \% \\ \hline \end{tabular} \end{table} Table 3: Same as Table 1, but for \(\Delta\nu\). spectrum such as \[A_{\rm lim}\simeq-\ln p+\ln\Big{(}3\,\frac{\Delta\tau}{\delta\tau}\Big{)}, \tag{22}\] where \(\Delta\tau=2/\Delta v\) is the time interval over which the large separation is searched for, and \(\delta\tau\) is the FWHM of the autocorrelation peak. We chose to set \(p\) to 0.4%, giving \(A_{\rm lim}=10\) (Fig. 10 ). We checked that we obtain the same threshold value based on Eq. (12) of Gabriel et al. (2002). 
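To make the above steps concrete, a minimal Python sketch of the EACF core for a single window position \(\nu_{\rm c}\) is given below. The arrays `freq` and `psd` are assumed to hold the frequency grid (in \(\mu\)Hz) and the raw PSD, `oversampling` is the spectral oversampling factor entering Eq. 21, and the expected large separation at \(\nu_{\rm c}\) (Eq. 7) is assumed to follow the inverse of the scaling relation of Eq. 25; the treatment of the window edges and other refinements described above are simplified.

```python
import numpy as np

def eacf_delta_nu(freq, psd, nu_c, oversampling, G=1.1, pad_factor=10):
    """EACF estimate of Delta_nu from the raw spectrum windowed by a Hanning
    filter centred on nu_c; returns (Delta_nu, EACF signal at the peak)."""
    delta_nu_c = 0.28 * nu_c ** 0.75                    # expected Delta_nu at nu_c (Eq. 7)
    fwhm = 1.05 * 2.08 * nu_c ** 0.15 * delta_nu_c      # Hanning FWHM (Eqs. 15-16)
    fwhm = min(fwhm, nu_c - freq[0], freq[-1] - nu_c)   # keep the filter inside the spectrum

    sel = np.abs(freq - nu_c) <= fwhm                   # portion covered by the filter
    windowed = psd[sel] * 0.5 * (1.0 + np.cos(np.pi * (freq[sel] - nu_c) / fwhm))

    N = windowed.size
    C = np.fft.rfft(windowed, n=pad_factor * N)         # zero-padded Fourier transform (Eq. 17)
    A = np.abs(C) ** 2 / np.abs(C[0]) ** 2              # Eq. 19
    A /= 3.0 / (2.0 * (pad_factor * N / oversampling - 1.0))   # noise normalisation (Eqs. 20-21)

    tau = np.fft.rfftfreq(pad_factor * N, d=freq[1] - freq[0])
    sel_tau = (tau >= 2.0 / (G * delta_nu_c)) & (tau <= 2.0 * G / delta_nu_c)   # Eq. 18
    k = np.argmax(A[sel_tau])
    return 2.0 / tau[sel_tau][k], A[sel_tau][k]

# Blind search: 18 regularly spaced window centres; the configuration with the
# strongest EACF signal is kept.
best_delta_nu, best_A = max((eacf_delta_nu(freq, psd, nu_c, oversampling)
                             for nu_c in np.linspace(10.0, 270.0, 18)),
                            key=lambda r: r[1])
```

The measurement would then be kept only if the returned EACF signal reaches the threshold \(A_{\rm lim}=10\) given by Eq. 22 with \(p=0.4\%\).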
#### 5.1.4 Precision on the \(\Delta\nu\) measurement The precision on \(\Delta\nu\) is derived through (Mosser & Apporchaux, 2009) \[\frac{\delta\Delta\nu}{\Delta\nu}=\frac{\beta}{2\pi}\frac{b}{A_{\rm lim}} \frac{\Delta\nu}{\delta\nu_{\rm H}}, \tag{23}\] where \(\beta\simeq 0.763\) (Mosser & Appourchaux, 2009) and the noise contribution is set to \(b=A_{\rm lim}\). The relative precision increases with \(\Delta\nu\) and \(\nu_{\rm max}\) (Fig. 2 ), as we can infer from Eqs. 15 and 16 that \[\frac{\delta\Delta\nu}{\Delta\nu}\propto\nu_{\rm max}^{-0.15}. \tag{24}\] ### Test and validation on artificial TESS oscillation spectra We applied our implementation of the EACF method to the sample of artificial stars analyzed in Sect. 3. #### 5.2.1 Blind search When measuring \(\Delta\nu\) with a blind search, we detect oscillations for 48 stars (left panel of Fig. 11 ). These include 1 star which have no injected oscillations, representing a false positive detection rate of 1.6 % (Table 3 ). We also derive a \(\Delta\nu\) measurement for 47 stars with injected oscillations, representing a detection rate of 18.5 %. 5 stars with injected oscillations have a relative deviation of at least 10% compared to the input value, giving a consistency rate of 89.6 %. We obtain a median relative precision of 2.7 % and a median computation time spent for each star of 2.3 s. #### 5.2.2 Guided search Given the rather low consistency rate obtained with a blind search, we also analysed TESS artificial red giants with a guided search that uses the measured \(\nu_{\rm max}\) as an input parameter to center the Hanning filter windowing the spectrum. We detect oscillations for 46 stars, all having injected oscillations (right panel of Fig. 11 ), representing a false positive detection rate of 0 % and a a detection rate of 18.1 % (Table 3 ). Among them, 1 has a relative deviation of at least 10% compared to the input value, giving a consistency rate of 97.8 %. We obtain a median relative precision of 2.8 % (Table 3 ), which is similar to the median relative precision obtained for \(\Delta\nu\) measured through a blind search. Relying on the measured \(\nu_{\rm max}\) to derive \(\Delta\nu\) appears to provide much more consistent results than a blind search for \(\Delta\nu\). Given the high consistency rate of our measured \(\nu_{\rm max}\) compared to reference measurements, this is a sensible approach. 2.3 Impact of the stellar magnitude and the number of TESS sectors on the detectability and consistency of \(\Delta\nu\) We note in Fig. 12 that both the proportion of stars with inconsistent measurements and of non-oscillating stars with false positive measurements exceeds 10 % for \(N_{\rm sectors}=2\), while the proportion of oscillating stars with no measurement Figure 12: Same as Fig. 5, but for \(\Delta\nu\) measured through a blind search. exceeds 10 % for \(N_{\rm{sectors}}\leq 3\). Additionally, we note that both the proportion of stars with inconsistent \(\Delta\nu\) measurements and of oscillating stars with no \(\Delta\nu\) measurement exceeds 10 % for a stellar magnitude above 9, while the proportion of non-oscillating stars with false positive \(\Delta\nu\) measurements exceeds 10 % for a stellar magnitude above 10. This is globally consistent with what we obtain for the \(\nu_{\rm{max}}\) measurement in Sect. 3. ### Analysis of 1589 _Kepler_ red giants We analysed _Kepler_ red giants with our own implementation of the EACF method (Mosser & Appourchaux, 2009) presented in Sect. 5.1. 
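Since the noise contribution is set to \(b=A_{\rm lim}\), the \(A_{\rm lim}\) factors in Eq. 23 cancel out and the relative precision only depends on \(\delta\nu_{\rm H}\); a minimal sketch is given below (hypothetical function name, with \(\delta\nu_{\rm H}\) taken from Eqs. 15 and 16 and \(\Delta\nu_{\rm c}\approx\Delta\nu\)).

```python
import numpy as np

def delta_nu_relative_precision(delta_nu, nu_max):
    """Relative precision on Delta_nu from Eq. 23 with b = A_lim."""
    fwhm_hanning = 1.05 * 2.08 * nu_max ** 0.15 * delta_nu   # Eqs. 15-16
    # Note: delta_nu cancels out, leaving the nu_max^(-0.15) dependence of Eq. 24.
    return (0.763 / (2.0 * np.pi)) * delta_nu / fwhm_hanning

# For instance, at nu_max ~ 100 muHz this gives ~2.8 %, in line with the
# median relative precisions on Delta_nu quoted in this section.
```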
We obtained a \(\Delta\nu\) measurement for all _Kepler_ stars analysed in this study (Table 4 ). We have a median relative precision of 2.7 %, as for TESS artificial targets (Table 3 ). The computation time spent for each star is between 1.1 s and 7.2 s, with a median time of 2.6 s per run, which is similar to what we obtain for TESS artificial targets (Table 3 ). For 10 stars, the relative deviation between our \(\Delta\nu\) values and the existing ones is of at least 10% (Fig. 13 ). In order to discriminate between the two measurements in these cases, we computed the \(\nu_{\rm{max}}\) estimate derived from the reference \(\Delta\nu\) measurement using the scaling relation (Mosser et al., 2010) \[\nu_{\rm{max}}=\left(\frac{\Delta\nu}{0.28}\right)^{4/3}. \tag{25}\] We then compared through a visual inspection this estimate with ours, which corresponds to the center of the Hanning filter used to window the spectrum (Sect. 5.1.1). We find that our \(\nu_{\rm{max}}\) estimates are consistent with the observed bump of oscillations for these 10 stars, contrary to the \(\nu_{\rm{max}}\) estimates obtained from Eq. 25 using the reference \(\Delta\nu\) measurements (see APPENDIX E:). This indicates that our \(\Delta\nu\) measurement is self-consistent. ### Analysis of 2344 TESS red giants We analysed TESS red giants with our own implementation of the EACF method (Mosser & Appourchaux, 2009) presented in Sect. 5.1. #### 5.4.1 Blind search We derived a \(\Delta\nu\) measurement through a blind search for 1223 stars, i.e. 52.2 % of the analyzed sample (upper left panel of Fig. 14 ). The majority of the 1121 stars for which we did not detect oscillations based on the measurement of \(\Delta\nu\) have low reference \(\Delta\nu\) measurements, below 5.5 \(\mu\)Hz with a peak around 4 \(\mu\)Hz (upper right panel of Fig. 14 ). We additionally note that the minimum \(\Delta\nu\) we detect is 1.484 \(\mu\)Hz, while the minimum \(\Delta\nu\) measured by Mackereth et al. (2021) is 0.890 \(\mu\)Hz. We discuss the difficulty we encounter in measuring low \(\Delta\nu\) values in Sect. 5.5. For 150 stars, the relative deviation between \(\Delta\nu\) from Mackereth et al. (2021) and our measurements is of at least 10% (upper left panel of Fig. 14 ). We provide examples in APPENDIX F:. This gives a consistency rate of 87.7 % (Table 4 ). This is lower than for TESS artificial stars (Table 3 ) because inconsistent measurements for real TESS stars occur for \(\Delta\nu\leq 5.5\)\(\mu\)Hz, while the artificial sample have \(\Delta\nu\geq 10\)\(\mu\)Hz. The vast majority of these discrepant stars are characterised by a greatly overestimated \(\Delta\nu\) using our method. In general, \(\Delta\nu<5.5\)\(\mu\)Hz as measured by Mackereth et al. (2021) for these stars, of which 120 (i.e 80 %) have \(\Delta\nu\leq 4\)\(\mu\)Hz. We do not encounter such disagreement with regard to our _Kepler_ sample because the observation duration is much longer, thus the EACF signal is much higher. In the case of TESS stars with much shorter observation durations, the signal-to-noise ratio is lower even when using oversampling, and this significantly impacts the quality of the autocorrelation signal. The maximal EACF signal then actually rejects the null hypothesis in some cases, leading to spurious detections when carrying a blind search for \(\Delta\nu\) with the EACF method (see APPENDIX F:). 
These incorrect detections are more frequent for low \(\Delta\nu\) because such low values are complicated to detect with the EACF method, as we already mentioned. In many of these cases, the reference \(\Delta\nu\) measurement from Mackereth et al. (2021) does not even lead to a corresponding acceptable EACF signal because the Hanning filter is too narrow and the peak in the autocorrelation appears too close to the edges of the scanned autocorrelation space to be considered as relevant (see APPENDIX F:). In such cases, the EACF signal corresponding to the accurate \(\Delta\nu\) is not kept and thus cannot overpass the maximal EACF signal for another \(\Delta\nu\) value. We have a median relative precision of 3 % (Table 4). This is slightly worse than for _Kepler_ red giants and artificial TESS targets (Table 3). This is consistent with the overall lower \(\Delta\nu\) values of the analysed TESS sample, which are less precise (Fig. 2). The median computation time spent for each star is 2.3 s, which is higher to what we obtain for artificial TESS targets (Table 3) as there is a lower number of stars in our real TESS sample with a low number of observed TESS sectors. The computation time spent for each star is between 0.9 s and 14.2 s, with a median time of 3.7 s per run. The computation time is highly dependent on the number of observed TESS sectors (bottom panel of Fig. 14). As for the \(\nu_{\rm max}\) measurement, the median computation time is higher for TESS red giants compared to _Kepler_ ones with 4-year long datasets (Table 4), due to the oversampling factor applied to obtain TESS power spectra. It is also slightly higher compared to artificial TESS targets, as there is a greater proportion of longer numbers of sectors among the real TESS red giants. #### Guided search using \(\nu_{\rm max}\) When measuring \(\Delta\nu\) through a guided search using \(\nu_{\rm max}\) as an input parameter, we derived a \(\Delta\nu\) measurement for 1725 stars, i.e. 73.6 % of the analyzed sample (upper left panel of Fig. 15 ). As for the blinded search, the majority of the 619 \begin{table} \begin{tabular}{c c c c} \hline \hline Sample & Consistency rate with & Median relative & Median time \\ & existing measurement & precision & per star \\ \hline _Kepler_ & 99.4 \% & 2.7 \% & 2.6 s \\ \hline TESS blind & 87.7 \% & 3.0 \% & 3.7 s \\ TESS guided & 99.4 \% & 3.2 \% & 1.3 s \\ \hline \end{tabular} \end{table} Table 4: Characteristics of the \(\Delta\nu\) measurement for the 1589 _Kepler_ and 2344 TESS stars analysed. Figure 15: Same as Fig. 8, but for \(\Delta\nu\) measured through a guided search using the measured \(\nu_{\rm max}\) as an input parameter. _Upper right:_ Vertical red lines represent \(\Delta\nu\) = 4 \(\mu\)Hz and \(\Delta\nu\) = 5 \(\mu\)Hz. stars for which we did not detect oscillations based on the measurement of \(\Delta\nu\) have low reference \(\Delta\nu\) measurements, below 5 \(\mu\)Hz with a peak around 4 \(\mu\)Hz (upper right panel of Fig. 15). These non-detections are discussed in Section 5.5. We have a median relative precision of 3.2 % (Table 4). This is slightly lower than for the blind search as we could retrieve a larger number of low \(\Delta\nu\) values, which are more challenging to measure and are therefore less precise (Fig. 2). The computation time spent for each star is between 0.6 s and 14.8 s, with a median time of 1.3 s per run (Table 4). This is similar to what we obtain for artificial TESS targets (Table 3). 
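In practice, the guided search amounts to a single call to the EACF routine sketched in Sect. 5.1, with the Hanning filter centred on the measured \(\nu_{\rm max}\) instead of scanning 18 window positions; a minimal illustration with hypothetical variable names follows.

```python
# Single window centred on the measured nu_max (guided search).
delta_nu_guided, A_guided = eacf_delta_nu(freq, psd, nu_c=nu_max_measured,
                                          oversampling=oversampling)
if A_guided < 10.0:            # A_lim = 10 (Eq. 22 with p = 0.4 %)
    delta_nu_guided = None     # no significant Delta_nu detection
```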
The median computation time is no longer significantly dependent on the number of observed TESS sectors (bottom panel of Fig. 15). Indeed, our guided search uses only one position of the Hanning filter using \(\nu_{\rm max}\) as an input parameter, so the search for \(\Delta\nu\) is performed only for one targeted portion of the power spectrum instead of 18 different portions as with the blind search, minimising the significant increase in the number of data points in the spectrum as a result of the larger number of observed TESS sectors. We now have only 10 stars for which the relative deviation between \(\Delta\nu\) from Mackereth et al. (2021) and our measurements is of at least 10% (upper left panel of Fig. 15). This gives a consistency rate of 99.4 % (Table 4). This consistency rate is higher than for TESS artificial stars for the same reason than for \(\nu_{\rm max}\). ### Impact of the stellar magnitude and the number of TESS sectors on the consistency of \(\Delta\nu\) Contrary to what we observe for \(\nu_{\rm max}\) measured for TESS red giants, we do not see such clear impact of the number of observed TESS sectors and the stellar magnitude on the consistency of our \(\Delta\nu\) measurements obtained through a blind search for the TESS stars analyzed by Mackereth et al. (2021) (Fig. 16). Indeed, the proportion of stars with inconsistent measurements appears to be above the proportion of stars with consistent measurements at intermediate \(N_{\rm sectors}\), between 4 and 10, and at intermediate stellar magnitude, between 7 and 9. The consistency of our \(\Delta\nu\) measurements obtained through a blind search mostly depends on \(\Delta\nu\) itself. Indeed, as stated earlier almost all our inconsistent measurements correspond to \(\Delta\nu<5.5\)\(\mu\)Hz, of which 80 % have \(\Delta\nu\leq 4\)\(\mu\)Hz (upper left panel of Fig. 14). This is strikingly similar to the distribution of \(\Delta\nu\) for stars for which we do not have a \(\Delta\nu\) detection (upper right panel of Fig. 14). Hence, both the non-detections of low \(\Delta\nu\) values and the inconsistent measurements of many low \(\Delta\nu\) values have a similar cause, which has to do with the strength of the EACF signal. We thus tested the strength of the EACF signal against the tested \(\Delta\nu\) values by applying the EACF method to 500 artificial power spectra made of pure white noise. For each \(\Delta\nu\) tested, we then compared the median value of the maximum EACF signal computed accross the 500 power spectra. We found that the EACF signal tends to be particularly low for low \(\Delta\nu\) values, significantly decreasing around \(\Delta\nu=5.5\)\(\mu\)Hz and reaching a minimum close to \(\Delta\nu=4\)\(\mu\)Hz (upper left panel of Fig. 17). This is completely consistent with the \(\Delta\nu\) values for which we do not find a detection and for which we obtain inconsistent measurements. This observed trend of the EACF signal being slightly higher for higher \(\Delta\nu\) values leads to spurious detections in some cases, in particular for values around \(\Delta\nu=4\)\(\mu\)Hz for which the EACF signal is particularly low. We explored the reasons behind this trend. The higher the tested \(\Delta\nu\) value, the smaller the lower limit of the \(\tau\) interval used to look for \(\Delta\nu\) in the autocorrelation space (upper right panel of Fig. 17 ). Hence, we are exploring an interval closer to the main lobe as \(\Delta v\) increases, leading to a slightly higher EACF signal. 
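The noise-only experiment described above can be reproduced with a short Monte Carlo; the sketch below assumes the hypothetical `eacf_delta_nu` routine of Sect. 5.1, an assumed frequency resolution, and chi-square distributed (exponential) white-noise PSD values.

```python
import numpy as np

rng = np.random.default_rng(0)
freq = np.arange(1.0, 283.0, 0.01)            # assumed grid up to the 30-min Nyquist frequency
centres = np.linspace(10.0, 270.0, 18)        # tested window positions / Delta_nu values

median_max_eacf = []
for nu_c in centres:
    max_signal = [eacf_delta_nu(freq, rng.exponential(1.0, size=freq.size),
                                nu_c, oversampling=1.0)[1]
                  for _ in range(500)]        # 500 pure white-noise spectra
    median_max_eacf.append(np.median(max_signal))
# `median_max_eacf`, as a function of the Delta_nu expected at each window centre,
# reproduces the trend shown in the upper left panel of Fig. 17.
```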
This trend is even more pronounced for \(\Delta v>14.6\,\mu\)Hz. Indeed, the center frequency of the Hanning filter becomes too close to the Nyquist frequency to allow us to use the optimal width for the Hanning filter, forcing us to reduce this width (lower panel of Fig. 17 ). However, the smaller the FWHM of the Hanning filter, the larger the main lobe in the autocorrelation space. We are thus testing a \(\tau\) interval which lower limit becomes particularly close to the edge of the main lobe for \(\Delta v>14.6\,\mu\)Hz (upper right panel of Fig. 17 ). We note that such cases do not lead to too many spurious detections since in many cases, the maximum of the EACF signal appears too close to the lower edge of the tested \(\tau\) interval to allow this configuration to be considered as valid (see some examples in APPENDIX F:). We do not encounter this problem for _Kepler_ targets for two reasons. First, there are no \(\Delta v<6\)\(\mu\)Hz in the sample we analyzed (Fig. 13 ). Second, the increasing EACF signal with increasing \(\Delta v\) should not be a problem when the signal-to-noise ratio is high, as it is the case for _Kepler_ targets with 4-year long observations, since the EACF signal is then high enough for low \(\Delta v\) values to deliver a consistent measurement. However, this observed trend between the EACF signal and \(\Delta v\) becomes a limiting factor for noisier data, making it challenging to derive consistent measurements when \(\Delta v\leq 5.5\)\(\mu\)Hz. Since our \(\nu_{\rm max}\) measurement is not sensitive to noise, the EACF signal is not sensitive to noise. Figure 17: _Upper left:_ Median value of the maximum EACF signal computed using 500 white noise power spectra, as a function of the tested \(\Delta v\) value. Vertical dashed lines represent \(\Delta v=4\,\mu\)Hz and \(\Delta v=5.5\,\mu\)Hz, while the vertical continuous line represent \(\Delta v=14.6\,\mu\)Hz. _Upper right:_ EACF signal of a white noise power spectrum for 5 tested \(\Delta v\) values, as a function of the time lag in the autocorrelation space. Vertical dashed lines represent the lower limit of the \(\tau\) interval used to look for \(\Delta v\) in the autocorrelation space, with same color code as the tested \(\Delta v\) values. The dashed lines for \(\Delta v=17.0\,\mu\)Hz and \(\Delta v=18.7\)\(\mu\)Hz are almost superimposed. _Bottom:_ FWHM of the Hanning filter used to search for \(\Delta v\) as a function of \(\Delta v\). The blue curve corresponds to a width proportional to \(\nu_{\rm max}\) (Eq. 15) while the orange curve corresponds to the actual width we use in this study. measurements proved to be highly consistent for TESS targets (Table 2), using \(\nu_{\rm max}\) as an input parameter to guide the search for \(\Delta\nu\) represents a viable alternative for these power spectra and largely reduces the spurious detections for stars with low \(\Delta\nu\) values (Table 4). ## 6 Conclusions We developed a new pipeline to detect solar-like oscillations, which we named FRA, based on the detection and the measurement of \(\nu_{\rm max}\) through a local fit of the envelope of oscillations. FRA is entirely automated, fast (few seconds) and relies on statistical criteria to assess the presence of oscillations. It operates blindly, without any needed a priori information on the presence and the location of oscillations. It can detect solar-like oscillations and provide \(\nu_{\rm max}\) measurements for \(\nu_{\rm max}\gtrsim 10\)\(\mu\)Hz. 
We also used the Envelope AutoCorrelation Function (EACF) method (Mosser & Appourchaux, 2009) to measure \(\Delta\nu\) in addition to \(\nu_{\rm max}\), since both parameters are crucial to derive precise and accurate stellar masses and radii. We applied our pipeline to a set of 1589 red giants observed by _Kepler_ with 4-year-long lightcurves (Gehan et al., 2021, 2018), as well as a set of 2344 TESS red giants having between 2 and 13 observed sectors (Mackereth et al., 2021). All these are bona fide stars, for which the presence of oscillations is already established. We obtain consistent \(\nu_{\rm max}\) and \(\Delta\nu\) measurements for all _Kepler_ stars. For TESS stars, we obtain consistent \(\nu_{\rm max}\) measurements in more than 97 % of the cases, and consistent \(\Delta\nu\) measurements in almost 88 % of the cases. The inconsistent \(\nu_{\rm max}\) measurements we obtain mostly correspond to a low number of observed TESS sectors, \(N_{\rm sectors}\leq 3\), and/or a large G magnitude, above 10. Regarding \(\Delta\nu\), we mostly get inconsistent measurements for low values, i.e. \(\Delta\nu\leq 5.5\)\(\mu\)Hz, independently of the stellar magnitude and the number of observed TESS sectors. We tested our implementation of the EACF method on artificial power spectra made of pure white noise and found that the strength of the EACF signal tends to increase with the tested \(\Delta\nu\) value, resulting in many spurious measurements for low \(\Delta\nu\) values. This behaviour is significant for TESS targets, which have much shorter observation durations than _Kepler_ and, therefore, lower signal-to-noise ratios. We could overcome this limitation by using the measured \(\nu_{\rm max}\) as an input parameter to guide the search for \(\Delta\nu\), which leads to a consistency rate above 99 % with existing \(\Delta\nu\) measurements for TESS stars. Given the high consistency of our \(\nu_{\rm max}\) measurements, this approach appears sensible to optimize the \(\Delta\nu\) measurement for TESS targets. We additionally analyzed a set of 254 artificial power spectra representative of TESS red giants, of which 76 % have injected oscillations and 24 % have no injected oscillations. Analyzing this artificial data set provides limits on the stellar magnitude and on the number of observed TESS sectors required to obtain consistent measurements, to maximise the detectability of oscillations and to minimise the false positive detections. Our analysis reveals that we can expect to get consistent \(\nu_{\rm max}\) and \(\Delta\nu\) measurements, while minimizing both the false positive measurements and the non-detections, for a number of observed TESS sectors \(N_{\rm sectors}>3\). This is in agreement with the limit we obtained to derive consistent \(\nu_{\rm max}\) measurements for the TESS targets from Mackereth et al. (2021). For a G magnitude above 9.5, one extra step has to be performed to discard spurious \(\nu_{\rm max}\) measurements by assessing that the obtained \(\nu_{\rm max}\) and \(\Delta\nu\) are consistent with each other. ## Acknowledgments CG thanks B. Mosser and J. T. Mackereth for providing the power spectra of the _Kepler_ and the TESS red giants analyzed in this study.
The authors acknowledge support from FCT/MCTES through the research grants UIDB/04434/2020, UIDP/04434/2020 and PTDC/FIS-AST/30389/2017, and from FEDER - Fundo Europeu de Desenvolvimento Regional through COMPETE2020 - Programa Operacional Competitividade e Internacionalização (grant: POCI-01-0145-FEDER-030389). CG was also supported by the Max Planck Society (Max Planck Gesellschaft) grant "Preparations for PLATO Science" M.FE.A.Aero 0011. MSC and TLC are supported by national funds through FCT in the form of work contracts (CEECIND/02619/2017 and CEECIND/00476/2018, respectively).
2307.11885
A determinantal point process approach to scaling and local limits of random Young tableaux
We obtain scaling and local limit results for large random Young tableaux of fixed shape $\lambda^0$ via the asymptotic analysis of a determinantal point process due to Gorin and Rahman (2019). More precisely, we prove: (1) an explicit description of the limiting surface of a uniform random Young tableau of shape $\lambda^0$, based on solving a complex-valued polynomial equation; (2) a simple criteria to determine if the limiting surface is continuous in the whole domain; (3) and a local limit result in the bulk of a random Poissonized Young tableau of shape $\lambda^0$. Our results have several consequences, for instance: they lead to explicit formulas for the limiting surface of $L$-shaped tableaux, generalizing the results of Pittel and Romik (2007) for rectangular shapes; they imply that the limiting surface for $L$-shaped tableaux is discontinuous for almost-every $L$-shape; and they give a new one-parameter family of infinite random Young tableaux, constructed from the so-called random infinite bead process.
Jacopo Borga, Cédric Boutillier, Valentin Féray, Pierre-Loïc Méliot
2023-07-21T20:00:51Z
http://arxiv.org/abs/2307.11885v2
# A determinantal point process approach to scaling and local limits of random Young tableaux ###### Abstract. We obtain scaling and local limit results for large random Young tableaux of fixed shape \(\lambda^{0}\) via the asymptotic analysis of a determinantal point process due to Gorin and Rahman (2019). More precisely, we prove: * an explicit description of the limiting surface of a uniform random Young tableau of shape \(\lambda^{0}\), based on solving a complex-valued polynomial equation; * a simple criteria to determine if the limiting surface is continuous in the whole domain; * and a local limit result in the bulk of a random Poissonized Young tableau of shape \(\lambda^{0}\). Our results have several consequences, for instance: they lead to explicit formulas for the limiting surface of \(L\)-shaped tableaux, generalizing the results of Pittel and Romik (2007) for rectangular shapes; they imply that the limiting surface for \(L\)-shaped tableaux is discontinuous for almost-every \(L\)-shape; and they give a new one-parameter family of infinite random Young tableaux, constructed from the so-called _random infinite bead process_. ###### Contents * 1 Introduction * 1.1 Overview * 1.2 Young tableaux, Poissonized Young tableaux and bead configurations * 1.3 The limiting height function and the limiting surface * 1.4 Applications for simple limit shapes * 1.5 Local limit results * 1.6 The complex Burger's equation and a PDE for the limiting height function * 1.7 Methods * 1.8 Future work * 1.9 Outline of the paper * 2 Preliminaries * 2.1 Gorin-Rahman's determinantal formula * 2.2 A topology for bead processes * 3 Identifying the critical points of the action * 3.1 The kernel of the renormalized bead process * 3.2 Asymptotic analysis of the integrand * 3.3 The critical points of the action and the shape of the liquid region * 3.4 The imaginary and real part of the action on the real line * 4 Asymptotics for the kernel in the liquid region * 4.1 Landscape of the action * 4.2 Moving contours and asymptotic of the kernel * 4.3 Recovering the bead kernel * 5 Asymptotics for the kernel in the frozen region * 5.1 The small \(t\) region * 5.2 The large \(t\) region * 5.3 The intermediate \(t\) region * 6 The limiting height function and the continuity of the limiting surface * 7 Applications * 8 Local limits for random Young tableaux * 8.1 Local topology for standard Young tableaux * 8.2 Local convergence for Poissonized Young tableaux ## 1. Introduction ### Overview Random Young diagrams form a classical theme in probability theory, starting with the work of Logan-Shepp and Vershik-Kerov on the Plancherel measure [14, 15], motivated by Ulam's problem on the typical length of the longest increasing subsequence in a uniform random permutation. The topic also has connections with random matrix theory and particle systems, and has known an increase of interest after the discovery of an underlying determinantal point process for a Poissonized version of the Plancherel measure [13]. It would be vain to do a complete review of the related literature, we only refer to [12, 11] for books on the topic. We also refer the reader to Section 1.2 for precise definitions of the objects mentioned in this section. In comparison, the random Young tableaux, which are in essence dynamic versions of random diagrams, have a shorter history. 
Motivations to study random Young tableaux range from asymptotic representation theory to connections with other models of combinatorial probability, such as random permutations with short monotone subsequences [12] or most notably random sorting networks; see [1] and many later papers. As in most of the literature, we are interested in the simple model where we fix a shape \(\lambda^{0}\) (or rather a sequence of growing shapes) and consider a uniform random tableau \(T\) of shape \(\lambda^{0}\). In [12], Pittel and Romik derived a limiting surface result for uniform random Young tableaux of rectangular shapes, based on the hook formula [11] and counting arguments. An earlier result of Biane in asymptotic representation theory [16] implies in fact, the existence of such limiting surfaces for any underlying shape. However, unlike in the Pittel-Romik paper, explicit computations are usually intractable: they involve the Markov-Krein correspondence and the free compression of probability measures, the latter being rarely an explicit computation. More recently, entropy optimization methods have been applied to prove the existence of limiting surfaces, extending the result to skew shapes [13]. These techniques lead to some natural gradient variational problems in \(\mathbb{R}^{2}\) whose solutions are explicitly parameterized by \(\kappa\)-harmonic functions, as show in [10]. Recently, in [11], a determinantal point process structure was discovered for a Poissonized version of random Young tableaux. This determinantal structure was used for a specific problem motivated by the aforementioned sorting networks, namely describing the local limit of uniform tableaux of staircase shape around their outer diagonal [11, 1]. The goal of the current paper is to exploit this determinantal point process structure in order to get limiting results for a large family of shapes. Namely, we consider shapes obtained as dilatations of any given Young diagram \(\lambda^{0}\). Here is an informal description of our results. * We obtain a new description of the limiting surface corresponding to the shape \(\lambda^{0}\), based on solving a complex-valued polynomial equation (Theorem 4 and Eqs. (13) and (14)). This new description is more explicit compared to the one obtained through the existence approaches. * expressed in terms of some equations involving the so-called _interlacing coordinates_ of \(\lambda^{0}\) - to determine if the limiting surface is continuous (Theorem 7). This shows that such limiting surfaces are typically discontinuous. * We obtain a local limit result in the bulk of a random Young tableau (Theorem 15). The limit is a new model of infinite random tableaux (Definition 16), constructed from the so-called _random infinite bead process_ (Definition 13) introduced by the second author in [12]. Interestingly (but somehow expectedly by analogy with results on lozenge tilings [11, 1]), this local limit depends on the underlying shape \(\lambda\) and on the chosen position in \(\lambda\) only through a single parameter \(\beta\in(-1,1)\). ### Young tableaux, Poissonized Young tableaux and bead configurations Let us start by fixing terminology and notation. An _integer partition_ of \(n\), or _partition_ of \(n\) for short, is a non-increasing list \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{l})\) of positive integers with \(n=\sum_{i=1}^{l}\lambda_{i}\). 
We write \(|\lambda|=n\) for the _size_ of the partition and \(\ell(\lambda)=l\) for the _length_ of the partition, and use the convention \(\lambda_{i}=0\) when \(i>\ell(\lambda)\). We will represent partitions graphically with the _Russian convention_, i.e. for each \(i\leq\ell(\lambda)\) and \(j\leq\lambda_{i}\) we have a square box whose sides are parallel to the diagonals \(x=y\) and \(x=-y\) and whose center has coordinates \((j-i,i+j-1)\); see the left-hand side of Fig. 1. We call this graphical representation the _Young diagram_ of shape \(\lambda\). When looking at a Young diagram \(\lambda\), its upper boundary is the graph of a \(1\)-Lipschitz function, denoted by \(\omega_{\lambda}:\mathbb{R}\to\mathbb{R}\), and the diagram \(\lambda\) can be encoded using the local minima and maxima of the function \(\omega_{\lambda}\). Following Kerov [10], we denote them by \[a_{0}<b_{1}<a_{1}<b_{2}<\cdots<b_{m}<a_{m},\qquad a_{i},b_{i}\in\mathbb{Z}, \tag{1}\] and we call them _interlacing coordinates_. See the right-hand side of Fig. 1 for an example. Note that \(a_{0}=-\ell(\lambda)\) and \(a_{m}=\lambda_{1}\). Furthermore, interlacing coordinates satisfy \[\sum_{i=0}^{m}a_{i}=\sum_{i=1}^{m}b_{i}, \tag{2}\] see, e.g., [10, Proposition 2.4]. A _Young tableau_ of shape \(\lambda\) is a filling of the boxes of \(\lambda\) with the numbers \(1,2,\ldots,n\) such that the numbers along every row and column are increasing. We encode a Young tableau as a function \(T:\lambda\to[n]=\{1,2,\ldots,n\}\), where the Young diagram \(\lambda\) is identified with the set \(\{(j-i,i+j-1),i\leq\ell(\lambda),j\leq\lambda_{i}\}\); see again the right-hand side of Fig. 1 for an example. The function \(T:\lambda\to[n]\) can be thought of as the graph of a (non-continuous) surface above that set. Note that, given a Young diagram \(\lambda\), there are finitely many Young tableaux of shape \(\lambda\), so it makes sense to consider a _uniform random Young tableau_ of shape \(\lambda\). In this paper, following [10], we also consider _Poissonized Young tableaux_ of shape \(\lambda\), which are functions \(T:\lambda\to[0,1]\) with distinct real values that are increasing along rows and columns. See the left-hand side of Fig. 2 for an example. Note that for any fixed \(\lambda\), the admissible functions \(T:\lambda\to[0,1]\) form a subset of \([0,1]^{\lambda}\) of positive (and finite) Lebesgue measure, so it makes sense to consider a _uniform random Poissonized Young tableau_ of shape \(\lambda\). A _bead configuration_ is a collection of points (called _beads_) positioned on parallel vertical threads which is locally finite and satisfy an interlacing relation on the vertical positions of the beads: for every pair of consecutive beads on a thread, on each of its neighboring threads there is exactly one bead whose vertical position is between them. See the right-hand side of Fig. 2 for an Figure 1. **Left:** The Young diagram of the partition \((4,4,3,2)\) drawn in Russian convention, with the coordinates of each box inside it. **Right:** A Young tableau \(T:\lambda\to[n]\) of shape \(\lambda\) corresponding to the partition \((6,6,6,4,4,4,3,3)\) drawn with Russian convention; all the boxes are squares with area \(2\). We indicate the interlacing coordinates \(a_{0}<b_{1}<a_{1}<b_{2}<\cdots<b_{m}<a_{m}\) below the \(x\)-axis. example. In the present paper, we will consider both finite and infinite bead configurations, i.e. configurations containing finitely or infinitely many beads. 
In the finite case, a bead configuration is a finite subset of \(A\times[0,1]\) where \(A\) is a finite sub-interval of \(\mathbb{Z}\), while in the infinite case a bead configuration is a locally finite subset of \(\mathbb{Z}\times\mathbb{R}\). Given a Poissonized Young tableau \(T:\lambda\to[0,1]\) of shape \(\lambda\), we associate a finite bead configuration \(M_{\lambda,T}\) in \(([-\ell(\lambda),\lambda_{1}]\cap\mathbb{Z})\times[0,1]=([a_{0},a_{m}]\cap \mathbb{Z})\times[0,1]\) defined by \[M_{\lambda,T}=\Set{(i,T(i,j))\mid(i,j)\in\lambda}. \tag{3}\] An example is given in Fig. 2. Note that the monotonicity condition of Poissonized Young tableaux implies that points on neighboring vertical lines are interlacing, i.e. satisfy the bead configuration constraints. We also introduce the _height function_\(H_{\lambda,T}:([a_{0},a_{m}]\cap\mathbb{Z})\times[0,1]\to\mathbb{Z}_{\geq 0}\), defined by \[H_{\lambda,T}(x,t)=\#\left(M_{\lambda,T}\cap(\{x\}\times[0,t])\right),\quad \text{for all }(x,t)\in([a_{0},a_{m}]\cap\mathbb{Z})\times[0,1], \tag{4}\] i.e. \(H_{\lambda,T}(x,t)\) is the number of beads on the thread \(x\) below height \(t\). We note that \(H_{\lambda,T}\) is non-decreasing in \(t\) and has the following boundary values: \[\begin{cases}H_{\lambda,T}(a_{0},t)=H_{\lambda,T}(x,0)=H_{\lambda,T}(a_{m},t)= 0,&\quad\text{for all }t\in[0,1],\\ H_{\lambda,T}(x,1)=\frac{1}{2}\left(\omega_{\lambda}(x)-|x|\right),&\quad\text {for all }x\in[a_{0},a_{m}]\cap\mathbb{Z}.\end{cases} \tag{5}\] Clearly, the bead configuration \(M_{\lambda,T}\) is entirely determined by the height function \(H_{\lambda,T}\). Moreover, we have that \[T(x,y)<t\quad\text{if and only if}\quad H(x,t)>\tfrac{1}{2}(y-|x|). \tag{6}\] Fixing a Young diagram \(\lambda\) and taking a uniform random (Poissonized) Young tableau \(T\) of shape \(\lambda\) gives a random bead configuration \(M_{\lambda,T}\) and a random height function \(H_{\lambda,T}\), often simply denoted by \(M_{\lambda}\) and \(H_{\lambda}\). **Remark 1**.: _We note that the notion of Poissonized Young tableaux had appeared in disguise in earlier work than that of Gorin and Rahman. Indeed, given a finite partially ordered set (or poset) \(P\), it is standard to consider its order polytope, i.e. the subset of \([0,1]^{P}\) satisfying order constraints given by the poset (\(i\leq_{P}j\Rightarrow x_{i}\leq x_{j}\)). Then the volume of this order polytope is known to be proportional to the number of linear extensions [10]._ _Now, a Young diagram \(\lambda\) can be seen as a partially ordered set, where the elements are the cells and the order is given by coordinate-wise comparison. Then the linear extensions are standard Young tableaux, and Poissonized Young tableaux are points of the order polytopes. This has been used for counting (skew) Young tableaux in [10, 1] and for the analysis of random tableaux in [11, 1], where the name "continuous Young diagram" is used._ Figure 2. **Left:** A Poissonized Young tableau \(T\) of shape \(\lambda=(5,3,1,1)\). **Right:** The corresponding bead configuration \(M_{\lambda,T}\). To illustrate the definition of the height function, we have indicated the values \(H_{\lambda,T}(0,.6)\) and \(H_{\lambda,T}(1,.6)\) and circled the corresponding beads. ### The limiting height function and the limiting surface We fix a Young diagram \(\lambda^{0}\) which will determine the shape of our growing sequence of diagrams. 
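Before turning to the scaling limit, here is a small Python sketch (again ours and purely illustrative) of the correspondence (3)-(4) between a Poissonized tableau and its bead configuration and height function. The tableau is stored as a dictionary mapping the Russian coordinates \((j-i,i+j-1)\) of the boxes to values in \([0,1]\); the tiny shape and the numerical values used in the example are made up.

```python
def bead_configuration(tableau):
    """Bead configuration M_{lambda,T} of Eq. (3): each box (x, y) of lambda
    (in Russian coordinates) contributes the bead (x, T(x, y))."""
    return {(x, t) for (x, y), t in tableau.items()}

def height(beads, x, t):
    """Height function H_{lambda,T}(x, t) of Eq. (4): the number of beads on
    thread x with vertical position in [0, t]."""
    return sum(1 for (x0, t0) in beads if x0 == x and t0 <= t)

# A made-up Poissonized tableau of shape lambda = (2, 1): the boxes (1,1), (1,2), (2,1)
# have Russian coordinates (0,1), (1,2), (-1,2), with values increasing along rows and columns.
T = {(0, 1): 0.12, (1, 2): 0.47, (-1, 2): 0.83}
M = bead_configuration(T)
print(sorted(M))           # [(-1, 0.83), (0, 0.12), (1, 0.47)]
print(height(M, 0, 0.6))   # 1: the unique bead on thread 0 lies below height 0.6
```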
For an integer \(n>0\), we define \(N=N(n,\lambda^{0})=n^{2}|\lambda^{0}|\) and consider the \((n\times n)\)-dilated diagram \(\lambda_{N}\) obtained by replacing each box of \(\lambda^{0}\) by a square of \(n\times n\) boxes. Note that \(\lambda_{N}\) has size \(N\) and has the same (dilated) shape as \(\lambda^{0}\). We set \(\eta=1/\sqrt{|\lambda^{0}|}\) and consider the interval \([-\eta\,\ell(\lambda^{0}),\eta\,\lambda_{1}^{0}]\stackrel{{(1)}}{ {=}}[\eta\,a_{0},\eta\,a_{m}]\). Informally, the interval \([\eta\,a_{0},\eta\,a_{m}]\) is the projection on the \(x\)-axis of the Russian representation of \(\lambda^{0}\), scaled in both directions by a factor \(\eta\) in order to have total area \(2\). In particular, given a Young tableau of shape \(\lambda_{N}\), the corresponding bead configuration \(M_{\lambda_{N}}\) is supported on the set \(([n\,a_{0},n\,a_{m}]\cap\mathbb{Z})\times[0,1]\). The following convergence result for the height function of \(M_{\lambda_{N}}\) is proved in [18, Theorem 7.15] in the case of uniform random Poissonized Young tableaux. It also follows indirectly from concentration results on random Young diagrams by Biane [1], as made explicit recently by Sniady and Maslanka [17, Proposition 10.1] (the latter considers standard tableaux and not Poissonized tableaux, but it is simple to see that this has no influence on the next statement). **Theorem 2** ([18, Theorem 7.15] and [17, Proposition 10.1]).: _Let \(\lambda^{0}\) be a fixed Young diagram and let \(T_{N}\) be a uniform random (Poissonized) Young tableau of shape \(\lambda_{N}\). With the above notation, there exists a deterministic height function \(H^{\infty}:[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\to\mathbb{R}\) such that the following convergence in probability holds:_ \[\frac{1}{\sqrt{N}}\,H_{\lambda_{N},T_{N}}(\lfloor x\sqrt{N}\rfloor,t) \xrightarrow[N\to+\infty]{}H^{\infty}(x,t), \tag{7}\] _uniformly for all \((x,t)\) in \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\)._ In [18], the limiting function \(H^{\infty}\) is implicitly found as the unique maximizer of a certain entropy functional subject to some boundary conditions depending on the diagram \(\lambda^{0}\). Using the approach of [1, 17], for each \(t\in[0,1]\), the section \(H^{\infty}(\cdot,t)\) is described via the free cumulants of an associated measure. Both descriptions are hard to manipulate. Our first result gives an alternative and more explicit description of \(H^{\infty}\) via the solution of a polynomial equation, called the _critical equation_. #### 1.3.1. Critical equations, liquid regions and limiting height functions for bead processes Let \(a_{0}<b_{1}<a_{1}<b_{2}<\cdots<a_{m}\) be the interlacing coordinates of \(\lambda^{0}\), introduced in (1). For \((x,t)\) in \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\), we consider the following polynomial equation, referred to throughout the paper as the _critical equation_:1 Footnote 1: This terminology is justified by the results at the beginning of Section 3.3. \[U\,\prod_{i=1}^{m}(x-\eta\,b_{i}+U)=(1-t)\,\prod_{i=0}^{m}(x-\eta\,a_{i}+U). \tag{8}\] This is a polynomial equation in the complex variable \(U\) of degree \(m+1\). Using the fact that the \(a_{i}\)'s and \(b_{i}\)'s are alternating, one can easily prove that (8) has at least \(m-1\) real solutions; see Lemma 24 below. Hence it has either \(0\) or \(2\) non-real solutions. 
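Since (8) is a polynomial equation of degree \(m+1\) in \(U\), its roots are easy to compute numerically. The following Python sketch (ours; the numerical tolerance is arbitrary) builds the polynomial with NumPy, returns its roots, and reports whether a given \((x,t)\) admits two non-real solutions, i.e. belongs to the liquid region introduced in Definition 3 below.

```python
import numpy as np

def critical_equation_roots(x, t, a, b, eta):
    """Roots of the critical equation (8) at (x, t), where a and b are the
    interlacing coordinates of lambda^0 and eta = 1 / sqrt(|lambda^0|)."""
    lhs = np.poly1d([1.0, 0.0])                     # the factor U
    for bi in b:
        lhs = lhs * np.poly1d([1.0, x - eta * bi])  # (U + x - eta * b_i)
    rhs = np.poly1d([1.0 - t])
    for ai in a:
        rhs = rhs * np.poly1d([1.0, x - eta * ai])  # (U + x - eta * a_i)
    return (lhs - rhs).roots

def in_liquid_region(x, t, a, b, eta, tol=1e-9):
    roots = critical_equation_roots(x, t, a, b, eta)
    return int(np.sum(np.abs(roots.imag) > tol)) == 2

# Sanity check on the 1 x 1 square diagram (a = (-1, 1), b = (0,), eta = 1):
# at x = 0, t = 1/2 the equation reduces to U^2 = -1, so the roots are +i and -i.
print(critical_equation_roots(0.0, 0.5, [-1, 1], [0], 1.0))   # approximately [ 1j, -1j ]
print(in_liquid_region(0.0, 0.5, [-1, 1], [0], 1.0))          # True
```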
**Definition 3** (Liquid region).: _We let \(L\) be the set of pairs \((x,t)\) in \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) such that (8) has two non-real solutions and we call it liquid region. The complement of the liquid region in \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) will be referred to as the frozen region.2_ Footnote 2: The terminology _liquid and frozen region_ is standard in the dimer literature. See for instance Theorem 15 for a justification. Several equivalent descriptions of the liquid region are given in Proposition 27. For instance, we will show that \(L\) is an open subset of \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) and give an explicit parametrization of its boundary, i.e. the so-called _frozen boundary curve_. For \((x,t)\in L\), we denote by \(U_{c}=U_{c}(x,t)\) the unique solution with a positive imaginary part of the critical equation (8) and we define \[\alpha(x,t):=\frac{\mathcal{I}U_{c}}{(1-t)}\qquad\text{and}\qquad\beta(x,t):= \frac{\mathfrak{R}U_{c}}{|U_{c}|}, \tag{9}\] where \(\mathfrak{I}\) and \(\mathfrak{R}\) denote the imaginary and real part of a complex number. For \((x,t)\notin L\), we set \(\alpha(x,t):=0\), and we leave \(\beta(x,t)\) undefined. In particular, note that this defines a function \(\alpha(x,t)\) for all \((x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). It turns out that the limiting height function \(H^{\infty}\) is directly related to the function \(\alpha\) via the following simple formula. **Theorem 4**.: _With the above notation, the limiting height function \(H^{\infty}\) from Theorem 2 takes the form_ \[H^{\infty}(x,t)=\frac{1}{\pi}\int_{0}^{t}\alpha(x,s)\,\mathrm{d}s\,,\quad\text {for all }(x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1].\] Informally, the asymptotic density of beads at position \((\lfloor x\sqrt{N}\rfloor,t)\) in \(M_{\lambda_{N}}\) is \(\sqrt{N}\,\alpha(x,t)\). In particular, the liquid region coincides with the region where there is a macroscopic quantity of beads. #### 1.3.2. Limiting surfaces for Young tableaux and discontinuity phenomena It is natural to try to translate the limiting result for the bead process to a limit result for the tableau itself, seen as a discrete surface. Namely, we set \[D_{\lambda^{0}}:=\left\{(x,y)\in\mathbb{R}^{2}:|x|<y<\omega_{\eta\lambda^{0}}( x)\right\}, \tag{10}\] which is the shape (seen as an open domain of the plane) of the diagram \(\lambda^{0}\) in Russian notation, normalized to have area 2. For \((x,y)\) in \(D_{\lambda^{0}}\), letting \(T_{N}\) be a uniform Poissonized tableau of shape \(\lambda_{N}\), we consider \[\widetilde{T}_{N}(x,y):=T_{N}(\lfloor x\sqrt{N}\rfloor,\lfloor y\sqrt{N} \rfloor+\delta), \tag{11}\] where \(\delta\in\{0,1\}\) is chosen so that the arguments of \(T_{N}\) have distinct parities. We want to find a scaling limit for the function \(\widetilde{T}_{N}(x,y)\). (Again, it is simple to see that considering uniform Poissonized or (classical) uniform Young tableaux of shape \(\lambda\) is irrelevant here.) 
To this end, we set for all \((x,y)\in D_{\lambda^{0}}\), \[T_{+}^{\infty}=T_{+}^{\infty}(x,y) :=\sup\left\{t\in[0,1]\,:\,H^{\infty}(x,t)\leq\tfrac{1}{2}(y-|x|) \right\}, \tag{12}\] \[T_{-}^{\infty}=T_{-}^{\infty}(x,y) :=\inf\left\{t\in[0,1]\,:\,H^{\infty}(x,t)\geq\tfrac{1}{2}(y-|x|) \right\}.\] **Proposition 5**.: _For all \(\varepsilon>0\), the following limit holds uniformly for all \((x,y)\in D_{\lambda^{0}}\):_ \[\lim_{N\to+\infty}\mathbb{P}\big{(}\widetilde{T}_{N}(x,y)<T_{-}^{\infty}- \varepsilon\big{)}=\lim_{N\to+\infty}\mathbb{P}\big{(}\widetilde{T}_{N}(x,y) >T_{+}^{\infty}+\varepsilon\big{)}=0.\] Proof.: Recalling the relation (6) and the rescaling (11), we note that \[\widetilde{T}_{N}(x,y)<T_{-}^{\infty}-\varepsilon\quad\text{if and only if}\quad H_{\lambda_{N}}\big{(}\lfloor x\sqrt{N}\rfloor,T_{-}^{\infty}- \varepsilon\big{)}>\frac{1}{2}(\lfloor y\sqrt{N}\rfloor+\delta-|\lfloor x \sqrt{N}\rfloor|).\] We claim that the latter event happens with probability tending to 0. Indeed, Theorem 2 guarantees the following convergence in probability uniformly for all \((x,y)\in D_{\lambda^{0}}\): \[\lim_{N\to+\infty}\frac{1}{\sqrt{N}}H_{\lambda_{N}}\big{(}\lfloor x\sqrt{N} \rfloor,T_{-}^{\infty}-\varepsilon\big{)}=H^{\infty}(x,T_{-}^{\infty}- \varepsilon)<\tfrac{1}{2}(y-|x|),\] where the last inequality follows by definition of \(T_{-}^{\infty}\). The statement for \(T_{+}^{\infty}\) is proved similarly. We let \(D_{\lambda^{0}}^{\text{reg}}\) be the set of coordinates \((x,y)\in D_{\lambda^{0}}\) such that \(T_{-}^{\infty}(x,y)=T_{+}^{\infty}(x,y)\). For such points, we simply write \(T^{\infty}(x,y)\) for this common value. By definition, on \(D_{\lambda^{0}}^{\text{reg}}\), one has \[H^{\infty}(x,T^{\infty}(x,y))=\tfrac{1}{2}(y-|x|). \tag{13}\] Moreover, Proposition 5 implies the following convergence in probability for \((x,y)\in D^{\mathrm{reg}}_{\lambda^{0}}\): \[\lim_{N\to+\infty}\widetilde{T}_{N}(x,y)=T^{\infty}(x,y), \tag{14}\] Note that this convergence holds uniformly on compact subsets of \(D^{\mathrm{reg}}_{\lambda^{0}}\). **Remark 6**.: _For any \(x\) fixed, since \(t\mapsto H^{\infty}(x,t)\) is non-decreasing, the points \((x,y)\) are in \(D^{\mathrm{reg}}_{\lambda^{0}}\) for all but countably many \(y\). As a consequence, \(D_{\lambda^{0}}\setminus D^{\mathrm{reg}}_{\lambda^{0}}\) has zero (two-dimensional) Lebesgue measure. Moreover, if \((x,y)\in D_{\lambda^{0}}\setminus D^{\mathrm{reg}}_{\lambda^{0}}\) and \(x-\varepsilon,x+\varepsilon\in D^{\mathrm{reg}}_{\lambda^{0}}\), then \(T^{\infty}(x-\varepsilon,y)\) and \(T^{\infty}(x+\varepsilon,y)\) converge respectively to \(T^{\infty}_{-}(x,y)\) and \(T^{\infty}_{+}(x,y)\) as \(\varepsilon\) tends to \(0\) with \(\varepsilon>0\). Therefore the limiting surface \(T_{\infty}\) is discontinuous at such points \((x,y)\) and we do not know whether \(\widetilde{T}_{N}(x,y)\) converges or not. This discontinuity phenomenon was overlooked in [11, Theorem 9.1], where it is claimed that the convergence holds for all \((x,y)\) in \(D_{\lambda^{0}}\)._ A natural question is whether such discontinuity points \((x,y)\) exist at all in \(D_{\lambda^{0}}\). The following result shows that such points indeed exist unless \(\lambda^{0}\) is a rectangle, or unless its interlacing coordinates satisfy some specific equations. **Theorem 7**.: _For a Young diagram \(\lambda^{0}\), the following assertions are equivalent:_ 1. _The limiting surface_ \(T^{\infty}\) _is continuous in the whole domain_ \(D_{\lambda^{0}}\)_;_ 2. 
_The interlacing coordinates defined in (_1_) satisfy the system of equations:_ \[\sum_{\begin{subarray}{c}i=0\\ i\neq i_{0}\end{subarray}}^{m}\frac{1}{a_{i_{0}}-a_{i}}=\sum_{i=1}^{m}\frac{1}{a _{i_{0}}-b_{i}},\qquad\text{for all $i_{0}=1,\ldots,m-1$}.\] (15) Note that when \(m=1\), i.e. when \(\lambda^{0}\) has a rectangular shape, there are no equations in the second item. Indeed, the limiting surface \(T^{\infty}\) is always continuous in this case. ### Applications for simple limit shapes In this section, we illustrate our results in the cases \(m=1\) (rectangular shapes) and \(m=2\) (\(L\)-shapes). Before starting, let us note that our model and all results are invariant when multiplying all interlacing coordinates of \(\lambda^{0}\) by the same positive integers. We will therefore allow ourselves to work with diagrams \(\lambda^{0}\) with rational (non-necessarily integer) interlacing coordinates. The statements of this section are proved in Section 7. #### 1.4.1. An explicit formula for the rectangular case In this section, we consider a rectangular diagram \(\lambda_{0}\). Without loss of generality, we assume \(a_{0}=-1\) and write \(r=a_{1}\). Necessarily, \(b_{1}=r-1\). Solving explicitly the second degree critical equation (8) yields: **Proposition 8**.: _The limiting height function corresponding to a \(1\times r\) rectangular shape \(\lambda^{0}\) is given by_ \[H^{\infty}_{r}(x,t)=\frac{1}{\pi}\int_{0}^{t}\frac{\sqrt{s(4r-(1+r)^{2}s)+2(r- 1)\sqrt{rsx-rx^{2}}}}{2\sqrt{r}(1-s)s}\,\mathrm{d}s \tag{16}\] _with the convention that \(\sqrt{x}=0\) if \(x\leq 0\). Moreover, the limiting surface \(T^{\infty}_{r}\) is continuous on \(D_{\lambda^{0}}\) and is therefore implicitly determined by the equation_ \[H^{\infty}_{r}(x,T^{\infty}_{r}(x,y))=\tfrac{1}{2}(y-|x|). \tag{17}\] **Remark 9**.: _In the case \(r=1\) (square Young tableaux), we get_ \[H^{\infty}_{1}(x,t)=\frac{1}{\pi}\int_{0}^{t}\frac{\sqrt{4s-4s^{2}-x^{2}}}{2s -2s^{2}}\,\mathrm{d}s\,.\] _The graph of the function \(\frac{\sqrt{4s-4s^{2}-x^{2}}}{2s-2s^{2}}\) is plotted on the left-hand side of Fig. 3, while the corresponding limiting surface \(T^{\infty}_{1}\) is on the right. The above integral can be explicitly computed, recovering the formula found by Pittel and Romik from [14]. Pittel and Romik also give formulas for the general rectangular case, which should coincide with (16), though we could not verify it._ Using precisely the same proof of Proposition 8, one can also obtain explicit formulas for \(L\)-shaped diagrams; since the latter expressions are pretty involved, we decided not to display them. #### 1.4.2. Two concrete examples of \(L\)-shape diagrams We now consider two specific diagrams \(\lambda^{0}\) and \(\widetilde{\lambda}^{0}\) which are both \(L\)-shaped (i.e. \(m=2\)). Because of the shape of the corresponding liquid regions (see pictures in Figs. 4 and 5), the first one is called the _heart example_ and the second one the _pipe example_. In the heart example (c.f. Fig. 4), the Young diagram \(\lambda^{0}\) has interlacing coordinates \[a_{0}=-5\;<\;b_{1}=-4\;<\;a_{1}=-1\;<\;b_{2}=3\;<\;a_{2}=5. \tag{18}\] In this case we have \(|\lambda^{0}|=13\), so that \(\eta=1/\sqrt{|\lambda^{0}|}=1/\sqrt{13}\) and \([\eta\,a_{0},\eta\,a_{m}]=[-5/\sqrt{13},5/\sqrt{13}]\approx[-1.39,1.39]\). In the pipe example (c.f. Fig. 
5), the Young diagram \(\widetilde{\lambda}^{0}\) has interlacing coordinates \[\widetilde{a}_{0}=-200\;<\;\widetilde{b}_{1}=-197\;<\;\widetilde{a}_{1}=-90\;<\;\widetilde{b}_{2}=10\;<\;\widetilde{a}_{2}=103. \tag{19}\] In this case, we have \(|\widetilde{\lambda}^{0}|=9900\), so that \(\widetilde{\eta}=\frac{1}{30\sqrt{11}}\) and \([\widetilde{\eta}\,\widetilde{a}_{0},\widetilde{\eta}\,\widetilde{a}_{m}]=[-\frac{200}{30\sqrt{11}},\frac{103}{30\sqrt{11}}]\approx[-2.01,1.04]\). Various pictures of these two examples, discussed in the sequel, are presented in Figs. 4 and 5. For both examples, we have computed (using the parametrization from Proposition 27 below) the boundary of the liquid region defined in Definition 3. Independently, we have also generated uniform random tableaux of shape \(\lambda_{N}\) for large \(N\) (using the Greene-Nijenhuis-Wilf hook walk algorithm [15]). The bead processes corresponding to these uniform random tableaux are also plotted, and we see that, in both cases, the position of the beads coincides with the liquid region, as predicted by Theorem 4. Finally, we plotted the height functions associated with the bead processes. An essential difference between the two examples is that in the heart example the interlacing coordinates satisfy Condition (15), while this is not the case in the pipe example. From Theorem 7, we therefore expect to see a continuous limiting surface in the heart example and not in the pipe example. This is indeed the case, as we now explain. In the heart example, the intersection of the liquid region with any vertical line is connected; in other terms, for every \(x\in[\eta\,a_{0},\eta\,a_{m}]\), the function \(t\mapsto H^{\infty}(x,t)\) is first constant equal to \(0\), then strictly increasing, and then constant equal to its maximal value. Therefore, with the notation of (12), we have \(T_{-}^{\infty}(x,y)=T_{+}^{\infty}(x,y)\) for all \((x,y)\) in \(D_{\lambda^{0}}\). Figure 3. **Left:** The graphs of the function \(\frac{\sqrt{4s-4s^{2}-x^{2}}}{2s-2s^{2}}\) from Remark 9. **Right:** The corresponding limiting surface \(T_{1}^{\infty}\) for square diagrams. Note that we are using two different axes’ orientations to improve the visualization’s quality. Figure 4. Figures for the heart example. **Top-left:** The Young diagram \(\lambda^{0}\) considered in the heart example, see (18). **Top-right:** The frozen boundary curve of the corresponding liquid region. **Bottom (from left to right):** a uniform random tableau of shape \(\lambda_{N}\) with \(N=130000\) boxes (\(n=100\)) displayed as a discrete surface in a 3D space (brown is used for small values of the surface and blue for large ones); the corresponding bead process \(M_{\lambda_{N}}\); the corresponding height function \(H_{\lambda_{N}}\). Figure 5. Figures for the pipe example. **Top-left:** The Young diagram \(\widetilde{\lambda}^{0}\) considered in the pipe example, see (19). **Top-right:** The frozen boundary curve of the corresponding liquid region. **Bottom (from left to right):** a uniform random tableau of shape \(\lambda_{N}\) with \(N=59400\) boxes (\(n=6\)) displayed as a discrete surface in a 3D space (brown is used for small values of the surface and blue for large ones); the corresponding bead process \(M_{\lambda_{N}}\); the corresponding height function \(H_{\lambda_{N}}\). The red circle in the picture on the left-hand side highlights the location where the limiting surface is discontinuous. As a consequence, the limiting surface \(T^{\infty}\) is defined and 
continuous on the whole set \(D_{\lambda^{0}}\). Looking at the random tableau drawn as a discrete surface, it is indeed plausible that it converges to a continuous surface. In the pipe example, however, we can find some \(x_{0}\) (on the right of \(\eta\,a_{1}=-\frac{3}{\sqrt{11}}\approx-0.9\)) so that the liquid region intersects the line \(x_{0}\times[0,1]\) in two disjoint intervals. The function \(t\mapsto H^{\infty}(x,t)\) is then constant between these two intervals, and the limiting surface \(T^{\infty}\) is discontinuous. This discontinuity can be observed on the simulation of a uniform random tableau (see the zoom inside the red circle on the left-hand side). #### 1.4.3. Characterizing continuity for \(L\)-shapes In numerical simulations, we need to consider extremely unbalanced diagrams \(\lambda^{0}\) (as the one in the pipe example above) to observe a discontinuity in the limit shape. We will see, however, in this section that such discontinuity occurs for generic \(L\)-shape diagrams \(\lambda^{0}\). To this end, let us parametrize \(L\)-shape Young diagrams as follows. For \((p,q,r)\in\diamond:=\{(p,q,r)\in\mathbb{Q}^{3}\,|\,r>0,p\in(-1,r),|p|<q\leq\min \{p+2,2r-p\}\}\), we let \(\lambda^{0}_{p,q,r}\) have interlacing coordinates \[a_{0}=-1\quad<\quad b_{1}=\frac{p+q-2}{2}\quad<\quad a_{1}=p\quad<\quad b_{2}= \frac{p-q+2r}{2}\quad<\quad a_{2}=r. \tag{20}\] Then the inner corner of \(\lambda^{0}_{p,q,r}\) has coordinates \((p,q)\) and \(\eta=\frac{2}{\sqrt{p^{2}-2p(-1+r)+q(2-q+2r)}}\); see Fig. 6 for an illustration. Given the diagram \(\lambda^{0}_{p,q,r}\), we denote by \(D_{p,q,r}=D_{\lambda^{0}_{p,q,r}}\) and \(T^{\infty}_{p,q,r}\) the corresponding domain and limiting surface, as defined in Section 1.3.2. The following results characterizes the triplets \((p,q,r)\), for which \(T^{\infty}_{p,q,r}\) is defined and continuous on the whole domain \(D_{p,q,r}\). **Proposition 10**.: _The following results hold:_ * _If_ \(r=1\) _then the surface_ \(T^{\infty}_{p,q,1}\) _is defined and continuous on_ \(D_{p,q,1}\) _if and only if_ \[p=0\qquad\text{or}\qquad q=2-\sqrt{2-p^{2}}=:Q(p).\] * _If_ \(r\neq 1\)_, then the surface_ \(T^{\infty}_{p,q,r}\) _is defined and continuous on_ \(D_{p,q,r}\) _if and only if_ \[p\leq 0\text{ and }q=Q^{+}_{r}(p)\qquad\text{or}\qquad p\geq r-1\text{ and }q=Q^{-}_{r}(p),\] _where_ \[Q_{r}^{\pm}(p)=1+r\pm\frac{\sqrt{(1+p-r)(1+2p-r)(pr+(1+r)^{2}-p-2p^{2})}}{1+2p-r}.\] **Remark 11**.: _In the case \(r=1\), we note that there is a dense set (dense in the continuous curve \(\{\,(x,y)\in\mathbb{R}^{2}\mid y=Q(x),x\in(0,1)\,\}\)) of rational solutions \((p,q)\) to the equation \(q=Q(p)\). They are parametrized by_ \[\left\{\,(p,q)\in\mathbb{Q}^{2}\mid q=Q(p)\,\right\}=\left\{\,\left(\frac{s(2r +s)-r^{2}}{s^{2}+r^{2}},1+\frac{2s(s-r)}{s^{2}+r^{2}}\right)\,\right|\,(s,r)\in \mathbb{Q}^{2},sr\neq 0\,\right\}.\] _These results can be obtained in the same way as one obtains the parametrization for the rational points of the unit circle, noting that \(x^{2}+y^{2}=1\) if and only if \((x-y)^{2}+(x+y)^{2}=2\). A similar remark holds also in the case \(r>1\)._ ### Local limit results We now present our local limit results. We first introduce the random infinite bead process mentioned before. #### 1.5.1. The random infinite bead process The second author [1] constructed a two-parameter family of ergodic Gibbs measures on the set of infinite bead configurations (recall the terminology introduced in Section 1.2). 
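Coming back for a moment to the continuity criterion of Theorem 7, the two \(L\)-shaped examples of Section 1.4.2 can be checked directly. The short Python sketch below (ours; the numerical tolerance is arbitrary) evaluates both sides of (15) for the heart coordinates (18) and the pipe coordinates (19), confirming that the former satisfy the criterion while the latter do not.

```python
def satisfies_condition_15(a, b, tol=1e-12):
    """Check the system (15) for interlacing coordinates a_0 < b_1 < ... < a_m."""
    m = len(b)
    for i0 in range(1, m):  # i0 = 1, ..., m - 1
        lhs = sum(1.0 / (a[i0] - a[i]) for i in range(m + 1) if i != i0)
        rhs = sum(1.0 / (a[i0] - b[i - 1]) for i in range(1, m + 1))
        if abs(lhs - rhs) > tol:
            return False
    return True

print(satisfies_condition_15([-5, -1, 5], [-4, 3]))            # heart example (18): True
print(satisfies_condition_15([-200, -90, 103], [-197, 10]))    # pipe example (19): False
```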
These measures are constructed as limits of dimer model measures on certain bipartite graphs when some weights degenerate. These Gibbs measures are shown to be determinantal point processes; we refer the reader to [13] for background on determinantal point processes. In particular, the following result is a slight reformulation of [1, Theorem 2], see Remark 14 below. **Theorem 12** ([1]).: _Let \((\alpha,\beta)\) be in \(\mathbb{R}_{+}\times(-1,1)\). There exists a determinantal point process \(M_{\alpha,\beta}\) on \(\mathbb{Z}\times\mathbb{R}\) with correlation kernel_ \[J_{\alpha,\beta}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}=\begin{cases}\frac{\alpha}{2\pi}\int_{[-1,1]}\mathrm{e}^{\mathrm{i}(t_{1}-t_{2})\alpha u}\left(\beta+\mathrm{i}u\sqrt{1-\beta^{2}}\right)^{x_{2}-x_{1}}\mathrm{d}u&\text{ if }\quad x_{2}\geq x_{1};\\ -\frac{\alpha}{2\pi}\int_{\mathbb{R}\setminus[-1,1]}\mathrm{e}^{\mathrm{i}(t_{1}-t_{2})\alpha u}\left(\beta+\mathrm{i}u\sqrt{1-\beta^{2}}\right)^{x_{2}-x_{1}}\mathrm{d}u&\text{ if }\quad x_{2}<x_{1}.\end{cases} \tag{21}\] Figure 7. **Left:** Here we fixed the parameter \(r=1\). The yellow region is the region \(\{(p,q)\in\mathbb{R}^{2}\,|\,p\in(-1,1),|p|<q\leq 2-|p|\}\). Proposition 10 states that the limiting surface \(T_{p,q,1}^{\infty}\) is continuous if and only if \((p,q)\) lies on one of the two red curves. **Right (case \(r>1\)):** Here \(r=3/2\). The yellow region is the region \(\{(p,q)\in\mathbb{R}^{2}\,|\,p\in(-1,3/2),|p|<q\leq\min\{p+2,2r-p\}\}\). Proposition 10 states that the limiting surface \(T_{p,q,r}^{\infty}\) is continuous if and only if \((p,q)\) lies on one of the two red curves. **Definition 13** (Random infinite bead process).: _The point process \(M_{\alpha,\beta}\) is called the random infinite bead process of intensity \(\alpha\) and skewness \(\beta\)._ These bead processes have nice properties. First, they are translation invariant in both directions and induce the uniform measure on bead configurations on every finite domain (see [1] for a precise statement). Moreover, they appear as limits of dimer tilings [1, 10] and of the eigenvalues of imbricated GUE matrices (called GUE corners process), see [1]. We also note that: * the expected number of beads in a portion of a thread of length \(1\) is \(\frac{\alpha}{\pi}\); * the expected ratio between the vertical distance of a bead \(b\) from its neighbor on the left and below3 it, and the distance between \(b\) and the successive bead on the same thread, is \(\frac{\arccos(\beta)}{\pi}\); see [1, Eq. (29)]. Footnote 3: The interpretation of the parameter \(\beta\) in the text of [1], where _above_ is used instead of _below_, is wrong because of a missing minus sign inside the arccos in Eq. (29) there. **Remark 14**.: _We note that Theorem 12 is a simple reformulation of [1, Theorem 2]. Indeed, for \(\alpha=1\), the above kernel \(J_{1,\beta}\) corresponds to \(J_{\gamma}\) in [1] for \(\gamma=\beta\).4 For general \(\alpha\), the bead process \(M_{\alpha,\beta}\) is simply obtained by applying a dilatation with scaling factor \(\frac{1}{\alpha}\) to \(M_{1,\beta}\). Then the kernel of \(M_{\alpha,\beta}\) is given by \(J_{\alpha,\beta}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}=\alpha J_{1,\beta}\big{(}(x_{1},\alpha t_{1}),(x_{2},\alpha t_{2})\big{)}\), which is consistent with the statement above._ Footnote 4: In the present paper, we use \(\beta\) instead of \(\gamma\) to avoid conflicts of notation with integration paths. #### 1.5.2. 
Local limit result for the bead process Given a Young diagram \(\lambda^{0}\), we now fix \((x_{0},t_{0})\) in \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). In the rest of the paper, we assume that \(x_{0}\sqrt{|\lambda^{0}|}\) is an integer5 so that \(x_{0}\sqrt{N}\) is always an integer. We look at the random bead process \(M_{\lambda_{N}}\) in a window of size \(O(1)\times O(1/\sqrt{N})\) around \((x_{0}\sqrt{N},t_{0})\). To this end, we define Footnote 5: This is not as restrictive as it might seem. Indeed, since we are going to look at the limit when \(n\to\infty\), as soon as \(x_{0}\sqrt{|\lambda^{0}|}\) is a rational number, it is possible to replace \(\lambda^{0}\) with a rescaled version of \(\lambda^{0}\) so that \(x_{0}\sqrt{|\lambda^{0}|}\) is an integer. \[\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}=\Big{\{}\;(x,t)\in\mathbb{Z}\times \mathbb{R}\;\Big{|}\;\big{(}x_{0}\sqrt{N}+x,t_{0}+\tfrac{t}{\sqrt{N}}\big{)} \in M_{\lambda_{N}}\;\Big{\}}\,. \tag{22}\] In the next result, we consider the standard topology for point processes (see Section 2.2 for further details). Recall also the definition of the liquid region from Definition 3, and the subsequent definitions of \(\alpha(x_{0},t_{0})\) and \(\beta(x_{0},t_{0})\). **Theorem 15** (Local limit of the bead process associated with a Poissonized Young tableau).: _Fix a Young diagram \(\lambda^{0}\) and consider a sequence \(T_{N}:\lambda_{N}\to[0,1]\) of uniform random Poissonized Young tableaux of shape \(\lambda_{N}\). We choose \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and \(t_{0}\in[0,1]\), and we let \(n\) go to infinity (so \(N\) goes to infinity)._ * _If_ \((x_{0},t_{0})\) _is in the liquid region, then the random bead process_ \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) _in (_22_) converges in distribution to the random infinite bead process of intensity_ \(\alpha(x_{0},t_{0})\) _and skewness_ \(\beta(x_{0},t_{0})\)_._ * _If_ \((x_{0},t_{0})\) _is in the complement of the liquid region, then the random bead process_ \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) _converges in probability to the empty set._ Note that the second case contains the case where \((x_{0},t_{0})\) lies on the boundary of the liquid region (recall from Proposition 27 that the liquid region is an open set). See Section 1.8 for a discussion on this case. #### 1.5.3. Local limit result for the tableau Our next result describes the local limit of uniform random Poissonized Young tableaux. We need some preliminary definitions. A _marked standard Young tableau_ is a triplet \((\lambda,T,(x,y))\) where \((\lambda,T)\) is a standard Young tableau of shape \(\lambda\) and \((x,y)\) are the coordinates of a distinguished box in \(\lambda\). A _marked Poissonized Young tableau_ is defined analogously. We introduce a one-parameter family of random infinite standard Young tableaux directly constructed from the random infinite bead process. These will be our candidate local limits for random uniform Poissonized Young tableaux. Let \(M_{\beta}\) be the infinite bead process of intensity \(1\) and skewness \(\beta\in(-1,1)\). There is a natural bijection between the set of beads in \(M_{\beta}\) and the set \(\{\,(x,y)\in\mathbb{Z}^{2}\mid x+y\text{ is odd}\,\}\): we label by \((0,1)\) the first bead in the zero-thread with positive height, and then we label all the other beads as shown in the left-hand side of Fig. 8. Given a bead \((x,y)\) in \(M_{\beta}\), we denote by \(H_{\beta}(x,y)\) the height of the bead \((x,y)\). 
The collection of heights \(\{H_{\beta}(x,y)\}\) with \((x,y)\in\{\,(x,y)\in\mathbb{Z}^{2}\mid x+y\text{ is odd and }y>\left|x\right|\,\}\) can be ranked increasingly, starting from the smallest one to which we assign rank \(1\) (see the left-hand side of the bead diagram in Fig. 8; we prove in Proposition 44 below that the heights \(\{H_{\beta}(x,y)\}\) are all distinct and have no accumulation points a.s.). Given a bead \((x,y)\), we denote its ranking by \(R_{\beta}(x,y)\). **Definition 16** (Random infinite standard Young tableau).: _The random infinite standard Young tableau of skewness \(\beta\in(-1,1)\) is the random infinite standard Young tableau \((\diamond_{\infty},R_{\beta})\), where \(\diamond_{\infty}\) is the infinite Young diagram formed by all the boxes at positions \((x,y)\in\mathbb{Z}^{2}\) such that \(x+y\) is odd and \(y>\left|x\right|\)._ See the right-hand side of Fig. 8 for an example of random infinite standard Young tableau \((\diamond_{\infty},R_{\beta})\). **Remark 17**.: _It is, of course, possible to do the same construction starting from a bead process of general intensity \(\alpha\) instead of \(1\). However, a bead process of intensity \(\alpha\) is obtained from one of intensity \(1\) by rescaling the vertical axis. Since the construction of the tableau only involves comparing the heights of various beads and not their actual heights, such a rescaling does not modify the law of the resulting tableau. Hence the skewness \(\beta\) is indeed the only relevant parameter._ In the following result, we endow the space of marked standard Young tableaux with the topology induced by the _local convergence_, formally introduced in Definition 43. Roughly speaking, Figure 8. **Left:** A section of a sampling of the infinite bead process \(M_{\beta}\). We only show the labels of the beads with a label \((x,y)\in\diamond_{\infty}\). These beads are highlighted in black. The ranking of some black beads is shown on the left-hand side of the diagram. **Right:** The corresponding random infinite standard Young tableau \((\diamond_{\infty},R_{\beta})\). On both sides of the picture, we colored some elements in correspondence to help the reader to compare the two pictures. this topology says that a (deterministic) sequence of marked Young tableaux \((\lambda_{n},T_{n},(x_{n},y_{n}))\) converges to an infinite Young tableau \((\diamond_{\infty},R)\), if the values of the boxes in \((\lambda_{n},T_{n})\) contained in any finite neighborhood above the marked box at \((x_{n},y_{n})\) are eventually _in the same relative order_ as the values of the boxes of \((\diamond_{\infty},R)\) contained in the same finite neighborhood above the box at \((0,0)\). **Corollary 18** (Local limit for the uniform Poissonized Young tableau; corollary of Theorem 15).: _Fix a Young diagram \(\lambda^{0}\) and \((x_{0},t_{0})\) in the corresponding liquid region \(L\). Consider a sequence \(T_{N}\) of uniform random Poissonized Young tableaux of shape \(\lambda_{N}\). 
We denote \(\Box_{N}\) the box corresponding to the first bead of \(M_{\lambda_{N},T_{N}}\) above \(t_{0}\) in the \(x_{0}\sqrt{N}\)-th thread._ _Then, as \(N\to\infty\), the sequence of marked standard Young tableaux \((\lambda_{N},T_{N},\Box_{N})_{N}\) locally converges in distribution to the random infinite standard Young tableau \((\diamond_{\infty},R_{\beta})\) of skewness \(\beta=\beta(x_{0},t_{0})\)._ ### The complex Burger's equation and a PDE for the limiting height function A standard fact when describing limit shapes and local limits via determinantal point processes is that the solution \(U_{c}(x,t)\) of the critical equation (8), sometimes called _complex slope_, satisfy some partial differential equation (PDE). This PDE is normally related to the so-called _complex Burger's equation_, see [11, 10]. In our model, we get the following PDE. **Proposition 19**.: _In the sequel, the indices \(\_{x}\) and \(\_{t}\) denote partial derivatives with respect to the variables \(x\) and \(t\). The solution \(U_{c}=U_{c}(x,t)\) of the critical equation (8) satisfies the following PDE in the liquid region:_ \[(U_{c})_{t}+U_{c}\,\frac{(U_{c})_{x}+1}{1-t}=0. \tag{23}\] The proposition is proved in Section 3.3.1. This PDE yields a PDE for the complex-valued function \(\mathcal{H}^{\infty}(x,t):=\frac{1}{\pi}\int_{0}^{t}\frac{U_{c}(x,s)}{1-s}\, \mathrm{d}s\). Namely, \[\mathcal{H}^{\infty}_{tt}+\pi\,\mathcal{H}^{\infty}_{t}\,\mathcal{H}^{\infty} _{xt}=0. \tag{24}\] We recall from Theorem 4 that the limiting height function \(H^{\infty}(x,t)\) is simply obtained as \(H^{\infty}(x,t)=\mathcal{H}^{\infty}(x,t)\). Taking the imaginary part in Eq. (24), we get \[H^{\infty}_{tt}+\pi\,\widetilde{H}^{\infty}_{t}H^{\infty}_{xt}+\pi H^{\infty} _{t}\widetilde{H}^{\infty}_{xt}=0, \tag{25}\] where \(\widetilde{H}^{\infty}(x,t)=\mathfrak{R}\mathcal{H}^{\infty}(x,t)\). To get a PDE involving only the height function \(H^{\infty}(x,t)\), we need an extra ingredient. At least on an informal level, this extra ingredient is provided by a combination of the local convergence to the infinite bead process in Theorem 15 and the scaling limit result in Theorem 4, as we explain now. Recall from Section 1.5.1 that for the infinite bead process \(M_{\alpha,\beta}\), the expected ratio between the vertical distance of a bead \(b\) from its neighbor on the left and below it, and the distance between \(b\) and the successive bead on the same thread, is \(\frac{\arccos(\beta)}{\pi}\). But successive beads on the same thread are at expected distance \(\pi/\alpha\) so that the expected vertical distance between a bead \(b\) and its neighbor on the left and below it is \(\arccos(\beta)/\alpha\). Using this fact for \(M_{\alpha,\beta}\), and approximating the bead process \(\widetilde{M}^{(x,t)}_{\lambda_{N}}\) with the infinite bead process \(M_{\alpha,\beta}\) with parameters \(\alpha=\alpha(x,t)\) and \(\beta=\beta(x,t)\) (Theorem 15), we get that for large \(N\) the height function \(H_{\lambda_{N}}\) around \((x,t)\) with \(x>0\) satisfies \[H_{\lambda_{N}}(x\sqrt{N},t)\approx H_{\lambda_{N}}\left(x\sqrt{N}-1,t-\frac{ \arccos(\beta)}{\alpha\sqrt{N}}\right), \tag{26}\] where we used the fact that when \(x>0\), the first bead on the \((x\sqrt{N})\)-thread is above the first bead on the \((x\sqrt{N}-1)\)-thread. 
Iterating this argument \(\varepsilon\sqrt{N}\) times, and multiplying both sides by \(1/\sqrt{N}\), we get \[\frac{1}{\sqrt{N}}\,H_{\lambda_{N}}\left(x\sqrt{N},t\right)\approx\frac{1}{\sqrt{ N}}H_{\lambda_{N}}\left((x-\varepsilon)\sqrt{N},t-\varepsilon\,\frac{\arccos( \beta)}{\alpha}\right).\] Finally, using Theorem 2 and sending \(N\) to infinity, we deduce that \[H^{\infty}(x,t)=H^{\infty}\left(x-\varepsilon,t-\varepsilon\,\frac{\arccos( \beta)}{\alpha}\right).\] This implies \(H_{x}^{\infty}+\frac{\arccos(\beta)}{\alpha}H_{t}^{\infty}=0\) for \(x>0\). When \(x\leq 0\), we need to use the fact that the first bead on the \((x\sqrt{N})\)-thread is now _below_ the first bead on the \((x\sqrt{N}-1)\)-thread. As a consequence, (26) is replaced by \[H_{\lambda_{N}}(x\sqrt{N},t)\approx H_{\lambda_{N}}\left(x\sqrt{N}-1,t+\frac{ \pi-\arccos(\beta)}{\alpha\sqrt{N}}\right),\] where we also used the fact that the expected vertical distance between a bead \(b\) and its neighbor on the left and _above_ it is \((\pi-\arccos(\beta))/\alpha.\) Hence, for \(x\leq 0\) one gets \(H_{x}^{\infty}-\frac{\pi-\arccos(\beta)}{\alpha}H_{t}^{\infty}=0\). The equations obtained for the cases \(x>0\) and \(x\leq 0\) can be grouped in a single formula as follows: \[H_{x}^{\infty}+\left(\frac{\arccos(\beta)}{\alpha}-\frac{\pi\delta_{x\leq 0 }}{\alpha}\right)H_{t}^{\infty}=0.\] Since \(H_{t}^{\infty}=\alpha/\pi\) by Theorem 4, the previous equation simplifies to \(\beta=\cos(\pi\delta_{x\leq 0}-\pi H_{x}^{\infty})\). Recalling from (9) that \(\beta=\frac{\mathfrak{R}U_{c}}{|U_{c}|}\), we deduce that \(\operatorname{Arg}(U_{c})=\pi\delta_{x\leq 0}-\pi\,H_{x}^{\infty}\). Recalling also from (9) that \(\alpha=\frac{\mathfrak{R}U_{c}}{1-t}\), we conclude that \[\widetilde{H}_{t}^{\infty}=\frac{\mathfrak{R}U_{c}}{\pi(1-t)}=\frac{\alpha}{ \pi\tan(\operatorname{Arg}(U_{c}))}=\frac{H_{t}^{\infty}}{\tan(\pi\delta_{x \leq 0}-\pi\,H_{x}^{\infty})}=-\,\frac{H_{t}^{\infty}}{\tan(\pi\,H_{x}^{ \infty})},\] and \[\widetilde{H}_{xt}^{\infty}=-\frac{H_{xt}^{\infty}}{\tan(\pi H_{x}^{\infty})} +\pi H_{t}^{\infty}H_{xx}^{\infty}\left(1+\frac{1}{\tan^{2}(\pi H_{x}^{\infty })}\right).\] Combining the latter two equations with (25), we get the desired PDE for the height function: \[H_{tt}^{\infty}-2\pi\,\frac{H_{t}^{\infty}H_{xt}^{\infty}}{\tan(\pi\,H_{x}^{ \infty})}+\pi^{2}(H_{t}^{\infty})^{2}H_{xx}^{\infty}\left(1+\frac{1}{\tan^{2}( \pi\,H_{x}^{\infty})}\right)=0. \tag{27}\] Such a PDE for the limiting surface was obtained by Sun as a consequence of its variational principle [11, Eq. (28)]. Note that our equation is slightly different from [11, Eq. (28)] since we do not use the same convention for the height function \(H^{\infty}\). We do not know if this heuristic argument can be transformed into an actual proof of (27); we did not pursue in this direction because of the earlier appearance of the equation in the work of Sun. ### Methods As mentioned in the background section, our starting point is the determinantal structure of the bead process associated to a uniform Poissonized tableau of given shape [10]. The local limit theorem for bead processes around \((x_{0}\sqrt{N},t_{0})\) (Theorem 15) is then obtained by an asymptotic analysis of the kernel in the appropriate regime. Since the determinantal kernel is expressed in terms of a double contour integral, such asymptotic analysis is performed via saddle point analysis, following the method used, e.g., in the papers [1, 10, 10]. 
The saddle points are precisely the two non-real roots of the critical equation (8) if they exist (liquid region), or some of its real roots when all roots are real (frozen region). In the case of the frozen region, the construction of the contours depend on the relative position of \((x_{0},t_{0})\) with respect to the liquid region. The case where we have an alternation of liquid and frozen phases along a line \(\{x_{0}\}\times[0,1]\) (leading to a discontinuity of the surface) is particularly interesting since one has to split one of the integration contours into two new distinguished integration contours (Section 5.3). Such alternation of liquid and frozen phases has appeared in certain models of random tilings, such as plane partitions with 2-periodic weights [12, Figure 3] or pyramid partitions [11, Figure 1.1], and in a different context (asymptotic representation theory of the unitary group) in [10, Theorem 4.6]. From the local limit theorem, we know that the parameter \(\alpha(x_{0},t_{0})\) controls the local density of beads around \((x_{0},t_{0})\). It is, therefore, not surprising that the limiting height function is obtained as an integral of this local density. Formally, we use a special case of the convergence of the determinantal kernel established for the local limit theorem to prove our formula for the limiting height function. The relation between local limit results via determinantal point processes, and limit shape results were remarked in other contexts, see e.g. [1, Remark 1.7] or [10, Section 3.1.10], but not formally used as we do in the present paper. ### Future work Here are a few directions that could be interesting to investigate. 1. We have limited ourselves here to a sequence of diagrams \(\lambda_{N}\), obtained as the dilatation of a base diagram \(\lambda^{0}\). It would be also interesting to consider more general sequences of diagrams with a continuous limit shape \(\omega(x)\) in the sense of [1, 10]. In this case, the critical equation is not polynomial anymore, but it should rewrite as \[1-t=U\,G_{\mu_{\omega}}(U+x),\] where \(G_{\mu_{\omega}}(z)=\int_{\mathbb{R}}\frac{1}{z-s}\,\mu(\mathrm{d}s)\) is the Cauchy transform of the transition measure of the limit shape (see [11]). 2. We believe that the discontinuities observed in this paper are created by the non-smoothness of the limit shape \(\omega(x)\). We conjecture that if \(\omega\) is smooth (probably \(\mathcal{C}^{1}\) is enough), then the limiting surface is continuous. An interesting question is also to find a general criterium for the limiting surface to be continuous, extending Theorem 7. The latter suggests that there should be some identity to check at each singular point of the limit shape \(\omega\); but at the moment we do not know what such criterium should be. 3. In Corollary 18, we consider the local limit of a Young diagram around a box \(\square_{N}\), whose coordinates are random (they depend on the tableau \(T_{N}\)). Considering the limit around a fixed box in the Young diagram would be more satisfactory. This corresponds, however, to a random position in the bead process. One potential strategy to attack this problem would be to refine our analysis of the kernel in Section 3 replacing the fixed point \(((x_{1},t_{1}),(x_{2},t_{2}))\) with a point which might depend on \(N\). We also believe that one can extend the results in Corollary 18 to uniform Young tableaux (instead of uniform Poissonized Young tableaux). 4. 
On the boundary of the liquid region, Theorem 15 states that there are with high probability no beads in a window of size \(O(1)\times O(1/\sqrt{N})\). We expect, however, that with a different rescaling in \(t\), the bead process will converge either to the Airy process (for typical points of the boundary) or to the Cusp-Airy/Pearcey process (for cusps points of the boundary). For instance, this type of behavior for typical points of the boundary was proved for lozenge tilings in [13]; while the aforementioned behavior for cusps points of the boundary was for instance investigated in [10, 11, 12] in the setting of random tiling models. The appearance of the Airy process could be used to establish Tracy-Widom fluctuations of a generic entry in the first row of random Young tableaux. Such a result was proved with a different and specific method for uniform random tableaux of square shape by Marchal [14]. 5. We believe that the family of infinite tableaux \((\diamond_{\infty},R_{\beta})\) constructed in this paper deserves more attention. Unlike Plancherel infinite tableaux [11], the tableaux \((\diamond_{\infty},R_{\beta})\) do not come from a Markovian growth process. But, as local limit of finite random Young tableaux, they should have an interesting invariant re-rooting property. ### Outline of the paper The rest of the paper is organized as follows. In Section 2, we first recall the determinantal structure of the bead process associated to a uniform Poissonized tableau of given shape [10], and then we rewrite the corresponding kernel in a more suitable way for our analysis. Moreover, in Section 2.2, we recall some basic topological facts for determinantal point processes. Section 3.1 is devoted to the analysis of the kernel of the random bead process \(\widetilde{M}^{(x_{0},t_{0})}_{\lambda_{N}}\) in (22). The main goal of this section will be to identify the _critical points of the action_ (Section 3.3). The next two sections are devoted to the proof of the local limit of the bead process stated in Theorem 15, which is the building block for all the other results in the paper: in Section 4 we look at the asymptotics of the kernel inside the liquid region, while in Section 5 we look at the asymptotics of the kernel in the frozen region. Finally, in the last three sections of this paper, we deduce Theorems 4 and 7 (Section 6), we give the proofs of our applications discussed in Section 1.4 (Section 7) and we conclude with the proof of the local limit for Young tableaux stated in Corollary 18 (Section 8). ## 2. Preliminaries ### Gorin-Rahman's determinantal formula #### 2.1.1. Young diagrams and associated meromorphic functions Recall the interlacing coordinates introduced in Eq. (1). With a Young diagram \(\lambda\), we associate a function of a complex variable \(u\). Throughout the paper, \(\Gamma\) stands for the usual \(\Gamma\) function. We set \[F_{\lambda}(u):=\Gamma(u+1)\,\prod_{i=1}^{\infty}\frac{u+i}{u-\lambda_{i}+i}= \frac{\prod_{i=0}^{m}\Gamma(u-a_{i}+1)}{\prod_{i=1}^{m}\Gamma(u-b_{i}+1)}, \tag{28}\] where the equivalence between the two formulas is proved in6[11, Eq. (2.7)]. This meromorphic function \(F_{\lambda}\) has an infinite countable set of simple poles, namely \(\{\,\lambda_{i}-i\mid\,i\geq 1\,\}\). Footnote 6: The quantity \(\Phi(z;\lambda)\) in [11] is defined right before Proposition 1.2. It is related to \(F_{\lambda}\) by the equation \(F_{\lambda}(u)=\Gamma(u+1)\,\Phi(u+\frac{1}{2};\lambda)\). 
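As a quick numerical consistency check of the two expressions in (28), one can compare the product form, which truncates since \(\lambda_{i}=0\) for \(i>\ell(\lambda)\), with the Gamma-ratio form. The following Python sketch is ours; the partition, its interlacing coordinates and the evaluation point are chosen arbitrarily, away from the poles of \(F_{\lambda}\).

```python
import math

def F_product(u, lam):
    # First expression in (28); the infinite product truncates at i = len(lam).
    val = math.gamma(u + 1)
    for i, li in enumerate(lam, start=1):
        val *= (u + i) / (u - li + i)
    return val

def F_gamma(u, a, b):
    # Second expression in (28), via the interlacing coordinates of lam.
    num = math.prod(math.gamma(u - ai + 1) for ai in a)
    den = math.prod(math.gamma(u - bi + 1) for bi in b)
    return num / den

# lambda = (4, 4, 3, 2) (the diagram of Fig. 1, left) has interlacing coordinates
# a = (-4, -1, 1, 4) and b = (-2, 0, 2); u is taken larger than lambda_1.
u = 10.3
print(F_product(u, [4, 4, 3, 2]))
print(F_gamma(u, [-4, -1, 1, 4], [-2, 0, 2]))   # the two values should agree up to rounding
```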
Beware that notations for interlacing coordinates are different here and in [11]. From now on, \((a_{i})_{0\leq i\leq m}\) and \((b_{i})_{1\leq i\leq m}\) are the interlacing coordinates of our fixed base diagram \(\lambda^{0}\). The interlacing coordinates of the dilatation \(\lambda_{N}\) are simply \((n\,a_{0},n\,b_{1},\dots,n\,b_{m},n\,a_{m})\), where \(n\) is the dilatation factor (each box in \(\lambda^{0}\) corresponds to an \(n\times n\) square in \(\lambda_{N}\)). Recalling the relation \(N=n^{2}|\lambda^{0}|=n^{2}\eta^{-2}\), we have \[F_{\lambda_{N}}(u)=\frac{\prod_{i=0}^{m}\Gamma(u-\eta\,a_{i}\,\sqrt{N}+1)}{ \prod_{i=1}^{m}\Gamma(u-\eta\,b_{i}\,\sqrt{N}+1)}. \tag{29}\] #### 2.1.2. The Gorin-Rahman's correlation kernel From [10, Theorem 1.5], for any fixed diagram \(\lambda\), taking a uniform random Poissonized Young tableau \(T\) of shape \(\lambda\), the bead process \(M_{\lambda}=M_{\lambda,T}\) introduced in (3) is a determinantal point process on \(\mathbb{Z}\times[0,1]\) with correlation kernel \[K_{\lambda}((x_{1},t_{1}),(x_{2},t_{2})) =\mathbf{1}_{x_{1}>x_{2},\,t_{1}<t_{2}}\,\frac{(t_{1}-t_{2})^{x_{ 1}-x_{2}-1}}{(x_{1}-x_{2}-1)!} \tag{30}\] \[\quad+\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\widetilde{\gamma}_{z }}\oint_{\widetilde{\gamma}_{w}}\frac{F_{\lambda}(x_{2}+z)}{F_{\lambda}(x_{1}- 1-w)}\,\frac{\Gamma(-w)}{\Gamma(z+1)}\,\frac{(1-t_{2})^{z}\,(1-t_{1})^{w}}{z+w +x_{2}-x_{1}+1}\,\mathrm{d}w\,\mathrm{d}z\,,\] where the double contour integral runs over counterclockwise paths \(\widetilde{\gamma}_{w}\) and \(\widetilde{\gamma}_{z}\) such that * \(\widetilde{\gamma}_{w}\) is inside \(\widetilde{\gamma}_{z}\) ; * \(\widetilde{\gamma}_{w}\) and \(\widetilde{\gamma}_{z}\) contain all the integers in \([0,\ell(\lambda)-1+x_{1}]\) and in \([0,\lambda_{1}-1-x_{2}]\) respectively; * the ratio \(\frac{1}{z+w+x_{2}-x_{1}+1}\) remains uniformly bounded. Recalling the expression for \(F_{\lambda}\) in (28), we get that * the simple poles of \(\frac{F_{\lambda}(x_{2}+z)}{\Gamma(z+1)}\) are \(\{\,\lambda_{i}-i-x_{2}\mid i\geq 1\,\}\cap\mathbb{Z}_{\geq 0}\); * the simple poles of \(\frac{\Gamma(-w)}{F_{\lambda}(x_{1}-1-w)}\) are \(\mathbb{Z}_{\geq 0}\setminus\{\,x_{1}-1-\lambda_{i}+i\mid i\geq 1\,\}\). For later convenience, we perform the following change of variables: \[\begin{cases}z^{\prime}=x_{2}+z,\\ w^{\prime}=x_{1}-1-w.\end{cases}\] If \(\widetilde{\gamma}_{z}\) was sufficiently large, after this change of variable, the new \(z\)-contour still encloses the new \(w\)-contour. Therefore, we obtain the following expression for the correlation kernel: \[K_{\lambda}((x_{1},t_{1}),(x_{2},t_{2})) =\mathbf{1}_{x_{1}>x_{2},t_{1}<t_{2}}\,\frac{(t_{1}-t_{2})^{x_{1 }-x_{2}-1}}{(x_{1}-x_{2}-1)!} \tag{31}\] \[\quad-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{z}}\oint_{ \gamma_{w}}\frac{F_{\lambda}(z)}{F_{\lambda}(w)}\,\frac{\Gamma(w-x_{1}+1)}{ \Gamma(z-x_{2}+1)}\,\frac{(1-t_{2})^{z-x_{2}}\,(1-t_{1})^{-w+x_{1}-1}}{z-w}\, \mathrm{d}w\,\mathrm{d}z\,,\] where the double contour integral runs over counterclockwise paths \(\gamma_{w}\) and \(\gamma_{z}\) such that * \(\gamma_{w}\) is inside \(\gamma_{z}\); * \(\gamma_{w}\) and \(\gamma_{z}\) contain all the integers in \([-\ell(\lambda),x_{1}-1]\) and in \([x_{2},\lambda_{1}-1]\) respectively; * the ratio \(\frac{1}{z-w}\) remains uniformly bounded. 
We note that in the new integrand in (31), * the set of \(w\)-poles (which are all simple) is \(\mathbb{Z}_{\leq x_{1}-1}\setminus\{\,\lambda_{i}-i\mid i\geq 1\,\}\), which is included in \([-\ell(\lambda),x_{1}-1]\); * the set of \(z\)-poles (which are all simple) is \(\{\,\lambda_{i}-i\mid i\geq 1\,\}\cap\mathbb{Z}_{\geq x_{2}}\), which is included in \([x_{2},\lambda_{1}-1]\); * there is an additional simple pole for \(z=w\). Hence the second property of \(\gamma_{w}\) and \(\gamma_{z}\) given above ensures that the integration contours contain all poles. We refer to Fig. 9 for a visual representation of these poles and integration contours. #### 2.1.3. Removing the indicator function from the correlation kernel Let \(\gamma_{z}^{\prime}\) and \(\gamma_{w}^{\prime}\) be the two contours satisfying the same conditions as \(\gamma_{z}\) and \(\gamma_{w}\), except that \(\gamma_{z}^{\prime}\) lies inside \(\gamma_{w}^{\prime}\). A simple residue computation, done below, yields the following result. Figure 9. We consider here the Young diagram \(\lambda=(6,6,6,4,4,4,3,3)\) drawn with Russian convention. **Left:** We indicate the poles of \(F_{\lambda}\) in red (and in brown the corresponding points on the boundary of the diagram). **Right:** We show the poles of the integrand, the \(w\)-poles in green, and the \(z\)-poles in purple in the case when \(x_{1}=2\) and \(x_{2}=-1\) (recall that there is an additional simple pole for \(z=w\)). The integration contours in Eq. (31) are also represented. **Lemma 20**.: _For all \((x_{1},t_{1}),(x_{2},t_{2})\in\mathbb{Z}\times[0,1]\), it holds that_ \[\mathbf{1}_{x_{1}>x_{2}}\frac{(t_{1}-t_{2})^{x_{1}-x_{2}-1}}{(x_{1} -x_{2}-1)!}-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{\!x}}\oint_{\gamma_{\! w}}\frac{F_{\lambda}(z)}{F_{\lambda}(w)}\,\frac{\Gamma(w-x_{1}+1)}{\Gamma(z-x_{2}+1)} \,\frac{(1-t_{2})^{z-x_{2}}\,(1-t_{1})^{-w+x_{1}-1}}{z-w}\,\mathrm{d}w\, \mathrm{d}z\\ =-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{\!x}}\oint_{\gamma _{\!x}}\frac{F_{\lambda}(z)}{F_{\lambda}(w)}\,\frac{\Gamma(w-x_{1}+1)}{\Gamma( z-x_{2}+1)}\,\frac{(1-t_{2})^{z-x_{2}}\,(1-t_{1})^{-w+x_{1}-1}}{z-w}\,\mathrm{d}w\, \mathrm{d}z\,. \tag{32}\] Proof.: To simplify, we can assume that \(\gamma_{w}^{\prime}=\gamma_{w}\). Fixing some \(w\in\gamma_{w}\), we can deform \(\gamma_{z}\) into \(\gamma_{z}^{\prime}\), crossing uniquely the pole \(z=w\). Hence, for \(w\in\gamma_{w}\), by residue theorem we have that \[\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{z}}\frac{F_{\lambda}( z)}{F_{\lambda}(w)}\,\frac{\Gamma(w-x_{1}+1)}{\Gamma(z-x_{2}+1)}\,\frac{(1-t_{2})^{z- x_{2}}\,(1-t_{1})^{-w+x_{1}-1}}{z-w}\,\mathrm{d}w\,\mathrm{d}z\\ -\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{z}^{\prime}}\frac{F_ {\lambda}(z)}{F_{\lambda}(w)}\,\frac{\Gamma(w-x_{1}+1)}{\Gamma(z-x_{2}+1)}\, \frac{(1-t_{2})^{z-x_{2}}\,(1-t_{1})^{-w+x_{1}-1}}{z-w}\,\mathrm{d}w\,\mathrm{ d}z\\ =\frac{1}{2\mathrm{i}\pi}\frac{\Gamma(w-x_{1}+1)}{\Gamma(w-x_{2}+ 1)}\,(1-t_{2})^{w-x_{2}}\,(1-t_{1})^{-w+x_{1}-1} \tag{33}\] We now integrate the right-hand side over the close contour \(\gamma_{w}\). In the case \(x_{1}\leq x_{2}\), we have \[\frac{\Gamma(w-x_{1}+1)}{\Gamma(w-x_{2}+1)}=(w-x_{1})(w-x_{1}-1)\cdots(w-x_{2} +1),\] and the right-hand side of (33) is an entire function. Hence integrating over \(\gamma_{w}\) gives \(0\). 
On the contrary when \(x_{1}>x_{2}\), we have \[\frac{\Gamma(w-x_{1}+1)}{\Gamma(w-x_{2}+1)}=\frac{1}{(w-x_{2})(w-x_{2}-1) \cdots(w-x_{1}+1)}.\] In this case, the RHS of (33) is a meromorphic function of \(w\) with simple poles in \(x_{1}-1-i\), for \(i\in\{0,1,\ldots,x_{1}-x_{2}-1\}\). Bringing both cases together, we have \[\frac{1}{2\mathrm{i}\pi}\oint_{\gamma_{w}}\frac{\Gamma(w-x_{1}+1) }{\Gamma(w-x_{2}+1)} \,(1-t_{2})^{w-x_{2}}\,(1-t_{1})^{-w+x_{1}-1}\,\mathrm{d}w\] \[=\mathbf{1}_{x_{1}>x_{2}}\sum_{i=0}^{x_{1}-x_{2}-1}\frac{(1-t_{2} )^{(x_{1}-1-i)-x_{2}}\,(1-t_{1})^{-(x_{1}-1-i)+x_{1}-1}}{((x_{1}-1-i)-x_{2}) \cdots 1\cdot(-1)\cdots(-i)}\] \[=\frac{\mathbf{1}_{x_{1}>x_{2}}}{(x_{1}-x_{2}-1)!}\sum_{i=0}^{x_ {1}-x_{2}-1}\binom{x_{1}-x_{2}-1}{i}(1-t_{2})^{x_{1}-x_{2}-1-i}(t_{1}-1)^{i}\] \[=\mathbf{1}_{x_{1}>x_{2}}\frac{(t_{1}-t_{2})^{x_{1}-x_{2}-1}}{(x_ {1}-x_{2}-1)!}.\qed\] We, therefore, have the following simplified (and final) representation of the correlation kernel: \[K_{\lambda}((x_{1},t_{1}),(x_{2},t_{2}))=-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint _{\gamma_{z}}\oint_{\gamma_{w}}\frac{F_{\lambda}(z)}{F_{\lambda}(w)}\,\frac{ \Gamma(w-x_{1}+1)}{\Gamma(z-x_{2}+1)}\,\frac{(1-t_{2})^{z-x_{2}}\,(1-t_{1})^{- w+x_{1}-1}}{z-w}\,\mathrm{d}w\,\mathrm{d}z\,, \tag{34}\] where the double contour integral runs over counterclockwise paths \(\gamma_{w}\) and \(\gamma_{z}\) such that * \(\gamma_{w}\) is inside (resp. outside) \(\gamma_{z}\) if \(t_{1}\geq t_{2}\) (resp. \(t_{1}<t_{2}\)); * \(\gamma_{w}\) and \(\gamma_{z}\) contain all the integers in \([-\ell(\lambda),x_{1}-1]=[a_{0},x_{1}-1]\) and in \([x_{2},\lambda_{1}-1]=[x_{2},a_{m}-1]\) respectively; * the ratio \(\frac{1}{z-w}\) remains uniformly bounded. Moreover, the integrand has simple poles for \(w\) at \(\mathbb{Z}_{\leq x_{1}-1}\setminus\{\,\lambda_{i}-i\mid i\geq 1\,\}\subseteq[-\ell( \lambda),x_{1}-1]\), simple poles for \(z\) at \(\{\,\lambda_{i}-i\mid i\geq 1\,\}\cap\mathbb{Z}_{\geq x_{2}}\subseteq[x_{2}, \lambda_{1}-1]\), and a simple pole for \(z=w\). ### A topology for bead processes We discuss in this section a (standard) notion of convergence for bead processes, or more generally for determinantal point processes. The expert reader may consider skipping this section. A bead process can be naturally interpreted as a random locally finite counting measure on the complete and separable metric space \(\mathcal{X}:=\mathbb{Z}\times\mathbb{R}\), i.e. as a random variable in the space of locally finite counting measures on \(\mathcal{X}\), denoted by \(\mathcal{M}^{\#}_{\mathcal{X}}\). We endow \(\mathcal{M}^{\#}_{\mathcal{X}}\) with the \(\sigma\)-algebra \(\mathcal{F}_{\mathcal{X}}\) generated by the cylinder sets \[C^{n}_{B}=\Set{\mu\in\mathcal{M}^{\#}_{\mathcal{X}}\Bigm{|}\mu(B)=n}, \tag{35}\] defined for all \(n\in\mathbb{N}\) and all Borel subsets \(B\) of \(\mathcal{X}\). The elements of this \(\sigma\)-algebra are the Borel subsets for the weak topology on the space \((\mathcal{M}^{\#}_{\mathcal{X}},\mathcal{F}_{\mathcal{X}})\) of locally finite counting measures on \(\mathcal{X}\). The convergent sequences for the weak topology are defined as follows: if \((\mu_{n})_{n}\) is a sequence in \(\mathcal{M}^{\#}_{\mathcal{X}}\) and \(\mu\in\mathcal{M}^{\#}_{\mathcal{X}}\), then \[\mu_{n}\xrightarrow{w}\mu\qquad\text{if}\qquad\int_{\mathcal{X}}f(x)\,\mathrm{ d}\mu_{n}\left(x\right)\to\int_{\mathcal{X}}f(x)\,\mathrm{d}\mu\left(x\right),\] for all bounded continuous functions \(f\) on \(\mathcal{X}\) with bounded support. 
This topology makes \(\mathcal{M}^{\#}_{\mathcal{X}}\) a Polish space, which eases the manipulation of sequences of random measures in \(\mathcal{M}^{\#}_{\mathcal{X}}\). In particular, by [11, Theorem 11.1.VII], if \((\nu_{n})_{n}\) is a sequence of random measures in \(\mathcal{M}^{\#}_{\mathcal{X}}\) and \(\nu\) is another random measure in \(\mathcal{M}^{\#}_{\mathcal{X}}\), then \(\nu_{n}\xrightarrow{d}\nu\) w.r.t. the weak topology if for all \(k\geq 1\), \[(\nu_{n}(B_{i}))_{1\leq i\leq k}\xrightarrow{d}(\nu(B_{i}))_{1\leq i\leq k}, \tag{36}\] for all collections \((B_{i})_{1\leq i\leq k}\) of bounded Borel \(\nu\)-continuity subsets7 of \(\mathcal{X}\). Since both \(\nu_{n}\) and \(\nu\) are random counting measures, the condition in (36) is equivalent to requiring that for all \(k\geq 1\), Footnote 7: Recall that a bounded Borel subset of \(\mathcal{X}\) is a _continuity subset_ for a random measure \(\nu\) in \(\mathcal{M}^{\#}_{\mathcal{X}}\) if \(\nu(\partial B)=0\) almost surely. \[\mathbb{P}(\nu_{n}(B_{i})=m_{i},\forall 1\leq i\leq k)\to\mathbb{P}(\nu(B_{i}) =m_{i},\forall 1\leq i\leq k), \tag{37}\] for all collections \((B_{i})_{1\leq i\leq k}\) of bounded Borel \(\nu\)-continuity subsets of \(\mathcal{X}\) and all integer-valued vectors \((m_{i})_{1\leq i\leq k}\in\left(\mathbb{Z}_{\geq 0}\right)^{k}\). In particular, we highlight for later scopes the following result. **Proposition 21**.: _Let \((\nu_{n})_{n}\) be a sequence of random measures in \(\mathcal{M}^{\#}_{\mathcal{X}}\) and \(\nu\) is another random measure in \(\mathcal{M}^{\#}_{\mathcal{X}}\). If \(\nu_{n}\xrightarrow{d}\nu\) w.r.t. the weak topology, then_ \[\mathbb{P}(\nu_{n}\in A)\to\mathbb{P}(\nu\in A)\] _for all sets \(A\in\mathcal{F}_{\mathcal{X}}\) which can be written as a countable union/intersection/complementation of cylinder sets \(C^{n}_{B}\) such that \(B\) is a bounded Borel \(\nu\)-continuity subset of \(\mathcal{X}\)._ Recalling that a bead process is a determinantal point process on \(\mathcal{X}=\mathbb{Z}\times\mathbb{R}\), we now look at the specific case where \(\nu\) and \(\nu_{n}\) are determinantal point process on \(\mathcal{X}\). Let \(J=J\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}\) and \(J_{n}=J_{n}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}\) be the kernels of \(\nu\) and \(\nu_{n}\) respectively. 
We also introduce the following generating function for the determinantal point process \(\nu\) (and analogously for the determinantal point processes \(\nu_{n}\)): for \(z=(z_{1},\ldots,z_{k})\) and \(B=(B_{1},\ldots,B_{k})\), we set \[Q^{\nu}_{B}(z):=\sum_{\ell\in\mathbb{N}^{k}}\frac{(-z)^{\ell}}{\ell!}\int_{B^{ \ell}}\det\Big{(}\big{[}J\big{(}(x_{i},t_{i}),(x_{j},t_{j})\big{)}\big{]}_{1 \leq i,j\leq|\ell|}\Big{)}\,\mathrm{d}\lambda^{\otimes|\ell|}\,\big{(}(x_{1},t _{1}),\ldots,(x_{|\ell|},t_{|\ell|})\big{)},\] where \(\lambda\) denotes the product measure of the counting measure on \(\mathbb{Z}\) and the Lebesgue measure on \(\mathbb{R}\), and where we also used the following multi-index notation \[|\ell|=\sum_{i=1}^{k}\ell_{i},\qquad\ell!=\prod_{i=1}^{k}\ell_{i}!,\qquad B^{ \ell}=B_{1}^{\ell_{1}}\times\cdots\times B_{k}^{\ell_{k}},\qquad z^{\ell}=z_{1} ^{\ell_{1}}\cdots z_{k}^{\ell_{k}}.\] With some simple manipulations (see for instance [14, Equation 3.4]), one can show that for \(m=(m_{1},\ldots,m_{k})\in(\mathbb{Z}_{\geq 0})^{k}\), it holds that \[\mathbb{P}\big{(}\nu(B_{i})=m_{i},\forall 1\leq i\leq k\big{)}=\frac{(-1)^{|m|}}{m!}\left.\frac{\partial^{m}}{\partial z^{m}}\,Q_{B}^{\nu}(z)\right|_{z=(1, \ldots,1)},\] where \(\frac{\partial^{m}}{\partial z^{m}}\) stands for the differential operator \(\frac{\partial^{m_{1}}}{(\partial z_{1})^{m_{1}}}\cdots\frac{\partial^{m_{k}}} {(\partial z_{k})^{m_{k}}}\) and \(|m|=\sum_{i=1}^{k}m_{i}\). Therefore, thanks to the latter expression, the condition in (37) is equivalent to require that for all \(k\geq 1\), \[\frac{\partial^{m}}{\partial z^{m}}\,Q_{B}^{\nu_{n}}(z)\bigg{|}_{z=(1,\ldots, 1)}\xrightarrow{n\to\infty}\left.\frac{\partial^{m}}{\partial z^{m}}\,Q_{B}^ {\nu}(z)\right|_{z=(1,\ldots,1)} \tag{38}\] for all collections \((B_{i})_{1\leq i\leq k}\) of bounded Borel \(\nu\)-continuity subsets of \(\mathcal{X}\) and all integer-valued vectors \((m_{i})_{1\leq i\leq k}\in(\mathbb{Z}_{\geq 0})^{k}\). We have the following useful result. **Proposition 22**.: _If the kernel \(J=J\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}\) is locally bounded and \(J_{n}\) converges locally uniformly to \(J\), then \(\nu_{n}\xrightarrow{d}\nu\) w.r.t. the weak topology._ Proof.: Thanks to our previous discussions it is enough to show that the condition in (38) holds. Fix a collection \(B=(B_{i})_{1\leq i\leq k}\) of bounded Borel \(\nu\)-continuity subsets of \(\mathcal{X}\), and \(\ell\in\mathbb{N}^{k}\). Since \(B^{\ell}\) is relatively compact, by using the Hadamard inequality and the locally uniform convergence of \(J_{n}\) towards the locally bounded kernel \(J\), we see that for all \((x_{i},t_{i})_{1\leq i\leq|\ell|}\in B^{\ell}\), \[\Big{|}\text{det}\left(\big{[}J_{n}\big{(}(x_{i},t_{i}),(x_{j},t_{j})\big{)} \big{]}_{1\leq i,j\leq|\ell|}\right)\Big{|}\leq\prod_{j=1}^{|\ell|}\left(\sum _{i=1}^{|\ell|}J_{n}\big{(}(x_{i},t_{i}),(x_{j},t_{j})\big{)}^{2}\right)^{ \frac{1}{2}}\leq\Big{(}C\sqrt{|\ell|}\Big{)}^{|\ell|}\] for some positive constant \(C\). 
Since \(J_{n}\) converges pointwise to \(J\), for every fixed \((x_{i},t_{i})_{1\leq i\leq|\ell|}\in B^{\ell}\), \[\text{det}\left(\big{[}J_{n}\big{(}(x_{i},t_{i}),(x_{j},t_{j})\big{)}\big{]}_{ 1\leq i,j\leq|\ell|}\right)\to\text{det}\left(\big{[}J\big{(}(x_{i},t_{i}),(x _{j},t_{j})\big{)}\big{]}_{1\leq i,j\leq|\ell|}\right),\] and by dominated convergence, we get \[\int_{B^{\ell}}\text{det}\left(\big{[}J_{n}\big{(}(x_{i},t_{i}),( x_{j},t_{j})\big{)}\big{]}_{1\leq i,j\leq|\ell|}\right)\text{d}\lambda^{ \otimes|\ell|}\left((x_{1},t_{1}),\ldots,(x_{|\ell|},t_{|\ell|})\right)\\ \to\int_{B^{\ell}}\text{det}\left(\big{[}J\big{(}(x_{i},t_{i}),( x_{j},t_{j})\big{)}\big{]}_{1\leq i,j\leq|\ell|}\right)\text{d}\lambda^{ \otimes|\ell|}\left((x_{1},t_{1}),\ldots,(x_{|\ell|},t_{|\ell|})\right). \tag{39}\] It remains to prove that (39) implies the condition in (38). Thanks to the estimate given above, the term with label \(\ell\) in the series \(Q_{B}^{\nu_{n}}(z)\) or \(Q_{B}^{\nu}(z)\) is bounded from above by \[\frac{|z|^{\ell}\,|\lambda(B)|^{\ell}}{\ell!}(C\sqrt{|\ell|})^{|\ell|}\leq \left(\frac{\|z\|_{\infty}\,\|\lambda(B)\|_{\infty}\,C\,\text{ke}}{\sqrt{|\ell|} }\right)^{|\ell|}.\] Here we used the fact that for any \(k\)-tuple \(\ell=(\ell_{1},\ldots,\ell_{k})\), \(\ell!\geq(\Gamma(\frac{|\ell|}{k}+1))^{k}\geq(\frac{|\ell|}{\text{ke}})^{|\ell|}\); see also the discussion below [14, Lemma 3.1]. Therefore, \(Q_{B}^{\nu_{n}}(z)\) and \(Q_{B}^{\nu}(z)\) are entire functions of \(z\), and by dominated convergence of the terms of the series, \(Q_{B}^{\nu_{n}}(z)\) converges locally uniformly on \(\mathbb{C}^{k}\) towards \(Q_{B}^{\nu}(z)\). By Cauchy's differentiation formula, this implies the (locally uniform) convergence of all the partial derivatives, whence the general condition (38). We finish this section by recalling a standard observation in the theory of determinantal point processes. Assume that \(K_{1}\) and \(K_{2}\) are two kernels on a set \(X\) differing by a _conjugation factor_, i.e. such that there exists a function \(g\) on \(X\) satisfying \[K_{1}(x,y)=\frac{g(x)}{g(y)}K_{2}(x,y),\quad\text{for all }x,y\in X.\] Then, for all \(x_{1},\ldots,x_{k}\) in \(X\), we have \[\det\left(K_{1}(x_{i},x_{j})\right)_{1\leq i,j\leq k}=\frac{\prod_{i=1}^{k}g(x_{i} )}{\prod_{j=1}^{k}g(x_{j})}\det\left(K_{2}(x_{i},x_{j})\right)_{1\leq i,j\leq k }=\det\left(K_{2}(x_{i},x_{j})\right)_{1\leq i,j\leq k}.\] Therefore the two kernels induce the same correlation functions (of any order) and therefore the associated point processes are equal in distribution. ## 3. Identifying the critical points of the action ### The kernel of the renormalized bead process We start our analysis by looking at the random bead process \(M_{\lambda_{N}}\) introduced in (3) in a window of size \(O(1)\times O(1/\sqrt{N})\) around a fixed point \((x_{0}\sqrt{N},t_{0})\) with \((x_{0},t_{0})\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). 
In particular, in (22) we introduced the renormalized bead process

\[\widetilde{M}^{(x_{0},t_{0})}_{\lambda_{N}}=\Set{(x,t)\in\mathbb{Z}\times\mathbb{R}\Bigm{|}\left(x_{0}\sqrt{N}+x,t_{0}+\frac{t}{\sqrt{N}}\right)\in M_{\lambda_{N}}}.\]

Using simple properties of determinantal point processes, we know that \(\widetilde{M}^{(x_{0},t_{0})}_{\lambda_{N}}\) is a determinantal point process with correlation kernel

\[\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}((x_{1},t_{1}),(x_{2},t_{2})):=\frac{1}{\sqrt{N}}\,K_{\lambda_{N}}\left(\left(x_{0}\sqrt{N}+x_{1},t_{0}+\frac{t_{1}}{\sqrt{N}}\right),\left(x_{0}\sqrt{N}+x_{2},t_{0}+\frac{t_{2}}{\sqrt{N}}\right)\right). \tag{40}\]

We will use the double integral expression for \(K_{\lambda_{N}}\) given in (34). The integration contours are different in the cases \(t_{1}\geq t_{2}\) and \(t_{1}<t_{2}\). Unless stated otherwise, the expressions below are valid for \(t_{1}\geq t_{2}\), and we indicate only when necessary the modifications needed for \(t_{1}<t_{2}\). Performing the change of variables \((w,z)=(\sqrt{N}(W+x_{0}),\,\sqrt{N}(Z+x_{0}))\), we have

\[\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}((x_{1},t_{1}),(x_{2},t_{2}))=-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{Z}}\oint_{\gamma_{W}}\frac{F_{\lambda_{N}}(\sqrt{N}(Z+x_{0}))}{F_{\lambda_{N}}(\sqrt{N}(W+x_{0}))}\frac{\Gamma(\sqrt{N}W-x_{1}+1)}{\Gamma(\sqrt{N}Z-x_{2}+1)}\\ \cdot\frac{(1-\widetilde{t}_{2})^{\sqrt{N}Z-x_{2}}\,(1-\widetilde{t}_{1})^{-\sqrt{N}W+x_{1}-1}}{(Z-W)}\,\mathrm{d}W\,\mathrm{d}Z\,, \tag{41}\]

where \(\widetilde{t}_{1}=t_{0}+\frac{t_{1}}{\sqrt{N}}\) and \(\widetilde{t}_{2}=t_{0}+\frac{t_{2}}{\sqrt{N}}\). With the new variables \(W\) and \(Z\), recalling the comments below (34), we get that the poles of the integrand in (41) occur

* for \(W=Z\);
* for some values of \(W\) in an interval of the form \(I_{W}:=[\frac{\eta\,a_{0}\sqrt{N}}{\sqrt{N}}-x_{0},\frac{x_{1}-1}{\sqrt{N}}]=[\eta\,a_{0}-x_{0},o(1)]\) (referred to as \(W\)-poles below);
* for some values of \(Z\) in an interval \(I_{Z}:=[\frac{x_{2}}{\sqrt{N}},\frac{\eta\,a_{m}\sqrt{N}}{\sqrt{N}}-x_{0}]=[o(1),\eta\,a_{m}-x_{0}]\) (referred to as \(Z\)-poles below).

The integration contours \(\gamma_{W}\) and \(\gamma_{Z}\) above should be such that \(\gamma_{W}\) is inside \(\gamma_{Z}\), and they should contain \(I_{W}\) and \(I_{Z}\) respectively. Setting \(L=\eta\max(|a_{0}|,a_{m})\) and recalling that \(x_{0}\) is in \([\eta\,a_{0},\eta\,a_{m}]\), we have

\[(\eta\,a_{0}-x_{0}),(\eta\,a_{m}-x_{0})\in[-2L,2L], \tag{42}\]

so that, for \(N\) large enough, we can take (for instance) \(\gamma_{W}=\partial D(0,3L)\) and \(\gamma_{Z}=\partial D(0,4L)\) (both followed in counterclockwise order). For convenience, we will often write \(\mathrm{Int}_{N}(W,Z)\) for the integrand in (41).

### Asymptotic analysis of the integrand

Our next goal is to write the double integral in (41) under an exponential form using asymptotic approximations. In the following, for two sequences \(A=(A_{N})\) and \(B=(B_{N})\), we write \(A\simeq B\) when \(A=B(1+O(N^{-1/2}))\). Moreover, unless stated otherwise, when \(A\) and \(B\) depend on \(Z\) and \(W\), the estimates are uniform for \(Z\) and \(W\) lying in the integration contours \(\gamma_{W}=\partial D(0,3L)\) and \(\gamma_{Z}=\partial D(0,4L)\) (we recall that \(x_{0}\), \(x_{1}\), \(x_{2}\), and \(t_{0}\), \(t_{1}\), \(t_{2}\) are fixed).
First, since \(\widetilde{t}_{1}=t_{0}+\frac{t_{1}}{\sqrt{N}}\) and \(\widetilde{t}_{2}=t_{0}+\frac{t_{2}}{\sqrt{N}}\), we have that

\[(1-\widetilde{t}_{2})^{\sqrt{N}Z-x_{2}}=(1-t_{0})^{\sqrt{N}Z-x_{2}}\left(1-\frac{t_{2}}{(1-t_{0})\sqrt{N}}\right)^{\sqrt{N}Z-x_{2}}\simeq(1-t_{0})^{-x_{2}}\mathrm{e}^{\sqrt{N}Z\log(1-t_{0})}\mathrm{e}^{-\frac{Zt_{2}}{1-t_{0}}};\]
\[(1-\widetilde{t}_{1})^{-\sqrt{N}W+x_{1}-1}\simeq(1-t_{0})^{x_{1}-1}\mathrm{e}^{-\sqrt{N}W\log(1-t_{0})}\,\mathrm{e}^{\frac{Wt_{1}}{1-t_{0}}}. \tag{43}\]

We now estimate the quotients involving \(F_{\lambda_{N}}\) and \(\Gamma\) in (41). In what follows we use the principal determination of the logarithm, defined and continuous on \(\mathbb{C}\setminus\mathbb{R}_{-}\). We also use the convention that, for negative real \(x\), we have \(\log(x)=\log(-x)+\mathrm{i}\pi\). In particular, \(\mathfrak{R}\log(x)\) is continuous on \(\mathbb{C}\setminus\{0\}\).

The following function \(S_{1}:\mathbb{C}\to\mathbb{C}\) will play a crucial role in our analysis (see Lemma 23 and Eq. (46) below):

\[S_{1}(U):=-g(U)+\sum_{i=0}^{m}g(U+x_{0}-\eta\,a_{i})-\sum_{i=1}^{m}g(U+x_{0}-\eta\,b_{i}),\quad\text{ where }\quad g(U)=U\log(U). \tag{44}\]

Let us also introduce a specific subset \(D_{S}\) of the complex plane. An interval of the form \([\eta\,a_{i-1}-x_{0},\eta b_{i}-x_{0}]\) or \([\eta\,b_{i}-x_{0},\eta a_{i}-x_{0}]\) for some \(1\leq i\leq m\) will be called _negative_ (resp. _positive_) if it is included in \((-\infty,0]\) (resp. \([0,+\infty)\)). Then we define \(D_{S}\) as the complex plane \(\mathbb{C}\) from which we remove the following closed real intervals:

* the negative intervals of the form \([\eta\,a_{i-1}-x_{0},\eta b_{i}-x_{0}]\);
* the positive intervals of the form \([\eta\,b_{i}-x_{0},\eta a_{i}-x_{0}]\);
* either \([\eta\,a_{i_{0}-1}-x_{0},0]\) if \(\eta\,a_{i_{0}-1}-x_{0}<0\leq\eta\,b_{i_{0}}-x_{0}\) for some \(i_{0}\), or \([0,\eta\,a_{i_{0}}-x_{0}]\) if \(\eta\,b_{i_{0}}-x_{0}\leq 0<\eta\,a_{i_{0}}-x_{0}\) for some \(i_{0}\).

Finally, following e.g. [1], we introduce the Cauchy transform of the _transition measure_ of the renormalized diagram \(\eta\lambda^{0}\):

\[G(U):=\frac{\prod_{i=1}^{m}(U-\eta\,b_{i})}{\prod_{i=0}^{m}(U-\eta\,a_{i})}.\]

**Lemma 23**.: _Fix an integer \(x\in\mathbb{Z}\). As \(N\to+\infty\), the following approximation holds, uniformly for \(U\) in compact subsets of \(D_{S}\):_

\[\frac{F_{\lambda_{N}}(\sqrt{N}(U+x_{0}))}{\Gamma(\sqrt{N}U-x+1)}\simeq\mathrm{e}^{x_{0}(g(\sqrt{N})-\sqrt{N})}\,\mathrm{e}^{S_{1}(U)\sqrt{N}}\left(U\sqrt{N}\right)^{x}\left(U\cdot G(U+x_{0})\right)^{-\frac{1}{2}}. \tag{45}\]

_Moreover, writing \(\mathrm{LHS}(U)\) and \(\mathrm{RHS}(U)\) for the left-hand and right-hand sides of the previous estimate, for any \(\varepsilon>0\), we have the uniform bound \(\frac{|\mathrm{LHS}(U)|}{|\mathrm{RHS}(U)|}=O\left(\exp\!\left(\varepsilon\pi\sqrt{N}\right)\right)\) for \(U\) in compact subsets of_

\[\{V:\mathfrak{R}(V)<0\text{ and }|\mathfrak{I}(V)|\leq\varepsilon\}\setminus\{\eta\,a_{0}-x_{0},\eta\,b_{1}-x_{0},\ldots,\eta\,b_{m}-x_{0},\eta\,a_{m}-x_{0}\}.\]

_Similarly, for any \(\varepsilon>0\), we have the uniform bound \(\frac{|\mathrm{RHS}(U)|}{|\mathrm{LHS}(U)|}=O\left(\exp\!\left(\varepsilon\pi\sqrt{N}\right)\right)\) for \(U\) in compact subsets of_

\[\{V:\mathfrak{R}(V)>0\text{ and }|\mathfrak{I}(V)|\leq\varepsilon\}\setminus\{\eta\,a_{0}-x_{0},\eta\,b_{1}-x_{0},\ldots,\eta\,b_{m}-x_{0},\eta\,a_{m}-x_{0}\}.\]

Before proving the lemma, let us discuss some implications.
Recall that we write \(\mathrm{Int}_{N}(W,Z)\) for the integrand in (41). Bringing together the estimates in Eqs. (43) and (45), we get that uniformly for \(Z\) and \(W\) in compact subsets of \(D_{S}\),

\[\mathrm{Int}_{N}(W,Z)\simeq(\sqrt{N})^{x_{2}-x_{1}}\mathrm{e}^{\sqrt{N}(S(W)-S(Z))}\,h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z), \tag{46}\]

where

\[S(U):=-S_{1}(U)-U\log(1-t_{0})\stackrel{{\eqref{eq:S1}}}{{=}}g(U)-U\log(1-t_{0})-\sum_{i=0}^{m}g(x_{0}-\eta\,a_{i}+U)+\sum_{i=1}^{m}g(x_{0}-\eta\,b_{i}+U);\]
\[h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z):=\frac{\mathrm{e}^{\frac{Wt_{1}-Zt_{2}}{1-t_{0}}}\,(1-t_{0})^{x_{1}-x_{2}-1}}{W^{x_{1}}Z^{-x_{2}}(Z-W)}\,\left(\frac{W\,G(x_{0}+W)}{Z\,G(x_{0}+Z)}\right)^{\frac{1}{2}}. \tag{47}\]

We note for future reference that, as \(|U|\) tends to infinity, we have

\[S(U)=|\log(1-t_{0})|\,U+O(\log(|U|)). \tag{48}\]

Indeed, the terms of order \(\Theta(U\log U)\) cancel out.

Proof of Lemma 23.: From (29) we have that

\[F_{\lambda_{N}}\big{(}\sqrt{N}(U+x_{0})\big{)}=\frac{\prod_{i=0}^{m}\Gamma\big{(}(U+x_{0}-\eta\,a_{i})\sqrt{N}+1\big{)}}{\prod_{i=1}^{m}\Gamma\big{(}(U+x_{0}-\eta\,b_{i})\sqrt{N}+1\big{)}}.\]

We use Stirling's approximation for the \(\Gamma\) function:

\[\Gamma(U+1)=\mathrm{e}^{U\log U-U}\,\sqrt{2\pi U}\,(1+O(U^{-1})); \tag{49}\]

this approximation is uniform for \(\mathrm{Arg}(U)\) in a compact sub-interval of \((-\pi,\pi)\). Then a straightforward computation shows that (45) holds uniformly for \(U\) in compact subsets of \(\mathbb{C}\setminus(-\infty,\eta\,a_{m}-x_{0}]\).

We need to extend this estimate to compact subsets of \(D_{S}\). For this we will prove that (45) holds on compact subsets of \(I+\mathrm{i}\mathbb{R}\), where \(I\) is one interval among: the interval \((-\infty,\eta\,a_{0}-x_{0})\); the negative intervals \((\eta\,b_{i}-x_{0},\eta a_{i}-x_{0})\); the positive intervals \((\eta\,a_{i-1}-x_{0},\eta b_{i}-x_{0})\); and either \((0,\eta\,b_{i_{0}}-x_{0})\) or \((\eta\,b_{i_{0}}-x_{0},0)\). We will treat the case where \(I\) is a negative interval \((\eta\,b_{i}-x_{0},\eta a_{i}-x_{0})\), the other cases being similar. The idea is to use Euler's reflection formula \(\Gamma(U)\,\Gamma(1-U)=\frac{\pi}{\sin(\pi U)}\) to get rid of \(\Gamma\) functions applied to negative real numbers. For \(U\in I+\mathrm{i}\mathbb{R}\), we have

\[\frac{F_{\lambda_{N}}(\sqrt{N}(U+x_{0}))}{\Gamma(\sqrt{N}U-x+1)}=(-1)^{(x_{0}+\eta(b_{i+1}+\dots+b_{m}-a_{i}-\dots-a_{m}))\sqrt{N}+x}\,\Gamma\Big{(}x-\sqrt{N}U\Big{)}\\ \frac{\prod_{j=0}^{i-1}\Gamma\big{(}(U+x_{0}-\eta\,a_{j})\sqrt{N}+1\big{)}}{\prod_{j=1}^{i}\Gamma\big{(}(U+x_{0}-\eta\,b_{j})\sqrt{N}+1\big{)}}\,\frac{\prod_{j=i+1}^{m}\Gamma\Big{(}-(U+x_{0}-\eta\,b_{j})\sqrt{N}\Big{)}}{\prod_{j=i}^{m}\Gamma\Big{(}-(U+x_{0}-\eta\,a_{j})\sqrt{N}\Big{)}};\]

the sign comes from the quotient of the \(\sin(\pi U)\) factors in Euler's reflection formula, noting that we have applied the reflection formula as many times in the numerator as in the denominator, and that the arguments of the various \(\Gamma\) functions differ by integer values. For \(U\in I+\mathrm{i}\mathbb{R}\), all arguments of \(\Gamma\) functions in the above formula have positive real part.
Thus we can apply Stirling's formula (49) and we find that, uniformly on compact subsets of \(I+\mathrm{i}\mathbb{R}\),

\[\frac{F_{\lambda_{N}}(\sqrt{N}(U+x_{0}))}{\Gamma(\sqrt{N}U-x+1)}\simeq(-1)^{(x_{0}+\eta(b_{i+1}+\dots+b_{m}-a_{i}-\dots-a_{m}))\sqrt{N}+x}\\ \mathrm{e}^{x_{0}(g(\sqrt{N})-\sqrt{N})}\,\mathrm{e}^{S_{2}(U)\sqrt{N}}\left(-U\sqrt{N}\right)^{x}\left(UG(U+x_{0})\right)^{-\frac{1}{2}}, \tag{50}\]

where

\[S_{2}(U)=g(-U)+\sum_{j=0}^{i-1}g(U+x_{0}-\eta\,a_{j})-\sum_{j=1}^{i}g(U+x_{0}-\eta\,b_{j})\\ -\sum_{j=i}^{m}g(-U-x_{0}+\eta\,a_{j})+\sum_{j=i+1}^{m}g(-U-x_{0}+\eta\,b_{j}).\]

For \(\mathfrak{R}(U)<0\), we have \(g(-U)=-g(U)+\mathrm{i}\pi U\) if \(\mathfrak{I}(U)\geq 0\) and \(g(-U)=-g(U)-\mathrm{i}\pi U\) if \(\mathfrak{I}(U)<0\). Hence, comparing the latter displayed equation with (44), we have for \(U\in I+\mathrm{i}\mathbb{R}\) that (recall that \(I\) is a negative interval):

\[S_{2}(U)=\begin{cases}S_{1}(U)+\mathrm{i}\pi(-x_{0}+\eta(a_{i}+\ldots+a_{m}-b_{i+1}-\ldots-b_{m}))&\text{ if }\mathfrak{I}(U)\geq 0,\\ S_{1}(U)-\mathrm{i}\pi(-x_{0}+\eta(a_{i}+\ldots+a_{m}-b_{i+1}-\ldots-b_{m}))&\text{ if }\mathfrak{I}(U)<0.\end{cases}\]

In particular, recalling that \(x_{0}\sqrt{N}\) and \(\eta\sqrt{N}\) are integers, we obtain

\[\mathrm{e}^{S_{2}(U)\sqrt{N}}=(-1)^{(-x_{0}+\eta(a_{i}+\ldots+a_{m}-b_{i+1}-\ldots-b_{m}))\sqrt{N}}\,\mathrm{e}^{S_{1}(U)\sqrt{N}}.\]

Consequently, signs cancel out in (50) and we get that (45) is also valid uniformly on compact subsets of \(I+\mathrm{i}\mathbb{R}\). Arguing similarly for the other intervals \(I\) listed above, we conclude that (45) is valid uniformly on compact subsets of \(D_{S}\).

We now prove the bound \(\frac{|\mathrm{LHS}(U)|}{|\mathrm{RHS}(U)|}=O\left(\exp\Bigl{(}\varepsilon\pi\sqrt{N}\Bigr{)}\right)\) on compact subsets of

\[\{V:\mathfrak{R}(V)<0\text{ and }|\mathfrak{I}(V)|<\varepsilon\}\setminus\{\eta\,a_{0}-x_{0},\eta\,b_{1}-x_{0},\ldots,\eta\,b_{m}-x_{0},\eta\,a_{m}-x_{0}\}.\]

Thanks to the first part of the lemma, it only remains to prove that this quantity is bounded on compact subsets of \(I+\mathrm{i}[-\varepsilon,\varepsilon]\), for any negative interval \(I\) of the form \((\eta\,a_{i-1}-x_{0},\eta b_{i}-x_{0})\). Let us take \(U\) in \(I+\mathrm{i}[-\varepsilon,\varepsilon]\). Using Euler's reflection formula (once more in the denominator than in the numerator), we have

\[|\,\mathrm{LHS}(U)|=\frac{|F_{\lambda_{N}}(\sqrt{N}(U+x_{0}))|}{|\Gamma(\sqrt{N}U-x+1)|}=\frac{\Bigl{|}\sin\Bigl{(}-\pi\sqrt{N}(U+x_{0}-\eta b_{i})\Bigr{)}\Bigr{|}}{\pi}\Bigl{|}\Gamma\Bigl{(}x-\sqrt{N}U\Bigr{)}\Bigr{|}\\ \cdot\frac{\prod_{j=0}^{i-1}\big{|}\Gamma\bigl{(}(U+x_{0}-\eta\,a_{j})\sqrt{N}+1\bigr{)}\big{|}}{\prod_{j=1}^{i-1}\big{|}\Gamma\bigl{(}(U+x_{0}-\eta\,b_{j})\sqrt{N}+1\bigr{)}\big{|}}\,\frac{\prod_{j=i}^{m}\big{|}\Gamma\Bigl{(}-(U+x_{0}-\eta\,b_{j})\sqrt{N}\Bigr{)}\big{|}}{\prod_{j=i}^{m}\big{|}\Gamma\Bigl{(}-(U+x_{0}-\eta\,a_{j})\sqrt{N}\Bigr{)}\big{|}}.\]

Using the trivial bound \(|\sin(z)|\leq\exp(|\mathfrak{I}(z)|)\), we get that \(\Bigl{|}\sin\Bigl{(}-\pi\sqrt{N}(U+x_{0}-\eta b_{i})\Bigr{)}\Bigr{|}\leq\exp\Bigl{(}\varepsilon\pi\sqrt{N}\Bigr{)}\) for any \(U\) with \(|\mathfrak{I}(U)|\leq\varepsilon\). Moreover, the gamma factors can be controlled as above using Stirling's formula (all their arguments have positive real parts for \(U\) in \(I+\mathrm{i}[-\varepsilon,\varepsilon]\)). This proves the second statement in the lemma.
For the third statement, we need to work on rectangles \(I+\mathrm{i}[-\varepsilon,\varepsilon]\), where \(I\) is a positive interval of the form \(I=(\eta b_{i}-x_{0},\eta a_{i}-x_{0})\). Using again the reflection formula to get rid of the \(\Gamma\) functions applied to arguments with negative real parts yields: \[|\,\mathrm{LHS}(U)|=\frac{|F_{\lambda_{N}}(\sqrt{N}(U+x_{0}))|}{| \Gamma(\sqrt{N}U-x+1)|}=\frac{\pi}{\Bigl{|}\sin\Bigl{(}-\pi\sqrt{N}(U+x_{0}- \eta a_{i})\Bigr{)}\Bigr{|}}\frac{1}{|\Gamma(\sqrt{N}U-x+1)|}\\ \cdot\frac{\prod_{j=0}^{i-1}\big{|}\Gamma\bigl{(}(U+x_{0}-\eta\,a_ {j})\sqrt{N}+1\bigr{)}\big{|}}{\prod_{j=1}^{i}\big{|}\Gamma\bigl{(}(U+x_{0}- \eta\,b_{j})\sqrt{N}+1\bigr{)}\big{|}}\frac{\prod_{j=i+1}^{m}\big{|}\Gamma \Bigl{(}-(U+x_{0}-\eta\,b_{j})\sqrt{N}\Bigr{)}\big{|}}{\prod_{j=i}^{m}\big{|} \Gamma\Bigl{(}-(U+x_{0}-\eta\,a_{j})\sqrt{N}\Bigr{)}\big{|}}.\] Note that, this time, we applied the reflection formula once more in the numerator than in the denominator, yielding an extra sine term at the denominator. The bound \(\frac{|\mathrm{RHS}(U)|}{|\mathrm{LHS}(U)|}=O\left(\exp\Bigl{(}\varepsilon\pi \sqrt{N}\Bigr{)}\right)\) is then obtained bounding once again the sine term via the inequality \(|\sin(z)|\leq\exp(|\mathfrak{I}(z)|)\), and using Stirling formula for the \(\Gamma\) factors. ### The critical points of the action and the shape of the liquid region The function \(S\) in the estimate in (46) is called the _action_ of the model. In order to perform a saddle point analysis of the double integral, we look for the critical points of the action, i.e., complex solutions of the critical equation \(\frac{\partial S}{\partial U}(U)=0\). Here is an analogue of [14, Proposition 7.6] in our setting. Recall that \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and \(t_{0}\in[0,1]\). In the next analysis we exclude the cases \(x_{0}=\eta\,a_{i}\) for \(i\in\{0,1,\ldots,m\}\); these cases will be treated later in Remark 26. **Lemma 24**.: _Let \(i_{0}\geq 1\) be such that \(x_{0}\in(\eta\,a_{i_{0}-1},\eta\,a_{i_{0}})\). Then for each \(i\) in \(\{1,\ldots,i_{0}-1\}\), the critical equation \(\frac{\partial S}{\partial U}(U)=0\) has at least one real root in the interval \([\eta\,b_{i}-x_{0},\eta\,a_{i}-x_{0})\), while, for each \(i\) in \(\{i_{0}+1,\ldots,m\}\), it has at least one real root in \((\eta\,a_{i-1}-x_{0},\eta\,b_{i}-x_{0}]\)._ Proof.: The critical equation writes as \[\log U-\log(1-t_{0})-\sum_{i=0}^{m}\log(x_{0}-\eta\,a_{i}+U)+\sum_{i=1}^{m} \log(x_{0}-\eta\,b_{i}+U)=0, \tag{51}\] and taking the exponential, we recover the critical equation (8) from the introduction, that is \[U\,\prod_{i=1}^{m}(x_{0}-\eta\,b_{i}+U)=(1-t_{0})\,\prod_{i=0}^{m}(x_{0}-\eta \,a_{i}+U). \tag{52}\] If \(t_{0}=1\) then the lemma statement immediately follows, hence we assume that \(t_{0}\in[0,1)\) (so that the factor \((1-t_{0})\) is non-zero). The lemma is then easily obtained by looking at the signs of the left and right-hand sides \[L(U):=U\prod_{i=1}^{m}(x_{0}-\eta\,b_{i}+U)\qquad\text{and}\qquad R(U):=(1-t_ {0})\,\prod_{i=0}^{m}(x_{0}-\eta\,a_{i}+U) \tag{53}\] at the boundary points of the intervals introduced above. 
For instance, if \(1\leq i\leq i_{0}-1\), then * \(L(\eta\,b_{i}-x_{0})=0\) and \(R(\eta\,b_{i}-x_{0})\) is non-zero and has sign \((-1)^{m-i+1}\); * \(L(\eta\,a_{i}-x_{0})\) is non-zero and has sign \((-1)^{m-i+1}\) (because \(0\in(\eta\,a_{i_{0}-1}-x_{0},\eta\,a_{i_{0}}-x_{0})\)) and \(R(\eta\,a_{i}-x_{0})=0\); so the difference \(L(U)-R(U)\) has to vanish between these two boundary points. The lemma locates \(m-1\) solutions out of the \(m+1\) solutions of the polynomial critical equation (8) (see (52) for a closer reference). The two extra solutions are either both real or non-real complex conjugate. Note that if \(t_{0}=0\) then the polynomial equation (8) is of degree \(m\) and so all the \(m\) solutions are real, while if \(t_{0}=1\) then the polynomial equation (8) has clearly only real solutions; we exclude these trivial cases from the next discussion. We recall from Definition 3 that the liquid region \(L\) is the set of pairs \((x_{0},t_{0})\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) such that the critical equation (8) has exactly two non-real solutions. The following proposition gives us some information about the shape of the liquid region, and the localization of the critical points (in addition to those identified in Lemma 24). **Proposition 25**.: _Fix \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\). As above, let \(i_{0}\geq 1\) be such that \(x_{0}\in(\eta\,a_{i_{0}-1},\eta\,a_{i_{0}})\) (note that we do not know how \(\eta\,b_{i_{0}}\) compares with \(x_{0}\)). Then the following assertions hold:_ * _There exists_ \(t_{-}=t_{-}(x_{0})\geq 0\) _such that the critical equation (_8_) has two real solutions (counted with multiplicities) outside the interval_ \((\eta\,a_{0}-x_{0},\eta\,a_{m}-x_{0})\) _if and only if_ \(t_{0}\leq t_{-}\)_. Moreover,_ \(t_{-}(x_{0})=0\) _if and only if_ \(x_{0}=0\)_. For_ \(x_{0}<0\)_, these solutions are in_ \((-\infty,\eta\,a_{0}-x_{0})\)_, while for_ \(x_{0}>0\)_, they lie in_ \((\eta\,a_{m}-x_{0},+\infty)\)_._ * _There exists_ \(t_{+}=t_{+}(x_{0})\leq 1\) _such that the critical equation (_8_) has two real solutions (counted with multiplicities) inside the interval_ \((\eta\,a_{i_{0}-1}-x_{0},\eta\,a_{i_{0}}-x_{0})\) _if and only if_ \(t_{0}\geq t_{+}\) _If this is the case, these solutions are inside the sub-interval_ \((0\wedge(\eta\,b_{i_{0}}-x_{0}),0\vee(\eta\,b_{i_{0}}-x_{0}))\)_. Moreover,_ \(t_{+}(x_{0})=1\) _if and only if_ \(x_{0}=\eta\,b_{i_{0}}\)_._ * _Finally, if_ \(t_{0}\in(t_{-}(x_{0}),t_{+}(x_{0}))\) _and the critical equation (_8_) has only real solutions, then the two extra real solutions_8 _(counted with multiplicities) are either both inside a negative interval_ \((\eta\,b_{j_{0}}-x_{0},\eta\,a_{j_{0}}-x_{0})\) _for some_ \(j_{0}<i_{0}\)_, or both inside a positive interval_ \((\eta\,a_{j_{0}}-x_{0},\eta\,b_{j_{0}+1}-x_{0})\) _for some_ \(j_{0}\geq i_{0}\)_._ Footnote 8: in addition to the ones determined by Lemma 24. We call the regions \(\{0\leq t_{0}\leq t_{-}(x_{0})\}\), \(\{t_{+}(x_{0})\leq t_{0}\leq 1\}\) and \(\{t_{-}(x_{0})<t_{0}<t_{+}(x_{0})\}\) the _small_\(t\), _large_\(t\), and _intermediate_\(t\) regions, respectively. By Definition 3, the liquid region is included in the intermediate \(t\) region, but the converse is not true; i.e. it might happen that \(t_{0}\) is in the intermediate \(t\) region, but not in the liquid region, which corresponds to the discontinuity phenomenon discussed in the introduction in Section 1.3.2. An illustrative examples of the various results obtained in this section is given in Fig. 10. 
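Before turning to the proof, the root counts described by Lemma 24, Proposition 25 and Definition 3 can be explored numerically on a toy profile. The following sketch is purely illustrative: the profile \((a_{i})\), \((b_{i})\) and the sampled points \((x_{0},t_{0})\) are our own choices and do not come from the text.

```python
import numpy as np

# Hypothetical interlacing profile a_0 < b_1 < a_1 < b_2 < a_2 with sum(a) = sum(b); eta = 1.
a = [-2.0, 0.0, 2.0]
b = [-1.0, 1.0]
eta = 1.0

def critical_poly(x0, t0):
    # P_{x,t}(U) = U * prod_i (x0 - eta*b_i + U) - (1 - t0) * prod_i (x0 - eta*a_i + U), cf. (8)/(52).
    left = np.poly1d([1.0, 0.0])                 # the factor U
    for bi in b:
        left *= np.poly1d([1.0, x0 - eta * bi])
    right = np.poly1d([1.0 - t0])
    for ai in a:
        right *= np.poly1d([1.0, x0 - eta * ai])
    return left - right

for x0, t0 in [(0.5, 0.5), (0.5, 0.02), (1.9, 0.5)]:
    roots = critical_poly(x0, t0).roots
    n_nonreal = int(np.sum(np.abs(roots.imag) > 1e-9))
    # By Definition 3, (x0, t0) lies in the liquid region iff exactly two of these roots are non-real.
    print(f"x0={x0}, t0={t0}: non-real roots = {n_nonreal}, roots = {np.round(roots, 3)}")
```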
Proof of Proposition 25.: Recall that we can restrict ourselves to the case \(t_{0}\in(0,1)\). We subdivide the proof into four main steps. _Step 1: We fix \(x_{0}>0\) and show that there exists \(t_{-}^{1}(x_{0})>0\) such that the critical equation (8) has two real solutions (counted with multiplicities) in the interval \((\eta\,a_{m}-x_{0},+\infty)\) if and only if \(t_{0}\leq t_{-}^{1}(x_{0})\), and no real solutions in this interval otherwise._ First note that, as a consequence of Lemma 24, the critical equation (8) cannot have more than two real solutions on this interval. Now, recall from (53) the functions \(L(U)=L(U,x_{0})\) and \(R(U)=R(U,t_{0})\) characterizing the left and right-hand sides of the critical equation (8), i.e. \[L(U)=U\,\prod_{i=1}^{m}(x_{0}-\eta\,b_{i}+U)\qquad\text{and}\qquad R(U)=(1-t_{ 0})\,\prod_{i=0}^{m}(x_{0}-\eta\,a_{i}+U).\] For \(U\) tending to \(\eta\,a_{m}-x_{0}\) from the right, \(L(U)\) stays positive while \(R(U)\) tends to \(0\), so that \(L(U)-R(U)\) is positive. For \(U\) tending to \(+\infty\), the difference \(L(U)-R(U)\) is asymptotically equivalent to \(t_{0}U^{m+1}\), and hence is also positive (since \(t_{0}>0\)). Thus \(L(U)-R(U)\) has two zeroes (counted with multiplicities) in the interval \((\eta\,a_{m}-x_{0},+\infty)\) if and only if \[\theta(t_{0}):=\inf_{U\geq\eta\,a_{m}-x_{0}}\left\{L(U)-R(U,t_{0})\right\}, \qquad t_{0}\in[0,1],\] is non-positive; and \(L(U)-R(U,t_{0})\) has no zeroes in this interval otherwise. Since \(\theta(t_{0})\) is continuous and non-decreasing in \(t_{0}\), if we show that \(\theta(t_{0})\) is non-positive for \(t_{0}\) small enough, this would complete the proof of Step 1. Note that, as \(U\) tends to \(+\infty\), we have that \[L(U) =U^{m+1}+\left(mx_{0}-\eta\sum_{i=1}^{m}b_{i}\right)U^{m}+O(U^{m- 1}),\] \[R(U,0) =U^{m+1}+\left((m+1)x_{0}-\eta\sum_{i=0}^{m}a_{i}\right)U^{m}+O(U ^{m-1}).\] Using the identity \(\sum_{i=0}^{m}a_{i}=\sum_{i=1}^{m}b_{i}\) in (2), this implies that when \(x_{0}>0\), \(L(U)-R(U,0)\) tends to \(-\infty\) as \(U\) goes to \(+\infty\). This implies that \(\theta(t_{0})<0\) for \(t_{0}\) small enough, and conclude the proof of Step 1. _Step 2: We complete the proof of the first item in the proposition statement._ By symmetry, if \(x_{0}<0\) there exists \(t_{-}^{2}(x_{0})>0\) such that the critical equation has two zeroes (counted with multiplicities) in the interval \((-\infty,\eta\,a_{0}-x_{0})\) if and only if \(t_{0}\leq t_{-}^{2}(x_{0})\), and no real solutions in this interval otherwise. From Lemma 24, the critical equation (8) cannot have at the same time two solutions in the intervals \((-\infty,\eta\,a_{0}-x_{0})\) and \((\eta\,a_{m}-x_{0},+\infty)\). Therefore \(t_{-}^{1}(x_{0})=0\) for \(x_{0}<0\) and \(t_{-}^{2}(x_{0})=0\) for \(x_{0}>0\). Setting \(t_{-}(x_{0})=t_{-}^{1}(x_{0})\) for \(x_{0}>0\) and \(t_{-}(x_{0})=t_{-}^{2}(x_{0})\) for \(x_{0}<0\), we have proven the first item in the Proposition statement for \(x_{0}\neq 0\). The case \(x_{0}=0\) follows from continuity arguments. _Step 3: We complete the proof of the second item in the proposition statement._ To fix ideas and notation, let us suppose that \[\eta\,a_{i_{0}-1}-x_{0}<0\leq\eta\,b_{i_{0}}-x_{0}<\eta\,a_{i_{0}}-x_{0};\] the other case \(\eta\,b_{i_{0}}-x_{0}<0\) being symmetric; while the case \(\eta\,a_{i_{0}}-x_{0}=0\) needs minor adjustments discussed below. 
The left-hand side \(L(U)\) vanishes only at the edges of the interval \([0,\eta\,b_{i_{0}}-x_{0}]\), while the right-hand side \(R(U)\) has sign \((-1)^{m-i_{0}+1}\). Thus the difference \(L(U)-R(U)\) has two zeroes on \([0,\eta\,b_{i_{0}}-x_{0}]\) if and only if \[\theta(t_{0}):=\sup_{0\leq U\leq\eta\,b_{i_{0}}-x_{0}}\left\{(-1)^{m-i_{0}+1}( L(U)-R(U,t_{0}))\right\}\] is non-negative, and no zeroes otherwise (it cannot have more than two zeroes, as a consequence of Lemma 24). The quantity \(\theta(t_{0})\) is increasing with \(t_{0}\) and \(\theta(1)=\sup_{0\leq U\leq\eta\,b_{i_{0}}-x_{0}}(-1)^{m-i_{0}+1}L(U)\geq 0\), which implies the existence of \(t_{+}(x_{0})\leq 1\) as in the proposition statement. If \(x_{0}=\eta\,b_{i_{0}}\), then, for all \(t_{0}<1\), one has \[\theta(t_{0})=(-1)^{m-i_{0}+1}(L(0)-R(0))=-(1-t_{0})|R(0)|<0,\] proving \(t_{+}(x_{0})=1\). On the other hand, if \(x_{0}\neq\eta\,b_{i_{0}}\) one has \(\theta(t_{0})>0\) as soon as \[1-t_{0}<\frac{\sup_{0\leq U\leq\eta\,b_{i_{0}}-x_{0}}|L(U)|}{\sup_{0\leq U\leq \eta\,b_{i_{0}}-x_{0}}|R(U)|},\] proving that \(t_{+}(x_{0})<1\). _Step 4: We complete the proof of the third item in the proposition statement._ Note that since \(t_{0}\in(t_{-}(x_{0}),t_{+}(x_{0}))\), as a consequence of the first two items in the proposition statement, there are no real solutions outside the interval \((\eta\,a_{0}-x_{0},\eta\,a_{m}-x_{0})\) and inside the interval \((\eta\,a_{i_{0}-1}-x_{0},\eta\,a_{i_{0}}-x_{0})\). Combining this observation with the results from Lemma 24, we conclude the proof of this last item. **Remark 26**.: _As already mentioned, the case where \(x_{0}=\eta\,a_{i_{0}}\) is not considered in the above results, because it needs some adjustment. In this case, the factor \(U\) in \(L(U)\) cancels out with the factor \((x_{0}-\eta\,a_{i_{0}}+U)\) in \(R(U)\) and the critical equation (8) (see (52) for a closer reference) has degree only \(m\). An analogue of Lemma 24 states that the critical equation (8) has at least one root in each interval \((\eta\,b_{i}-x_{0},\eta\,a_{i}-x_{0})\) for \(i\in\{1,\ldots,i_{0}-1\}\) and in each interval \((\eta\,a_{i-1}-x_{0},\eta\,b_{i}-x_{0})\) for \(i\in\{i_{0}+2,\ldots,m\}\)._ _If \(i_{0}\notin\{0,m\}\), this locates \(m-2\) roots out of the \(m\) roots of the equation. As in the generic case, the location of the two last roots is partially described by an analogue of Proposition 25. The first item of Proposition 25 holds true in the case \(x_{0}=\eta\,a_{i_{0}}\) without any modification. For the second item, it holds that the critical equation (8) has two real solutions (counted with multiplicities) inside the interval \((\eta\,b_{i_{0}}-x_{0},\eta\,b_{i_{0}+1}-x_{0})\) if and only if \(t_{0}\) is at least equal to some \(t_{+}(x_{0})\). The proof is a simple adaptation of that of Proposition 25._ _Finally, if \(x_{0}=\eta\,a_{0}\) (resp. \(x_{0}=\eta\,a_{m}\)), the critical equation (8) has degree \(m\), and has at least one root in each interval \((\eta\,a_{i-1}-x_{0},\eta\,b_{i}-x_{0})\) for \(i\geq 2\) (resp. \((\eta\,b_{i}-x_{0},\eta\,a_{i}-x_{0})\) for \(i\leq m-1\)). In both case, it has at least \(m-1\) real roots, and hence cannot have complex roots. The vertical lines \(\{\eta\,a_{0}\}\times[0,1]\) and \(\{\eta\,a_{m}\}\times[0,1]\) thus entirely lie in the frozen region, as well as the horizontal lines \([\eta\,a_{0},\eta\,a_{m}]\times\{0\}\) and \([\eta\,a_{0},\eta\,a_{m}]\times\{1\}\) (recall the discussion above Proposition 25)._ #### 3.3.1. 
The shape of the liquid region We conclude this section proving Proposition 19 from the introduction, and giving an alternative description of the liquid region (see Proposition 27 below). These results have all rather standard proofs, which we include for the sake of completeness. Proof of Proposition 19.: Recall that \(U_{c}(x,t)\) is the unique solution with positive imaginary part of the critical equation \(\frac{\partial S}{\partial U}(U)=0\), where \(S(U)\) is as in (47). Setting9\(\widetilde{U}_{c}(x,t):=U_{c}(x,t)+x\), we get that \(\widetilde{U}_{c}(x,t)\) solves the equation Footnote 9: This substitution is not strictly needed here but makes the computations nicer and also useful for the proof of Proposition 27. \[\log\Bigl{(}\widetilde{U}_{c}-x\Bigr{)}-\log(1-t)-\sum_{i=0}^{m}\log\Bigl{(} \widetilde{U}_{c}-\eta\,a_{i}\Bigr{)}+\sum_{i=1}^{m}\log\Bigl{(}\widetilde{U}_ {c}-\eta\,b_{i}\Bigr{)}=0. \tag{54}\] Differentiating in \(x\) and \(t\) the above equation, we obtain that \[\begin{cases}\frac{-1}{\widetilde{U}_{c}-x}=(\widetilde{U}_{c})_{x}\cdot \bigl{(}\Sigma(\widetilde{U}_{c})-\frac{1}{\widetilde{U}_{c}-x}\bigr{)},\\ \frac{1}{1-t}=(\widetilde{U}_{c})_{t}\cdot\bigl{(}\Sigma(\widetilde{U}_{c})- \frac{1}{\widetilde{U}_{c}-x}\bigr{)}.\end{cases}\] where \(\Sigma(s)=\sum_{i=0}^{m}\frac{1}{s-\eta\,a_{i}}-\sum_{i=1}^{m}\frac{1}{s-\eta\,b _{i}}\). The quantity \(\Sigma(\widetilde{U}_{c})-\frac{1}{\widetilde{U}_{c}-x}\) corresponds to \(\frac{\partial^{2}S}{\partial U^{2}}\), and is thus nonzero in the liquid region; otherwise \(U_{c}\) would be a double root of the critical equation, which is impossible since the latter has at most \(2\) non-real conjugate solutions, counted with multiplicities (Lemma 24). Consequently, \(\widetilde{U}_{c}(x,t)\) solves the equation \[-\frac{(\widetilde{U}_{c})_{t}}{\widetilde{U}_{c}-x}=\frac{(\widetilde{U}_{c})_ {x}}{1-t}.\] Substituting \(U_{c}(x,t)=\widetilde{U}_{c}(x,t)-x\) in the equation above, we get (23). **Proposition 27**.: _We have the following equivalent description of the liquid region \(L\):_ \[L=\left\{(x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\,|\,\mathrm{Disc}_{U}(P _{x,t})<0\right\},\] _where \(\mathrm{Disc}_{U}(P_{x,t})\) denotes the discriminant10 of the polynomial \(P_{x,t}(U)\) appearing in the critical equation (8). As a consequence, \(L\) is an open subset of \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) and the boundary of \(L\), called frozen boundary curve, is given by_ Footnote 10: Since \(L\) is defined in terms of the sign of the discriminant, let us recall the standard convention of normalisation of the discriminant: \(\mathrm{Disc}_{U}(P_{x,t})=(-1)^{\frac{m(m+1)}{2}}t^{-1}\,\mathrm{Res}_{U}(P_{x,t},P_{x,t}^{\prime})\), where \(\mathrm{Res}\) is the resultant. \[\partial L=\left\{(x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\,|\,\mathrm{ Disc}_{U}(P_{x,t})=0\right\}. \tag{55}\] _Moreover, the frozen boundary curve 0L can be parametrized by \(-\infty<s<\infty\), as follows:_ \[\begin{cases}x(s)=s-\frac{1}{\Sigma(s)},\\ t(s)=1-\frac{G(s)}{\Sigma(s)},\end{cases}\] _where \(\Sigma(s):=\sum_{i=0}^{m}\frac{1}{s-\eta\,a_{i}}-\sum_{i=1}^{m}\frac{1}{s-\eta \,b_{i}}\) and \(G(s):=\frac{\prod_{i=1}^{m}(s-\eta\,b_{i})}{\prod_{i=0}^{m}(s-\eta\,a_{i})}\). 
The tangent vector \((\dot{x}(s),\dot{t}(s))\) to the frozen boundary curve is parametrized by:_

\[\begin{cases}\dot{x}(s)=1+\frac{\dot{\Sigma}(s)}{\Sigma(s)^{2}},\\ \dot{t}(s)=G(s)\left(1+\frac{\dot{\Sigma}(s)}{\Sigma(s)^{2}}\right).\end{cases}\]

_In particular, it has slope \(\frac{\dot{t}(s)}{\dot{x}(s)}=G(s)\)._

Proof of Proposition 27.: Recall that for a polynomial with real coefficients, its discriminant is positive (resp. zero) if and only if the number of non-real roots is a multiple of \(4\) (resp. the polynomial has a multiple root). Since Lemma 24 shows that at least \(m-1\) roots of the critical equation (8) are simple and real, we conclude that the existence of two non-real solutions for the critical equation (8) is equivalent to \(\mathrm{Disc}_{U}(P_{x,t})<0\), where we recall that \(\mathrm{Disc}_{U}(P_{x,t})\) denotes the discriminant of the polynomial \(P_{x,t}(U)\) appearing in the critical equation (8). \(\mathrm{Disc}_{U}(P_{x,t})\) is itself a polynomial in \((x,t)\), which implies that the liquid region \(L\) is an open subset of \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) and its boundary is described by (55).

We now turn to the proof of the claimed parametrization for the frozen boundary curve. Note that, if a point \((x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) approaches the frozen boundary curve \(\partial L\), the solution \(U_{c}(x,t)\) of the critical equation becomes real and merges with \(\overline{U_{c}(x,t)}\). Equivalently, the frozen boundary can be characterized as the set of points \((x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\) such that the action \(S\) in (47) has a double critical point. For computational reasons, it is convenient to take \(\widetilde{U}_{c}(x,t)=U_{c}(x,t)+x\) as a parameter to describe \(\partial L\), and express \(x\) and \(t\) in terms of \(\widetilde{U}_{c}(x,t)\). Hence, we need to impose that \(\widetilde{U}_{c}(x,t)\) solves the equation \(\frac{\partial\widetilde{S}}{\partial U}(U)=0\) (written in (54)) and the equation \(\frac{\partial^{2}\widetilde{S}}{\partial U^{2}}(U)=0\). We obtain:

\[\begin{cases}\frac{\widetilde{U}_{c}-x}{1-t}=(G(\widetilde{U}_{c}))^{-1},\\ \frac{1}{\widetilde{U}_{c}-x}=\Sigma(\widetilde{U}_{c}).\end{cases}\]

Solving the above linear system for \(x\) and \(t\), we get the parametrization claimed in the proposition statement (with \(s=\widetilde{U}_{c}\)). The claims for the tangent vector follow from standard computations, noting that \(\dot{G}(s)=-\Sigma(s)\cdot G(s)\).

### The imaginary and real part of the action on the real line

The imaginary and real part of the action \(S\) (introduced in (47)) on the real line play a key role in our analysis in the next sections. Hence we analyze them here. In what follows, we keep working with the principal determination of the logarithm, defined on \(\mathbb{C}\setminus\{0\}\) and continuous on \(\mathbb{C}\setminus\mathbb{R}_{-}\), with the convention that, for negative real \(x\), we have \(\log(x)=\log(-x)+\mathrm{i}\pi\). For \(u\in\mathbb{R}\), we have

\[\mathfrak{I}S(u)=-\pi u^{-}+\sum_{i=0}^{m}\pi(u+x_{0}-\eta\,a_{i})^{-}-\sum_{i=1}^{m}\pi(u+x_{0}-\eta\,b_{i})^{-}, \tag{56}\]

where \(x^{-}=\max(0,-x)\). In particular, \(\mathfrak{I}S(u)\) is piecewise affine, has value \(-\pi x_{0}\) for \(u<\eta\,a_{0}-x_{0}\) (recall from (2) that \(\sum_{i=0}^{m}a_{i}-\sum_{i=1}^{m}b_{i}=0\)), has slope alternatively \(+\pi\) and \(0\) for \(u<0\), then alternatively \(0\) or \(-\pi\) for \(u>0\), and finally takes value \(0\) for \(u>\eta\,a_{m}-x_{0}\). See Fig. 11 for an example.
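The piecewise-affine behaviour of \(\mathfrak{I}S\) on the real line is straightforward to reproduce numerically; the short sketch below uses the same toy profile as in the earlier sketch (an illustrative choice of ours, not taken from the text).

```python
import numpy as np

a = [-2.0, 0.0, 2.0]
b = [-1.0, 1.0]
eta = 1.0
x0 = 0.5                      # our illustrative choice

def neg_part(x):
    # x^- = max(0, -x)
    return np.maximum(0.0, -x)

def imag_S(u):
    # Imaginary part of the action on the real line, Eq. (56).
    val = -np.pi * neg_part(u)
    for ai in a:
        val += np.pi * neg_part(u + x0 - eta * ai)
    for bi in b:
        val -= np.pi * neg_part(u + x0 - eta * bi)
    return val

u = np.linspace(-5.0, 5.0, 2001)
v = imag_S(u)
print("value far left  :", v[0] / np.pi, "* pi   (expected: -x0 * pi)")
print("value far right :", v[-1] / np.pi, "* pi   (expected: 0)")
print("slopes (in units of pi):", sorted(set(np.round(np.diff(v) / np.diff(u) / np.pi, 6).tolist())))
```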
On the other hand, for \(u\in\mathbb{R}\), we have \[\mathfrak{R}S(u)=\widetilde{g}(u)-u\cdot\ln(|1-t_{0}|)-\sum_{i=0}^{m} \widetilde{g}(x_{0}-\eta\,a_{i}+u)+\sum_{i=1}^{m}\widetilde{g}(x_{0}-\eta\,b_{ i}+u), \tag{57}\] where \(\widetilde{g}(u)=u\cdot\ln(|u|)\) and we use the convention that \(\widetilde{g}(0)=0\). See again Fig. 11 for an example. In particular, the map \(u\to\mathfrak{R}S(u)\) is well-defined and continuous on the real line. Moreover, since \(\widetilde{g}^{\prime}(u)=\ln(|u|)+1\), the map \(u\to\mathfrak{R}S(u)\) is differentiable as a function \(\mathbb{R}\to\mathbb{R}\) except at the points \(\eta\,a_{i}-x_{0}\) (where it has a positive infinite slope) and at the points \(0\) and \(\eta\,b_{i}-x_{0}\) (where it has a negative infinite slope). Its derivative vanishes exactly when \(u\) satisfies \[|u|\prod_{i=1}^{m}|u+x_{0}-\eta\,b_{i}|=(1-t_{0})\prod_{i=0}^{m}|u+x_{0}-\eta \,a_{i}|,\] i.e. when \(u\) satisfies the critical equation (8) or the companion equation \[u\prod_{i=1}^{m}(u+x_{0}-\eta\,b_{i})=-(1-t_{0})\prod_{i=0}^{m}(u+x_{0}-\eta \,a_{i}). \tag{58}\] ## 4. Asymptotics for the kernel in the liquid region Within this section, we assume that \((x_{0},t_{0})\) lies inside the liquid region, i.e. that the critical equation (8) has two non-real solutions, denoted by \(U_{c}\) and \(\overline{U_{c}}\) and with the convention that \(\mathfrak{I}U_{c}>0\). ### Landscape of the action As a preparation for the saddle point analysis, we also need to understand to some extent the real part \(\mathfrak{R}S(U)\) of the action \(S\) introduced in Eq. (47). In particular, we are interested in the shape of the region \[\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}:=\{\,U\in\mathbb{C}\mid\mathfrak{R}S (U)>\mathfrak{R}S(U_{c})\,\}\,.\] We will use similar notation for various regions below. We invite the reader to compare the following discussion with the pictures in Fig. 12. Since \(S\) is analytic around \(U_{c}\) (and more generally on \(\mathbb{C}\setminus(-\infty,\eta\,a_{m}-x_{0})\)) and \(U_{c}\) is a simple critical point of \(S\), we have that, locally around \(U_{c}\) \[S(U)=S(U_{c})+\tfrac{S^{\prime\prime}(U_{c})}{2}\,(U-U_{c})^{2}+O((U-U_{c})^{ 3}).\] In particular,11 there are _eight special curves_ leaving \(U_{c}\) (see for instance the left-hand side of Fig. 12): four curves corresponding to the level sets \(\{\mathfrak{R}S(U)=\mathfrak{R}S(U_{c})\}\) and four other curves corresponding to the level sets \(\{\mathfrak{I}S(U)=\mathfrak{I}S(U_{c})\}\). We will refer to these curves as _real_ or _imaginary level lines_, respectively. The four real level lines split the neighborhood of \(U_{c}\) into four regions, belonging alternatively to the set \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\) or \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\), each of this region containing one imaginary level line. Since \(S\) is analytic with a non-zero derivative on \(\{\,V\mid V\neq U_{c},\,\mathfrak{I}(V)>0\,\}\), locally around such \(V\) with \(\mathfrak{R}S(V)=\mathfrak{R}S(U_{c})\) (resp. with \(\mathfrak{I}S(V)=\mathfrak{I}S(U_{c})\)), the level set \(\{\mathfrak{R}S(U)=\mathfrak{R}S(U_{c})\}\) (resp. the level set \(\{\mathfrak{I}S(U)=\mathfrak{I}S(U_{c})\}\)) looks like a single simple curve. Therefore the real and imaginary level lines leaving \(U_{c}\) go either to the real line, or to infinity. Moreover, these lines cannot cross in the closed upper half-plane. 
Indeed, following a real level line, the analyticity of \(S\) implies that \(U\mapsto\mathfrak{I}S(U)\) is strictly monotone: a local extremum would violate the open mapping theorem. Therefore starting from \(U_{c}\) and following a real level line in the set \(\{\mathfrak{R}S(u)=\mathfrak{R}S(U_{c})\}\), we never reach a point \(V\) with \(\mathfrak{I}S(V)=\mathfrak{I}S(U_{c})\). **Lemma 28**.: _Exactly three of the real level lines leaving \(U_{c}\) go to the real line._ Proof.: Since \(S(U)\sim|\log(1-t_{0})|\,U\) as \(|U|\) tends to infinity (as already remarked in (48)), we have that \(\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\) when \(\mathfrak{R}U\) goes to \(-\infty\) with a fixed imaginary part. Likewise, \(\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\) when \(\mathfrak{R}U\) goes to \(+\infty\) with a fixed imaginary part. Therefore, there is an odd number of real level lines going to the real line. Assume that there is only one. Then three real level lines are going to infinity. Because of the asymptotics \(S(U)\sim|\log(1-t_{0})|\,U\), real level lines going to infinity should be asymptotically inside the region \(\{\,y\geq|\,x|\,\}:=\{\,x+iy\mid y\geq|x|\,\}\) (otherwise we would have \(\lim_{|U|\to+\infty}\mathfrak{R}S(U)=+\infty\) and this would be in contradiction with the definition of real level line). These three real lines determine two unbounded regions, each of them containing an imaginary level line. In particular, these two imaginary level lines go to infinity, inside the region \(\{\,y\geq|\,x|\,\}\). But inside this region, we have that \(\lim_{|U|\to+\infty}\mathfrak{I}S(U)=+\infty\) since \(S(U)\sim|\log(1-t_{0})|\,U\). This is in contradiction with the definition of imaginary level line. We, therefore, conclude that there are exactly three of the real level lines leaving \(U_{c}\) and going to the real line. Figure 11. **Top:** Two plots of the function \(\mathfrak{I}S(u)\) in (56) in the specific example of Fig. 10. On the left we set \(x_{0}=-0.9\), while on the right \(x_{0}=1\). **Bottom:** Two plots of the function \(\mathfrak{R}S(u)\) in (57) in the same specific example of Fig. 10. Again, on the left we set \(x_{0}=-0.9\) and \(t_{0}=0.5\), while on the right \(x_{0}=1\) and \(t_{0}=0.5\). We call \(A\leq B\leq C\) the intersection points of these three real level lines and the real axis. Also, let \(D\) and \(E\) be the intersection points of the imaginary level lines in between these three level lines and the real axis. Since real and imaginary level lines do not intersect, we have \(A<D<B<E<C\). **Lemma 29**.: _With the above notation, \(D<0<E\). Moreover there is no real \(x\) in \((-\infty,A)\cup(C,+\infty)\) such that \(\mathfrak{I}S(x)=\mathfrak{I}S(U_{c})\)._ Proof.: Since \(D\) and \(E\) belong to an imaginary level line leaving \(U_{c}\), and since \(\mathfrak{I}S(u)\) is continuous on the closed upper half-plane, we have \(\mathfrak{I}S(D)=\mathfrak{I}S(E)=\mathfrak{I}S(U_{c})\). Furthermore, \(\mathfrak{I}S(U_{c})\) is distinct from the values \(\mathfrak{I}S(A)\), \(\mathfrak{I}S(B)\), \(\mathfrak{I}S(C)\) since, as explained above, \(\mathfrak{I}S\) is strictly monotone on the real level lines going from \(U_{c}\) to \(A\), \(B\) or \(C\). Looking at the graph of the function \(u\mapsto\mathfrak{I}S(u)\) on the real line (see the computations in Section 3.4), we see that a necessary condition to have \(\mathfrak{I}S(D)=\mathfrak{I}S(E)\neq\mathfrak{I}S(B)\) is that \(D<0<E\). 
Moreover, looking again at the shape of \(\mathfrak{I}S(U)\) and recalling that \[\mathfrak{I}S(A)\neq\mathfrak{I}S(D)=\mathfrak{I}S(U_{c})=\mathfrak{I}S(E)\neq \mathfrak{I}S(C),\] we see that there cannot be an \(x\) in either \((-\infty,A)\) or \((C,+\infty)\) such that \(\mathfrak{I}S(x)=\mathfrak{I}S(U_{c})\). As said above, we are interested in describing the regions \(\{\mathfrak{R}S(u)>\mathfrak{R}S(U_{c})\}\) or \(\{\mathfrak{R}S(u)<\mathfrak{R}S(U_{c})\}\). From the asymptotics \(S(U)\sim|\log(1-t_{0})|\,U\), we know that for \(\mathfrak{R}(U)\) negative (resp. positive) and large in absolute value (in particular larger than \(K|\mathfrak{I}(U)|\) for come constant \(K>0\)), \(U\) belongs to the region \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\) (resp. \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\)). Thus, the real level lines leaving \(U_{c}\) split the complex plane as shown on the left-hand side of Fig. 12. Note that we do not exclude the existence of smaller islands inside the main regions as shown on the right-hand side of Fig. 12. For the sake of simplicity, we will not draw such potential islands in future figures. ### Moving contours and asymptotic of the kernel We recall from (41) that the renormalized kernel \(\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}\) writes as a double contour integral over contours \(\gamma_{Z}\) and \(\gamma_{W}\) (where the integrand is denoted by \(\operatorname{Int}_{N}(W,Z)\)). Our strategy for the asymptotic analysis of \(\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}\) goes as follows: 1. Move the integration contours from \(\gamma_{Z}\) and \(\gamma_{W}\) to properly chosen contours \(\gamma_{Z}^{\text{new}}\) and \(\gamma_{W}^{\text{new}}\) given in Lemma 30; 2. Replace the integrand \(\operatorname{Int}_{N}(W,Z)\) by the asymptotically equivalent expression given in (46): \[(\sqrt{N})^{x_{2}-x_{1}}\mathrm{e}^{\sqrt{N}(S(W)-S(Z))}\,h^{(x_{1},t_{1})}_{ (x_{2},t_{2})}(W,Z).\] This second step is performed in Proposition 31. The contours \(\gamma_{Z}^{\text{new}}\) and \(\gamma_{W}^{\text{new}}\) will be constructed in such a way that, for almost all \((W,Z)\in\gamma_{W}^{\text{new}}\times\gamma_{Z}^{\text{new}}\), one has \(S(W)<S(Z)\). In this way, after the second step, the integrand - and thus the integral - will converge to \(0\). The asymptotic expansion of the kernel \(\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}\) will then be given by the potential residue term created by the change of contours. Recalling the discussion below (41), the integrand has poles for \(W=Z\), for some values of \(W\) in the interval \(I_{W}=[\eta\,a_{0}-x_{0},o(1)]\) and for some values of \(Z\) in the interval \(I_{Z}=[o(1),\eta\,a_{m}-x_{0}-1]\). We also recall from (42) that we chose the contours \(\gamma_{W}=\partial D(0,3L)\) and \(\gamma_{Z}=\partial D(0,4L)\) (both followed in counterclockwise order), with \(L=\eta\max(|a_{0}|,a_{m})\). (Recall that we restricted our analysis to the case \(t_{1}\geq t_{2}\), i.e. when \(\gamma_{W}\) is inside \(\gamma_{Z}\).) 
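For intuition on the contour deformation performed in the next lemma, one can compare \(\mathfrak{R}S\) at a few sample points with \(\mathfrak{R}S(U_{c})\). The sketch below does this for the same toy profile as in the earlier sketches; it is only an illustration under our own choice of profile and of \((x_{0},t_{0})\), not part of the argument.

```python
import cmath
import numpy as np

a = [-2.0, 0.0, 2.0]
b = [-1.0, 1.0]
eta = 1.0
x0, t0 = 0.5, 0.5             # chosen so that (x0, t0) lies in the liquid region for this profile

def g(v):
    # g(U) = U log U with the principal branch (log(-1) = i*pi), as in the text.
    v = complex(v)
    return 0.0 if v == 0 else v * cmath.log(v)

def S(u):
    # The action S(U) of Eq. (47).
    val = g(u) - complex(u) * cmath.log(1 - t0)
    val -= sum(g(u + x0 - eta * ai) for ai in a)
    val += sum(g(u + x0 - eta * bi) for bi in b)
    return val

# U_c: the root of the critical equation (8) with positive imaginary part.
left = np.poly1d([1.0, 0.0])
for bi in b:
    left *= np.poly1d([1.0, x0 - eta * bi])
right = np.poly1d([1.0 - t0])
for ai in a:
    right *= np.poly1d([1.0, x0 - eta * ai])
Uc = next(r for r in (left - right).roots if r.imag > 1e-9)

print("Re S(U_c)        =", S(Uc).real)
print("Re S(-10 + 0.5i) =", S(-10 + 0.5j).real, " (far left: expected below Re S(U_c))")
print("Re S(+10 + 0.5i) =", S(10 + 0.5j).real, "  (far right: expected above Re S(U_c))")
```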
**Lemma 30**.: _There exist two integration contours \(\gamma_{W}^{\text{new}}\) and \(\gamma_{Z}^{\text{new}}\) (both followed in counterclockwise direction) such that_ * \(\gamma_{W}^{\text{new}}\) _and_ \(\gamma_{Z}^{\text{new}}\) _intersect each other only at_ \(U_{c}\) _and_ \(\overline{U_{c}}\) * \(\gamma_{W}^{\text{new}}\) _(resp._ \(\gamma_{Z}^{\text{new}}\)_) contains_ \(I_{W}\) _(resp._ \(I_{Z}\)_) in its interior;_ * \(\gamma_{W}^{\text{new}}\setminus\{U_{c},\overline{U_{c}}\}\) _(resp._ \(\gamma_{Z}^{\text{new}}\setminus\{U_{c},\overline{U_{c}}\}\) _) lies inside the region_ \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\) _(resp._ \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\)_)._ We refer the reader to Fig. 13 for an illustration of the original and new integration contours. Proof.: We first explain how to construct \(\gamma_{Z}^{\text{new}}\), as the concatenation of two paths going from \(U_{c}\) to \(\overline{U_{c}}\) (reversing the right-most one so that we have a counterclockwise contour). Recall from the previous section (see also Fig. 12) that there is a portion of \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\) between the point \(U_{c}\), \(\overline{U_{c}}\), \(A\) and \(B\) and that this region contains an imaginary level line of \(S\) that intersects the real axis at \(D<0\) (Lemma 29). We choose this imaginary level line as the first path to construct \(\gamma_{Z}^{\text{new}}\). Figure 12. Two possible shapes for the landscape of \(\mathfrak{R}S\). The black fat lines are the real level lines \(\{\mathfrak{R}S(U)=\mathfrak{R}S(U_{c})\}\). The yellow regions correspond to \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\), while the white regions correspond to \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\). On the left-hand side, we have also represented the imaginary level lines \(\{\mathfrak{I}S(U)=\mathfrak{I}S(U_{c})\}\) in dotted lines. The right-hand side shows another possible landscape of \(\mathfrak{R}S\) with a more complicated configuration (to avoid overloading the picture, we did not draw imaginary level lines here). For the definition of the points \(A,B,C,D,E\), see the discussion above Lemma 29. Figure 13. **Left:** The picture from Fig. 12 together with the original integration contours \(\gamma_{W}\) (in green) and \(\gamma_{Z}\) (in purple) appearing in the kernel \(\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}\). The green and purple dots are respectively the \(W\)-poles and \(Z\)-poles of the integrand. We also recall that the yellow regions correspond to \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\), while the white regions correspond to \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\). **Right:** The same picture with the new contours \(\gamma_{Z}^{\text{new}}\) and \(\gamma_{W}^{\text{new}}\) from Lemma 30. Around \(U_{c}\), we can find another connected area of the region \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\). This part also contains an imaginary level line of \(S\). This imaginary level line cannot go to the real axis, otherwise the meeting point would be an \(x\) in \((C,+\infty)\) with \(\mathfrak{I}S(x)=\mathfrak{I}S(U_{c})\), contradicting Lemma 29. Because of the asymptotics \(S(U)\sim|\log(1-t_{0})|\,U\), this imaginary line should go to infinity inside the region \(\{y\leq|x|\}\) and therefore meets the region \(\{x\geq M\}\), for any \(M>0\). Moreover, we can choose \(M>0\) such that \(M\geq\max I_{Z}\) and such that if \(|x|\geq M\) and \(|y|\leq x\) then \(\mathfrak{R}S(x+iy)>\mathfrak{R}S(U_{c})\). 
The second path used to construct \(\gamma_{Z}^{\rm new}\) is then defined as follows: we first follow the imaginary level line from \(U_{c}\) until it meets the region \(\{|x|\geq M,\,y\leq|x|\}\), and then join the real axis inside this region. We complete the path by symmetry until it hits \(\overline{U_{c}}\). By construction, \(\gamma_{Z}^{\rm new}\) goes through \(U_{c}\) and \(\overline{U_{c}}\) and \(\gamma_{Z}^{\rm new}\setminus\{U_{c},\overline{U_{c}}\}\) lies in \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\). Also, the intersection points of \(\gamma_{Z}^{\rm new}\) with the real axis are \(D<0\) on the one side and some number larger than \(M>\max I_{Z}\) on the other side, so that \(\gamma_{Z}^{\rm new}\) encloses \(I_{Z}\) as wanted. We construct \(\gamma_{W}^{\rm new}\) in a symmetric way, completing the proof of the lemma. We are now ready to compute the asymptotics of the kernel \(\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}\) in (41) when \((x_{0},t_{0})\) lies inside the liquid region. and hence compute the limit of the associated bead process \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\). **Proposition 31**.: _Assume that \((x_{0},t_{0})\) is in the liquid region. The bead process \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) converges in distribution to a determinantal process with correlation kernel_ \[K_{\infty}^{(x_{0},t_{0})}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}:=\frac{1}{ 2{\rm i}\pi}\int_{\gamma}\frac{1}{1-t_{0}}\,{\rm e}^{\frac{W}{1-t_{0}}(t_{1}-t _{2})}\,\left(\frac{W}{1-t_{0}}\right)^{x_{2}-x_{1}}{\rm d}W\,, \tag{59}\] _where \(\gamma\) is a path going from \(\overline{U_{c}}\) to \(U_{c}\) and passing on the left of \(0\) for \(t_{1}\geq t_{2}\) and on the right of \(0\) for \(t_{1}<t_{2}\) (the point 0 is a pole of the integrand in the case \(x_{1}<x_{2}\))._ Proof.: We first treat the case \(t_{1}\geq t_{2}\), so that the original contours in (41) are such that \(\gamma_{W}\) is inside \(\gamma_{Z}\). We move the latter contours to the new contours \(\gamma_{W}^{\rm new}\) and \(\gamma_{Z}^{\rm new}\) constructed in Lemma 30. Since \(I_{W}\) (resp. \(I_{Z}\)) is inside both \(\gamma_{W}\) and \(\gamma_{W}^{\rm new}\) (resp. \(\gamma_{Z}\) and \(\gamma_{Z}^{\rm new}\)), we do not cross any of the \(W\)-poles (resp. \(Z\)-poles). However, since the relative positions of the contours is not the same (\(\gamma_{W}^{\rm new}\) is not inside \(\gamma_{Z}^{\rm new}\)), we have to take care of the pole at \(Z=W\). Fix \(Z\in\gamma_{Z}\). Possibly enlarging the contour \(\gamma_{Z}\), we can assume that the point \(Z\) is outside both \(\gamma_{W}\) and \(\gamma_{W}^{\rm new}\). Therefore, recalling that \(\operatorname{Int}_{N}(W,Z)\) denotes the integrand in (41), we have \[\oint_{\gamma_{W}}\operatorname{Int}_{N}(W,Z)\,{\rm d}W=\oint_{\gamma_{W}^{\rm new }}\operatorname{Int}_{N}(W,Z)\,{\rm d}W,\] which implies that \[\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{2},t_{2}))= \frac{-1}{(2{\rm i}\pi)^{2}}\oint_{\gamma_{Z}}\oint_{\gamma_{W}} \operatorname{Int}_{N}(W,Z)\,{\rm d}W\,{\rm d}Z=\frac{-1}{(2{\rm i}\pi)^{2}} \oint_{\gamma_{Z}}\oint_{\gamma_{W}^{\rm new}}\operatorname{Int}_{N}(W,Z)\,{ \rm d}W\,{\rm d}Z\,.\] Fix now \(W\) in \(\gamma_{W}^{\rm new}\setminus\{U_{c},\overline{U_{c}}\}\). 
Note that \(\gamma_{W}^{\rm new}\setminus\{U_{c},\overline{U_{c}}\}=\gamma_{W}^{\rm new, left}\sqcup\gamma_{W}^{\rm new,right}\), where \(\gamma_{W}^{\rm new,left}\) and \(\gamma_{W}^{\rm new,right}\) denote respectively the left and right parts of the contour \(\gamma_{W}^{\rm new}\); c.f. Fig. 13. We then move the \(Z\)-contour from \(\gamma_{Z}\) (left-hand side of Fig. 13) to \(\gamma_{Z}^{\rm new}\) (right-hand side of Fig. 13). If \(W\) is in \(\gamma_{W}^{\rm new,right}\), then deforming the \(Z\)-contour from \(\gamma_{Z}\) to \(\gamma_{Z}^{\rm new}\) can be done without crossing any pole. On the other hand, if \(W\) is in \(\gamma_{W}^{\rm new,left}\), when deforming the \(Z\)-contour from \(\gamma_{Z}\) to \(\gamma_{Z}^{\rm new}\), we cross the pole \(Z=W\). By the residue theorem, for \(W\) fixed in \(\gamma_{W}^{\rm new}\setminus\{U_{c},\overline{U_{c}}\}\), we have \[\frac{1}{2{\rm i}\pi}\oint_{\gamma_{Z}}\operatorname{Int}_{N}(W,Z)\,{\rm d}Z= \frac{1}{2{\rm i}\pi}\oint_{\gamma_{Z}^{\rm new}}\operatorname{Int}_{N}(W,Z)\,{ \rm d}Z+\delta_{W\in\gamma_{W}^{\rm new,left}}\,\operatorname{Res}\big{(} \operatorname{Int}_{N}(W,Z),Z=W\big{)},\] where \(\mathrm{Res}\,\big{(}\operatorname{Int}_{N}(W,Z),Z=W\big{)}\) denotes the residue of \(\operatorname{Int}_{N}(W,Z)\) corresponding to the pole \(Z=W\). Integrating over \(W\) in \(\gamma_{W}^{\mathrm{new}}\) gives \[\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{2}, t_{2}))=-\frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{W}^{\mathrm{new}}}\oint_{ \gamma_{Z}^{\mathrm{new}}}\operatorname{Int}_{N}(W,Z)\,\mathrm{d}Z\,\mathrm{ d}W\\ -\frac{1}{2\mathrm{i}\pi}\oint_{\gamma_{W}^{\mathrm{new,left}}} \mathrm{Res}\,\big{(}\operatorname{Int}_{N}(W,Z),Z=W\big{)}\,\mathrm{d}W\,, \tag{60}\] where the set \(\gamma_{W}^{\mathrm{new,left}}\) is interpreted as a path going from \(U_{c}\) to \(\overline{U_{c}}\); note that it passes on the left of \(0\). Note that because of the pole in \(Z=W\) of \(\operatorname{Int}_{N}(W,Z)\), the term \(\oint_{\gamma_{Z}^{\mathrm{new}}}\operatorname{Int}_{N}(W,Z)\,\mathrm{d}Z\) has a logarithmic singularity when \(W\) tends to \(U_{c}\) or \(\overline{U_{c}}\); this singularity is integrable so that the double integral above is well-defined. Recalling that \(\operatorname{Int}_{N}(W,Z)=\frac{F_{\lambda_{N}}(\sqrt{N}(Z+x_{0}))}{F_{ \lambda_{N}}(\sqrt{N}(W+x_{0}))}\frac{\Gamma(\sqrt{N}W-x_{1}+1)}{\Gamma(\sqrt{ N}Z-x_{2}+1)}\cdot\frac{(1-\widetilde{t}_{2})^{\sqrt{N}Z-x_{2}}(1-\widetilde{t}_{1} )^{-\sqrt{N}W+x_{1}-1}}{(Z-W)}\), the right term in the latter displayed equation can be computed as follows: \[\mathrm{Res}\,\big{(}\operatorname{Int}_{N}(W,Z),Z=W\big{)} =\frac{\Gamma(\sqrt{N}W-x_{1}+1)}{\Gamma(\sqrt{N}W-x_{2}+1)}\,(1- \widetilde{t}_{2})^{\sqrt{N}W-x_{2}}\,(1-\widetilde{t}_{1})^{-\sqrt{N}W+x_{1} -1}\] \[\simeq(\sqrt{N}W)^{x_{2}-x_{1}}(1-t_{0})^{x_{1}-x_{2}-1}\exp \left(\frac{W}{1-t_{0}}\,(t_{1}-t_{2})\right),\] where we used Eq. (43) in the last estimation. 
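The estimates in the last display rely on standard asymptotics, recorded here for convenience (this is presumably the content of Eq. (43), which is not reproduced in this section). Since \(x_{1},x_{2}\in\mathbb{Z}\), the Gamma factors reduce to a finite product: for \(x_{2}\geq x_{1}\),
\[\frac{\Gamma(\sqrt{N}W-x_{1}+1)}{\Gamma(\sqrt{N}W-x_{2}+1)}=\prod_{j=x_{1}}^{x_{2}-1}\big{(}\sqrt{N}W-j\big{)}=(\sqrt{N}W)^{x_{2}-x_{1}}\big{(}1+O(1/\sqrt{N})\big{)},\]
uniformly for \(W\) in compact subsets of \(\mathbb{C}\setminus\{0\}\) (and similarly, via the reciprocal product, for \(x_{2}<x_{1}\)). The exponential factor comes from \((1-\widetilde{t}_{i})^{\pm\sqrt{N}W}\approx(1-t_{0})^{\pm\sqrt{N}W}\,\mathrm{e}^{\mp W t_{i}/(1-t_{0})}\), writing \(\widetilde{t}_{i}=t_{0}+t_{i}/\sqrt{N}\) for the rescaled times (consistent with the residue \(\frac{1}{1-t_{0}-t_{1}/\sqrt{N}}\) appearing in Section 5).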
The estimate is uniform for \(W\) in compact subsets of \(\mathbb{C}\setminus\{0\}\), implying \[-\frac{1}{2\mathrm{i}\pi}\oint_{\gamma_{W}^{\mathrm{new,left}}} \mathrm{Res}\,\big{(}\operatorname{Int}_{N}(W,Z),Z=W\big{)}\,\mathrm{d}W\\ \simeq-\frac{(\sqrt{N})^{x_{2}-x_{1}}}{2\mathrm{i}\pi}\oint_{ \gamma_{W}^{\mathrm{new,left}}}\frac{1}{1-t_{0}}\,\mathrm{e}^{\frac{W}{1-t_{0 }}(t_{1}-t_{2})}\left(\frac{1-t_{0}}{W}\right)^{x_{1}-x_{2}}\mathrm{d}W\,.\] We now consider the double integral term \(\oint_{\gamma_{W}^{\mathrm{new}}}\oint_{\gamma_{Z}^{\mathrm{new}}}\operatorname {Int}_{N}(W,Z)\,\mathrm{d}Z\,\mathrm{d}W\) in (60). Call \(Z_{-}\) and \(Z_{+}\) (resp. \(W_{-}\) and \(W_{+}\)) the intersections points of the contour \(\gamma_{Z}^{\mathrm{new}}\) (resp. \(\gamma_{W}^{\mathrm{new}}\)) with the real axis, with the convention \(Z_{-}<0<Z_{+}\) (resp. \(W_{-}<0<W_{+}\)). By construction, \(W_{-}\) and \(Z_{+}\) can be chosen outside \([\eta a_{0}-x_{0},\eta a_{m}-x_{0}]\) and hence belong to the set \(D_{S}\) (see its definition before Lemma 23). Moreover, up to deforming slightly \(\gamma_{Z}^{\mathrm{new}}\) and \(\gamma_{W}^{\mathrm{new}}\) (still keeping the properties of Lemma 30 true), we may assume that \(W_{+}\) and \(Z_{-}\) are distinct from all \(\eta\,a_{i}-x_{0}\) and \(\eta\,b_{i}-x_{0}\). Fixing \(\varepsilon>0\), we have that: * Using the estimate in (46) (which holds uniformly for \(Z\) and \(W\) in compact subsets of \(D_{S}\)), for \(Z\in\gamma_{Z}^{\mathrm{new}}\setminus D(Z_{-},\varepsilon)\) and \(W\in\gamma_{W}^{\mathrm{new}}\setminus D(W_{+},\varepsilon)\), the integrand \(\operatorname{Int}_{N}(W,Z)\) is bounded by the function \(2h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\) in (47) for all \(N\) large enough, and tends to \(0\) pointwise when \(N\) tends to \(+\infty\) because \(\mathfrak{R}S(Z)>\mathfrak{R}S(W)\) by the third property in Lemma 30 for \(\gamma_{Z}^{\mathrm{new}}\) and \(\gamma_{W}^{\mathrm{new}}\). (This is true except when \(\{Z,W\}\subseteq\{U_{c},\overline{U_{c}}\}\), where the bound by \(h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\) still holds but the integrand \(\operatorname{Int}_{N}(W,Z)\) does not converge to zero; since this is a zero measure subset, this will not be problematic.) * Using the second and third parts of Lemma 23, the bound by \(h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\) and the convergence of \(\operatorname{Int}_{N}(W,Z)\) to \(0\) also hold when \(Z\in D(Z_{-},\varepsilon)\) or \(W\in D(W_{+},\varepsilon)\) (or both), for \(\varepsilon\) small enough. Indeed, the extra factor \(\exp\!\left(\varepsilon\pi\sqrt{N}\right)\) in this case is compensated by the factor \(\exp\!\left(\sqrt{N}(S(W)-S(Z))\right)\) as soon as \(S(W)<S(Z)-\pi\varepsilon\), which happens for \(W\in D(W_{+},\varepsilon)\) and \(Z\in\gamma_{Z}^{\mathrm{new}}\) or for \(W\in\gamma_{W}^{\mathrm{new}}\) and \(Z\in D(Z_{-},\varepsilon)\), if \(\varepsilon\) is small enough. * The function \(h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\) is integrable for \((W,Z)\) on the double contour \((\gamma_{W}^{\mathrm{new}},\gamma_{Z}^{\mathrm{new}})\). Indeed, it has singularities for \(W=Z=U_{c}\) and \(W=Z=\overline{U_{c}}\) but behaves as \(O((W-Z)^{-1})\) near these singularities, and a standard computation shows that \((W-Z)^{-1}\) is integrable on \(\gamma_{W}^{\rm new}\times\gamma_{Z}^{\rm new}\), since the paths cross non-tangentially. 
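The "standard computation" in the last item can be spelled out as follows (an elaboration added here; it is not needed elsewhere). Near \(U_{c}\), parametrize the two contours by arclength from their common point, so that \(W=U_{c}+w\,\mathrm{e}^{\mathrm{i}\theta_{1}}+o(w)\) and \(Z=U_{c}+z\,\mathrm{e}^{\mathrm{i}\theta_{2}}+o(z)\) with \(\theta_{1}\neq\theta_{2}\) because the crossing is non-tangential. Elementary geometry then gives \(|W-Z|\geq c\,(w+z)\) for some constant \(c>0\) (depending only on the crossing angle) and all small enough \(w,z\), and
\[\int_{0}^{\delta}\!\int_{0}^{\delta}\frac{\mathrm{d}w\,\mathrm{d}z}{w+z}=2\delta\log 2<\infty,\]
so \((W-Z)^{-1}\) is integrable near \(U_{c}\); the same argument applies near \(\overline{U_{c}}\).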
Hence, using the dominated convergence theorem, we know that the double contour integral \(\oint_{\gamma_{W}^{\rm new}}\oint_{\gamma_{Z}^{\rm new}}\operatorname{Int}_{N}(W,Z)\,\mathrm{d}Z\,\mathrm{d}W\) goes to \(0\) as \(N\) tends to infinity. Letting \(\gamma\) be the reverse path of \(\gamma_{W}^{\rm new,left}\), we get
\[\lim_{N\to+\infty}\frac{\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{2},t_{2}))}{(\sqrt{N})^{x_{2}-x_{1}}}=\frac{1}{2{\rm i}\pi}\int_{\gamma}\frac{1}{1-t_{0}}\,{\rm e}^{\frac{W}{1-t_{0}}(t_{1}-t_{2})}\left(\frac{1-t_{0}}{W}\right)^{x_{1}-x_{2}}\mathrm{d}W\,. \tag{61}\]
Note that \(\gamma\) is indeed a path from \(\overline{U_{c}}\) to \(U_{c}\) passing on the left of \(0\), as required in the case \(t_{1}\geq t_{2}\). Let us now consider the case \(t_{1}<t_{2}\). We recall that in this case the initial contours \(\gamma_{W}\) and \(\gamma_{Z}\) are swapped. An argument similar to the one above shows that (61) still holds, but with the path \(\gamma=\gamma_{W}^{\rm new,right}\) passing on the right of \(0\).

Recall that, up to the factor \((\sqrt{N})^{x_{2}-x_{1}}\), the left-hand side of (61) is the kernel of the bead process \(\widetilde{M}_{\lambda_{N}}\). But the factor \((\sqrt{N})^{x_{2}-x_{1}}\) is a conjugation factor (see the last paragraph in Section 2.2): adding or removing it does not change the associated bead process. The expression on the right-hand side of (61) depends continuously on \((x_{1},t_{1})\) and \((x_{2},t_{2})\) and is therefore locally bounded. Moreover, all estimates above and in particular the convergence (61) are locally uniform in these variables. Therefore, from Proposition 22, the bead process \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) converges in distribution to a determinantal point process with kernel \(K_{\infty}^{(x_{0},t_{0})}\).

### Recovering the bead kernel

We now want to compare the limit kernel \(K_{\infty}^{(x_{0},t_{0})}\) from Proposition 31 with the kernel \(J_{\alpha,\beta}\) of the random infinite bead process from Theorem 12. Recall from (9) that, given the critical point \(U_{c}\) associated to \((x_{0},t_{0})\) in the liquid region, \(\alpha=\frac{\mathfrak{I}U_{c}}{1-t_{0}}\) and \(\beta=\frac{\mathfrak{R}U_{c}}{|U_{c}|}\).

**Lemma 32**.: _There exists a function \(g\) such that, for \(t_{1}\neq t_{2}\)_
\[K_{\infty}^{(x_{0},t_{0})}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}=\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\,J_{\alpha,\beta}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}. \tag{62}\]

Before proving the lemma, we discuss its consequences. We recall that a conjugation factor of the form \(\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\) in a kernel does not affect the associated point process. Also changing the kernel on a set of measure \(0\), e.g. \(\big{\{}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)},t_{1}=t_{2}\big{\}}\), does not change the point process either. Thus, the above lemma implies that \(K_{\infty}\) and \(J_{\alpha,\beta}\) are the kernels of the same point process, i.e. the random infinite bead process of intensity \(\alpha\) and skewness \(\beta\) introduced in Definition 13. Together with Proposition 31, this proves the first item of Theorem 15.

Proof of Lemma 32.: First assume \(x_{2}\geq x_{1}\) (but we do not assume any comparison between \(t_{1}\) and \(t_{2}\); the forthcoming argument works in both cases).
Then the integrand of the kernel \(K_{\infty}^{(x_{0},t_{0})}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}\) in (59) has no poles, and we can replace \(\gamma\) by any path from \(\overline{U_{c}}\) to \(U_{c}\). We will take a vertical segment, which we parametrize as
\[\gamma(u):=R(\cos\theta+{\rm i}u\sin\theta),\qquad u\in(-1,1),\]
where \(R=|U_{c}|\) and \(\theta=\operatorname{Arg}(U_{c})\). We get
\[K_{\infty}\big{(}(x_{1},t_{1}),(x_{2},t_{2})\big{)}=\frac{1}{1-t_{0}}\,\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\int_{-1}^{1}{\rm e}^{{\rm i}u(t_{1}-t_{2})\,\frac{R\sin\theta}{1-t_{0}}}(\cos\theta+{\rm i}u\sin\theta)^{x_{2}-x_{1}}\frac{R\sin\theta\,{\rm d}u}{2\pi},\]
with \(g(y,\varepsilon)=\exp\!\left(\frac{\varepsilon R\cos\theta}{1-t_{0}}\right)\left(\frac{R}{1-t_{0}}\right)^{-y}\), so that \(\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}={\rm e}^{\frac{R\cos\theta}{1-t_{0}}(t_{1}-t_{2})}\left(\frac{R}{1-t_{0}}\right)^{x_{2}-x_{1}}\) absorbs the remaining factors. Comparing with (21), where we take \(\alpha=\frac{R\sin\theta}{1-t_{0}}=\frac{\mathfrak{I}U_{c}}{1-t_{0}}\) and \(\beta=\cos\theta=\frac{\mathfrak{R}U_{c}}{|U_{c}|}\), we obtain (62) in the case \(x_{2}\geq x_{1}\).

Assume now \(x_{2}<x_{1}\) and \(t_{1}>t_{2}\). In this case, in Proposition 31, we integrate over the following path \(\gamma\) going from \(\overline{U_{c}}\) to \(U_{c}\) (for \(A\geq 1\) large enough this path passes on the left of \(0\), as needed):

1. take a vertical path \(\gamma_{1}\) going down from \(\overline{U_{c}}\) to \(\mathfrak{R}U_{c}-\mathrm{i}A\);
2. take a horizontal path \(\gamma_{2}\) going left from \(\mathfrak{R}U_{c}-\mathrm{i}A\) to \(\mathfrak{R}U_{c}-(\log A)^{2}-\mathrm{i}A\);
3. take a vertical path \(\gamma_{3}\) going up from \(\mathfrak{R}U_{c}-(\log A)^{2}-\mathrm{i}A\) to \(\mathfrak{R}U_{c}-(\log A)^{2}+\mathrm{i}A\);
4. take a horizontal path \(\gamma_{4}\) going right from \(\mathfrak{R}U_{c}-(\log A)^{2}+\mathrm{i}A\) to \(\mathfrak{R}U_{c}+\mathrm{i}A\);
5. and finally take a vertical path \(\gamma_{5}\) going down from \(\mathfrak{R}U_{c}+\mathrm{i}A\) to \(U_{c}\).

Since \(t_{1}-t_{2}>0\) and \(x_{1}-x_{2}>0\), the integrals over the second and fourth paths are bounded by
\[\frac{(\log A)^{2}}{2\pi(1-t_{0})}\,\mathrm{e}^{\frac{\mathfrak{R}U_{c}}{1-t_{0}}(t_{1}-t_{2})}\left(\frac{1-t_{0}}{A}\right)^{x_{1}-x_{2}}.\]
This upper bound tends to \(0\) as \(A\) tends to infinity. Similarly, when \(t_{1}>t_{2}\) and \(x_{1}-x_{2}>0\), the integral over the third path is bounded by
\[\frac{2A}{2\pi(1-t_{0})}\,\mathrm{e}^{\frac{\mathfrak{R}U_{c}-(\log A)^{2}}{1-t_{0}}(t_{1}-t_{2})}\left(\frac{1-t_{0}}{A}\right)^{x_{1}-x_{2}},\]
which also tends to \(0\) as \(A\) tends to infinity. Consider the integral on the first path.
Reversing its direction (which yields a minus sign), it can be parametrized by
\[\gamma_{1}(u)=R(\cos\theta+\mathrm{i}u\sin\theta),\quad u\in(-A/\mathfrak{I}U_{c},-1).\]
Thus, doing the same computation as in the case \(x_{2}\geq x_{1}\) above, the integral over \(\gamma_{1}\) is equal to
\[-\frac{1}{1-t_{0}}\,\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\int_{-A/\mathfrak{I}U_{c}}^{-1}\mathrm{e}^{\mathrm{i}u(t_{1}-t_{2})\frac{R\sin\theta}{1-t_{0}}}(\cos\theta+\mathrm{i}u\sin\theta)^{x_{2}-x_{1}}\frac{R\sin\theta\,\mathrm{d}u}{2\pi}.\]
As \(A\) tends to infinity, this quantity tends to
\[-\frac{1}{1-t_{0}}\,\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\int_{-\infty}^{-1}\mathrm{e}^{\mathrm{i}u(t_{1}-t_{2})\frac{R\sin\theta}{1-t_{0}}}(\cos\theta+\mathrm{i}u\sin\theta)^{x_{2}-x_{1}}\frac{R\sin\theta\,\mathrm{d}u}{2\pi}.\]
Similarly, the integral over \(\gamma_{5}\) tends to
\[-\frac{1}{1-t_{0}}\,\frac{g(x_{1},t_{1})}{g(x_{2},t_{2})}\int_{1}^{\infty}\mathrm{e}^{\mathrm{i}u(t_{1}-t_{2})\frac{R\sin\theta}{1-t_{0}}}(\cos\theta+\mathrm{i}u\sin\theta)^{x_{2}-x_{1}}\frac{R\sin\theta\,\mathrm{d}u}{2\pi}.\]
Comparing with (21), we obtain (62) in the case \(x_{2}<x_{1}\) and \(t_{1}>t_{2}\). The case \(x_{2}<x_{1}\) and \(t_{1}<t_{2}\) is similar, taking a path \(\gamma\) going through \(\mathfrak{R}U_{c}-\mathrm{i}A\), \(\mathfrak{R}U_{c}+(\log A)^{2}\pm\mathrm{i}A\) and \(\mathfrak{R}U_{c}+\mathrm{i}A\) (recall that for \(t_{1}<t_{2}\), the path \(\gamma\) in Proposition 31 should pass on the right of \(0\)).

## 5. Asymptotics for the kernel in the frozen region

### The small \(t\) region

In this section, we fix \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and we let \(t_{0}\leq t_{-}\), where \(t_{-}=t_{-}(x_{0})\) is given by Proposition 25. We first assume that \(x_{0}\in(\eta\,a_{0},0)\); the necessary modifications for the cases \(x_{0}=\eta\,a_{0}\) and \(x_{0}\geq 0\) will be explained afterwards. From Proposition 25 and Remark 26, we know that in this regime the critical equation (8) has only real solutions and that the smallest two, which will be denoted by \(U_{c}\leq U_{c}^{\prime}\), are in \((-\infty,\eta\,a_{0}-x_{0})\). In Lemmas 33 and 34, we assume \(U_{c}<U_{c}^{\prime}\) and discuss the case of a double critical point \(U_{c}=U_{c}^{\prime}\) only in the proof of the main result of the section, i.e. Proposition 35.

**Lemma 33**.: _The critical points \(U_{c}\) and \(U_{c}^{\prime}\) are respectively a local maximum and a local minimum of the function \(u\to\mathfrak{R}S(u)\) on the real line and it holds that \(\mathfrak{R}S(U_{c})>\mathfrak{R}S(U_{c}^{\prime})\)._

Proof.: Recall from Section 3.4 that the map \(u\mapsto\mathfrak{R}S(u)\):

* is well-defined and continuous on the real line;
* is differentiable as a function from \(\mathbb{R}\) to \(\mathbb{R}\), except at the points \(\eta\,a_{i}-x_{0}\) (where it has a positive infinite slope) and at the points \(0\) and \(\eta\,b_{i}-x_{0}\) (where it has a negative infinite slope);
* has a vanishing derivative exactly when \(u\) satisfies the critical equation (8) or the companion equation
\[u\prod_{i=1}^{m}(u+x_{0}-\eta\,b_{i})=-(1-t_{0})\prod_{i=0}^{m}(u+x_{0}-\eta\,a_{i}). \tag{63}\]

This companion equation always has exactly \(m+1\) solutions, in the intervals \([\eta\,a_{i-1}-x_{0},\eta\,b_{i}-x_{0}]\) for \(i\leq i_{0}\) and \([\eta\,b_{i}-x_{0},\eta\,a_{i}-x_{0}]\) for \(i\geq i_{0}\), where \(i_{0}\) is such that \(0\in(\eta\,a_{i_{0}-1}-x_{0},\eta\,a_{i_{0}}-x_{0}]\).
In particular, it does not have solutions for \(u<\eta\,a_{0}-x_{0}\), and so \(U_{c}\) and \(U_{c}^{\prime}\) are the only local extrema of \(\mathfrak{R}S(u)\) in \((-\infty,\eta\,a_{0}-x_{0})\). Observing that \(\lim_{u\to-\infty}\mathfrak{R}S(u)=-\infty\) and \(\mathfrak{R}S(u)\) has a positive infinite slope at \(\eta\,a_{0}-x_{0}\), this ends the proof of the lemma. We now recall from Eqs. (41) and (46) that the renormalized kernel \(\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}\) writes as \[\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}((x_{1},t_{1}),(x_{2},t_{2}))=- \frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{Z}}\oint_{\gamma_{W}}\mathrm{Int }_{N}(W,Z)\,\mathrm{d}W\,\mathrm{d}Z\,, \tag{64}\] where \(\mathrm{Int}_{N}(W,Z)\) can be approximated by \((\sqrt{N})^{x_{2}-x_{1}}\mathrm{e}^{\sqrt{N}(S(W)-S(Z))}\,h^{(x_{1},t_{1})}_{ (x_{2},t_{2})}(W,Z)\) uniformly on compact subsets of \(D_{S}\) (defined above Lemma 23). We recall also that the integrand \(\mathrm{Int}_{N}(W,Z)\) has poles for \(W=Z\), for some values of \(W\) in an interval \(I_{W}=[\eta\,a_{0}-x_{0},o(1)]\) and for some values of \(Z\) in an interval \(I_{Z}=[o(1),\eta\,a_{m}-x_{0}-1]\). **Lemma 34**.: _There exist two integration contours \(\gamma_{W}^{\text{new}}\) and \(\gamma_{Z}^{\text{new}}\) (both followed in counterclockwise order) such that,_ * \(\gamma_{W}^{\text{new}}\) _lies in the interior of_ \(\gamma_{Z}^{\text{new}}\)_;_ * \(\gamma_{W}^{\text{new}}\) _(resp._ \(\gamma_{Z}^{\text{new}}\)_) contains_ \(I_{W}\) _(resp._ \(I_{Z}\)_) in its interior;_ * \(\gamma_{W}^{\text{new}}\) _(resp._ \(\gamma_{Z}^{\text{new}}\)_) lies inside the region_ \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c}^{\prime})\}\) _(resp._ \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c})\}\)_)._ These new contours are shown in the left-hand side of Fig. 14. Figure 14. **From left to right:** In green and purple, the new integration contours \(\gamma_{W}^{\text{new}}\) and \(\gamma_{Z}^{\text{new}}\) constructed in Lemmas 34, 38 and 41. The three situations correspond to the small \(t\) region \(\{0\leq t_{0}\leq t_{-}(x_{0})\}\), the large \(t\) region \(\{t_{+}(x_{0})\leq t_{0}\leq 1\}\), and the intermediate \(t\) region \(\{t_{-}(x_{0})<t_{0}<t_{+}(x_{0})\}\), respectively. In orange, we highlight the level lines considered in the proofs of the three lemmas. Proof.: We start by noting that the action \(S\) can be analytically continued in the upper half-plane on a neighborhood of \(U_{c}\in\mathbb{R}\) and behaves like \[S(U)=S(U_{c})+\frac{S^{\prime\prime}(U_{c})}{2}\,(U-U_{c})^{2}+O((U-U_{c})^{3}).\] Since \(\mathfrak{I}S(u)\) is piecewise affine and \(\mathfrak{R}S(u)\), seen as a function of a real variable \(u\), reaches a local maximum at \(U_{c}\) (Lemma 33), the coefficient \(S^{\prime\prime}(U_{c})\) must be real and negative. Since \(S^{\prime\prime}(U_{c})\) is real, by comparison with the map \(F(U)=U^{2}\), there is an imaginary level line of \(S\) leaving from \(U_{c}\) orthogonally to the real axis and going in the upper half plane; call it \(\ell\). Moreover, since \(S^{\prime\prime}(U_{c})\) is negative, the real part of \(S\) increases along \(\ell\) (recall that along an imaginary level line, the function \(\mathfrak{R}S(U)\) is strictly monotone by the open mapping theorem). In particular, \(\ell\) is included in the region \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c})\}\). 
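To make the comparison with \(F(U)=U^{2}\) explicit (a leading-order check added here, not part of the original argument): for \(U=U_{c}+\mathrm{i}v\) with small \(v>0\),
\[S(U)-S(U_{c})=\frac{S^{\prime\prime}(U_{c})}{2}(\mathrm{i}v)^{2}+O(v^{3})=-\frac{S^{\prime\prime}(U_{c})}{2}\,v^{2}+O(v^{3}),\]
which is real and positive at leading order because \(S^{\prime\prime}(U_{c})<0\). Thus, to leading order, the vertical direction at \(U_{c}\) is exactly the direction along which \(\mathfrak{I}S\) stays equal to \(\mathfrak{I}S(U_{c})\) while \(\mathfrak{R}S\) increases, which is the behavior of \(\ell\) described above.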
Since \(S\) does not have any critical point in the upper half-plane, \(\ell\) cannot end inside the upper half-plane, hence it either reaches again the real axis or it goes to infinity. If it goes to infinity, we claim that \(\ell\) can only go to infinity in the positive real direction. Indeed, the estimate \(S(U)\sim|\log(1-t_{0})|\,U\) for large \(|U|\) (recall (48)) forces \(\mathfrak{I}U\) to stay bounded along \(\ell\) and \(\mathfrak{R}U\) to stay bounded from below along \(\ell\) (by definition \(\mathfrak{I}S(U)\) is bounded along \(\ell\) and \(\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c})\) along \(\ell\)).

Similarly, one can consider the imaginary level line \(\ell^{\prime}\) of \(S\) leaving from \(U^{\prime}_{c}\) and going in the upper half plane. The real part of \(S\) is decreasing along \(\ell^{\prime}\), so that \(\ell^{\prime}\) stays in the region \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U^{\prime}_{c})\}\). In particular \(\ell\) and \(\ell^{\prime}\) cannot cross. Similarly as above, one can see that \(\ell^{\prime}\) either reaches again the real axis at some point or goes to infinity in the negative real direction.

To determine the behavior of \(\ell\) and \(\ell^{\prime}\), it is useful to look at the imaginary part of the action on the real line. We recall from (56) and the discussion below it that \(\mathfrak{I}S(U_{c})=\mathfrak{I}S(U^{\prime}_{c})=-\pi x_{0}\), and that \(-\pi x_{0}\in(0,\mathfrak{I}S(0))\). Since \(\mathfrak{I}(S(u))=0\) for large real positive values of \(u\), there exists \(\overline{x}>0\) such that \(\mathfrak{I}(S(\overline{x}))=-\pi x_{0}\). Considering the shape of the real map \(u\mapsto\mathfrak{I}(S(u))\) (Fig. 11), there are two possibilities:

* If \(\overline{x}\) is in an interval of the type \((\eta\,b_{k}-x_{0},\eta\,a_{k}-x_{0})\), then \(u\mapsto\mathfrak{I}(S(u))\) is decreasing at \(\overline{x}\), and \(\overline{x}\) is uniquely determined by the conditions \(\overline{x}>0\) and \(\mathfrak{I}(S(\overline{x}))=-\pi x_{0}\).
* If \(\overline{x}\) is in an interval of the type \((\eta\,a_{k-1}-x_{0},\eta\,b_{k}-x_{0})\), then \(u\mapsto\mathfrak{I}(S(u))\) is constant on that interval, and any point of this interval can be chosen as \(\overline{x}\). We then choose \(\overline{x}\) to be the unique solution of the critical equation in that interval (the existence being ensured by Lemma 24 and the uniqueness by our assumption that the critical equation has two solutions in \((-\infty,\eta\,a_{0}-x_{0})\)).

_Claim: The imaginary level lines \(\ell\) and \(\ell^{\prime}\) can only come back to the real axis at \(\overline{x}\)._

First note that \(\ell\) and \(\ell^{\prime}\) can only come back to the real axis at a point \(u\in\mathbb{R}\setminus\{U_{c},U^{\prime}_{c}\}\) such that \(\mathfrak{I}S(u)=\mathfrak{I}S(U_{c})=\mathfrak{I}S(U^{\prime}_{c})\). Moreover, let \(y\) be a non-critical point around which \(u\mapsto\mathfrak{I}S(u)\) is constant for real \(u\). Since \(y\) is non-critical, the set \(\{\mathfrak{I}S(z)=\mathfrak{I}S(y)\}\) is a single curve around \(y\) and thus coincides with the real axis. In particular \(\ell\) (or \(\ell^{\prime}\)) cannot reach the real axis at such \(y\). The case when \(y=\eta a_{i}-x_{0}\) or \(y=b_{i}-x_{0}\) is a bit more delicate. Locally around \(y\), the function \(S\) looks like \(\pm(z-y)\log(z-y)\), and thus the set \(\{\mathfrak{I}S(z)=\mathfrak{I}S(y)\}\) looks locally like a half-line. Thus it is locally included in the real line (we know from Fig.
11 that at each point \(y=\eta a_{i}-x_{0}\) or \(y=b_{i}-x_{0}\), there is a direction along the real line in which \(\mathfrak{I}S(u)\) is constant). Again, \(\ell\) (or \(\ell^{\prime}\)) cannot reach the real axis at such \(y\). Finally, we observe that the interval \((-\infty,\eta a_{0}-x_{0})\) contains no other critical point than \(U_{c}\) and \(U^{\prime}_{c}\), so, from the previous discussion, \(\ell\) and \(\ell^{\prime}\) cannot come back to the real axis inside this interval. Altogether, this proves the claim, i.e. that \(\ell\) and \(\ell^{\prime}\) can only come back to the real axis at \(\overline{x}\). A first consequence is that at most one of \(\ell\) and \(\ell^{\prime}\) can come back to the real axis. But they cannot go both to infinity, otherwise they would cross. So one of \(\ell\) or \(\ell^{\prime}\) should go to the real axis, and the other to infinity. The non-crossing constraint and the fact that \(\overline{x}>0\) forces that \(\ell\) goes to infinity, while \(\ell^{\prime}\) goes to \(\overline{x}\). The contours of the lemma are now obtained as follows. For \(\gamma_{W}^{\mathrm{new}}\), we simply follow \(\ell^{\prime}\) and its mirror image in the lower half plane. By construction, it lies in the region \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c}^{\prime})\}\). Also, since the return point \(\overline{x}\) to the real axis satisfies \(\overline{x}>0\), \(\gamma_{W}^{\mathrm{new}}\) encloses \(I_{W}\). For \(\gamma_{Z}^{\mathrm{new}}\), we follow \(\ell\) until \(\mathfrak{R}(U)\) is sufficiently large, and then go to the real axis following any smooth curve. We then go back to \(U_{c}\) with the mirror image of the first part of the contour. If we follow \(\ell\) until when \(\mathfrak{R}(U)\) is sufficiently large, we can ensure that this contour encloses \(I_{Z}\) and lies in the region \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c})\}\). Finally note that \(\gamma_{W}^{\mathrm{new}}\) lies in the interior of \(\gamma_{Z}^{\mathrm{new}}\) by construction. **Proposition 35**.: _Let \(x_{0}<0\) and \(0\leq t_{0}\leq t_{-}(x_{0})\). Then, locally uniformly for \((x_{1},t_{1})\in\mathbb{Z}\times\mathbb{R}\), we have that_ \[\lim_{N\to+\infty}\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),( x_{1},t_{1}))=0. \tag{65}\] _As a consequence, \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) tends in probability to the empty set._ Proof.: We are interested in (64) for \(x_{1}=x_{2}\) and \(t_{1}=t_{2}\). In this case, the contours are such that \(\gamma_{W}\) contains \(I_{W}\), \(\gamma_{Z}\) contains \(I_{Z}\) and \(\gamma_{W}\) lies in the interior of \(\gamma_{Z}\). Therefore, we can transform the contours \(\gamma_{W}\) and \(\gamma_{Z}\) to the contours \(\gamma_{W}^{\mathrm{new}}\) and \(\gamma_{Z}^{\mathrm{new}}\) from Lemma 34 without crossing any poles. Thus, by the residue theorem, we get \[\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{1},t_{1}))=- \frac{1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{Z}^{\mathrm{new}}}\oint_{\gamma_ {W}^{\mathrm{new}}}\operatorname{Int}_{N}(W,Z)\,\mathrm{d}W\,\mathrm{d}Z\,. \tag{66}\] We recall that \(\operatorname{Int}_{N}(W,Z)\simeq(\sqrt{N})^{x_{2}-x_{1}}\mathrm{e}^{\sqrt{N}( S(W)-S(Z))}\,h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\) on \(D_{S}\) and that \(\mathfrak{R}S(W)\leq\mathfrak{R}S(U_{c}^{\prime})<\mathfrak{R}S(U_{c})\leq \mathfrak{R}S(Z)\) for \((W,Z)\in\gamma_{W}^{\mathrm{new}}\times\gamma_{Z}^{\mathrm{new}}\) (see Lemmas 33 and 34). 
A difficulty comes from the fact that the right-most intersection point \(W^{+}\) of \(\gamma_{W}^{\mathrm{new}}\) with the real axis might lie outside \(D_{S}\) (\(W^{+}\) coincides with the point \(\overline{x}\) in the proof of Lemma 34). Nevertheless, the integrand can be controlled around this point thanks to the third part of Lemma 23. The same arguments as in the proof of Proposition 31 show that \(\operatorname{Int}_{N}(W,Z)\) converges pointwise to \(0\) and that the integrand is bounded by the integrable function \(h_{(x_{2},t_{2})}^{(x_{1},t_{1})}(W,Z)\). Dominated convergence applies, proving (65).

The convergence in distribution to the empty set then follows from the general identity for determinantal point processes
\[\mathbb{E}\Big{[}\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}(A)\Big{]}=\int_{A}\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{1},t_{1}))\,\mathrm{d}\lambda(x_{1},t_{1}),\]
for any bounded subset \(A\subset\mathbb{Z}\times\mathbb{R}\), where we recall that \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}(A)\) denotes the number of beads of \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) contained in the set \(A\). By dominated convergence, these expected numbers of beads go to \(0\), so \((\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}(B_{i}))_{1\leq i\leq k}\) converges in probability to \((0,\dots,0)\) for any collection of bounded subsets \((B_{i})_{1\leq i\leq k}\). The discussion from Section 2.2 shows that this is equivalent to the convergence in distribution towards the empty set.

We now discuss the case where the critical equation has a double root \(U_{c}=U_{c}^{\prime}\) in the interval \((-\infty,\eta\,a_{0}-x_{0})\). Then \(u\mapsto\mathfrak{R}S(u)\) is increasing on \((-\infty,\eta\,a_{0}-x_{0})\) (this is an immediate analogue of Lemma 33), and the action can be written locally as \(S(z)=S(U_{c})+\frac{S^{\prime\prime\prime}(U_{c})}{6}(z-U_{c})^{3}+O\left((z-U_{c})^{4}\right)\), with \(S^{\prime\prime\prime}(U_{c})>0\). Fig. 15 shows the imaginary level lines of \(S\), and the \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\) and \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\) regions around \(U_{c}\). In particular, we can find new integration contours as in Lemma 34 except that they now meet at \(U_{c}=U_{c}^{\prime}\). Note that, in this case, the integrand in (66) has a singularity at \(Z=W=U_{c}\), where it behaves as \(O((Z-W)^{-1})\). This is similar to the setting of Proposition 31 and we can therefore apply dominated convergence using the same arguments. We conclude that the renormalized correlation kernel also tends to \(0\), finishing the proof of the proposition.

The case \(x_{0}\in(0,\eta\,a_{m})\) is essentially treated in the same way, with the following modifications. By Proposition 25, the two relevant real critical points \(U_{c}\) and \(U_{c}^{\prime}\) live in \((\eta\,a_{m}-x_{0},+\infty)\). To simplify the discussion we assume \(U_{c}\neq U_{c}^{\prime}\). Using the convention \(U_{c}<U_{c}^{\prime}\), Lemma 33 still holds. An analog of Lemma 34 also holds, with the important change that in this case \(\gamma_{Z}^{\mathrm{new}}\) lies in the interior of \(\gamma_{W}^{\text{new}}\) (and not the opposite). Since the contours in (46) satisfy that \(\gamma_{W}\) is inside \(\gamma_{Z}\) (for \(t_{1}=t_{2}\)), moving them to \(\gamma_{Z}^{\text{new}}\) and \(\gamma_{W}^{\text{new}}\) yields a residue term related to the pole \(Z=W\) (as in (60), except that the residue term appears for any \(W\) in \(\gamma_{W}^{\text{new}}\)).
For \(x_{1}=x_{2}\), \(t_{1}=t_{2}\), recalling (41), the residue of the integrand in (66) related to the pole \(Z=W\) is simply \(\frac{1}{1-t_{0}-\frac{t_{1}}{\sqrt{N}}}\). Therefore, Eq. (66) is replaced by: \[\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{1},t_{1}))=-\frac {1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{Z}^{\text{new}}}\oint_{\gamma_{W}^{ \text{new}}}\operatorname{Int}_{N}(W,Z)\,\mathrm{d}W\,\mathrm{d}Z-\frac{1}{2 \mathrm{i}\pi}\oint_{\gamma_{W}^{\text{new}}}\frac{1}{1-t_{0}-\frac{t_{1}}{ \sqrt{N}}}\,\mathrm{d}W\,.\] But the second integral is obviously \(0\), so that (65) holds also in this case. We conclude as above and obtain that also in this regime, the bead process \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) tends in distribution to the empty set. The same conclusion also holds for the following remaining three cases: when \(x_{0}=0\) (in this case, by Proposition 25, the only admissible value for \(t_{0}\) is \(t_{0}=0\)), when \(x_{0}=\eta\,a_{0}\), and when \(x_{0}=\eta\,a_{m}\). We leave to the reader the details of these three remaining cases, pointing out that one needs some little modifications in the same spirit as Remark 26. Combining all the results in this section, we obtain the following final result for the small \(t\) region. **Proposition 36**.: _Let \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and \(0\leq t_{0}\leq t_{-}(x_{0})\). Then the bead process \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) tends in probability to the empty set._ ### The large \(t\) region In this section, we fix \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and we let \(t_{+}\leq t_{0}\leq 1\), where \(t_{+}=t_{+}(x_{0})\) is given by Proposition 25. For the sake of brevity, we restrict ourselves to the case when there exists \(i_{0}\geq 1\) such that \(x_{0}\in(\eta\,a_{i_{0}-1},\eta\,b_{i_{0}})\). The case \(x_{0}\in(\eta\,b_{i_{0}},\eta\,a_{i_{0}})\) for some \(i_{0}\geq 1\) and the case \(x_{0}=\eta\,a_{i_{0}}\) for some \(i_{0}\geq 0\) are similar (for the case \(x_{0}=\eta\,a_{i_{0}}\), we refer to the discussion in Remark 26). Finally, for \(x_{0}=\eta\,b_{i_{0}}\), the only admissible value for \(t_{0}\) in the large \(t\) region is \(t_{0}=1\) (since Proposition 25 states that \(t_{+}(\eta\,b_{i_{0}})=1\)); also in this case we omit the simple necessary modifications. From Proposition 25, we know that the critical equation (8) has only real solutions, two of which, say \(U_{c}\leq U_{c}^{\prime}\), are in the interval \((0,\eta\,b_{i_{0}}-x_{0})\). We will discuss here the case \(U_{c}<U_{c}^{\prime}\), the case of a double critical point \(U_{c}=U_{c}^{\prime}\) being obtained with simple modifications similar to those discussed in the previous subsection. Figure 15. **Left:** The landscape of the action \(S\) around a double critical point on the real line in the case of the small \(t\) region \(\{0\leq t_{0}\leq t_{-}(x_{0})\}\). Plain lines are imaginary level lines, dotted lines are the real level lines. We also indicated the alternation of different regions: the yellow regions correspond to \(\{\mathfrak{R}S(U)<\mathfrak{R}S(U_{c})\}\), while the white regions correspond to \(\{\mathfrak{R}S(U)>\mathfrak{R}S(U_{c})\}\). **Right:** The new integration contour in the case of a double critical point (to be compared with the left-hand side of Fig. 14). 
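As an aside on the local picture in Fig. 15 (a standard consequence of the cubic expansion recalled above, added for orientation): writing \(z-U_{c}=\rho\,\mathrm{e}^{\mathrm{i}\varphi}\), the expansion \(S(z)-S(U_{c})\simeq\frac{S^{\prime\prime\prime}(U_{c})}{6}(z-U_{c})^{3}\) gives
\[\mathfrak{R}\big{(}S(z)-S(U_{c})\big{)}\simeq\frac{S^{\prime\prime\prime}(U_{c})}{6}\,\rho^{3}\cos(3\varphi),\]
so, since \(S^{\prime\prime\prime}(U_{c})>0\), the set \(\{\mathfrak{R}S<\mathfrak{R}S(U_{c})\}\) consists, near the double critical point, of the three sectors where \(\cos(3\varphi)<0\), alternating with three sectors where \(\mathfrak{R}S>\mathfrak{R}S(U_{c})\); this is the alternation of yellow and white sectors around \(U_{c}\) visible in the figure.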
**Lemma 37**.: _The critical point \(U_{c}\) and \(U_{c}^{\prime}\) are respectively a local minimum and a local maximum of the function \(u\to\mathfrak{R}S(u)\) on the real line and we have \(\mathfrak{R}S(U_{c})<\mathfrak{R}S(U_{c}^{\prime})\)._ Proof.: The lemma follows from the fact that \(u\to\mathfrak{R}S(u)\) has negative infinite slopes at the points \(0\) and \(\eta\,b_{i_{0}}-x_{0}\) and that there are no local extrema other than \(U_{c}\) and \(U_{c}^{\prime}\) in the interval \((0,\eta\,b_{i_{0}}-x_{0})\). We now see how to move integration contours. To this end, let us first remark that when \(x_{0}\in(\eta\,a_{i_{0}-1},\eta\,b_{i_{0}})\) the \(Z\) poles of \(\operatorname{Int}_{N}(W,Z)\) all lie in a strictly smaller sub-interval of the interval \(I_{Z}=[o(1),\eta\,a_{m}-x_{0}-1]\) considered so far. Indeed, from (41), one can see that the only \(Z\)-poles of \(\operatorname{Int}_{N}(W,Z)\) are a subset of the poles of \(F_{\lambda_{N}}(\sqrt{N}(Z+x_{0}))\). But, from (28) and Fig. 9, we know that the latter function has no poles in the intervals of the form \((\eta\,a_{i-1}-x_{0},\eta\,b_{i}-x_{0})\) and so, in particular, the \(Z\) poles of \(\operatorname{Int}_{N}(W,Z)\) all lie in \(\widetilde{I}_{Z}=[\eta b_{i_{0}}-x_{0},\eta\,a_{m}-x_{0}-1]\subset I_{Z}\). **Lemma 38**.: _There exist two integration contours \(\gamma_{W}^{new}\) and \(\gamma_{Z}^{new}\) (both followed in counterclockwise order) such that_ * \(\gamma_{W}^{new}\) _and_ \(\gamma_{Z}^{new}\) _have disjoint interiors;_ * \(\gamma_{W}^{new}\) _(resp._ \(\gamma_{Z}^{new}\)_) contains_ \(I_{W}\) _(resp._ \(\widetilde{I}_{Z}\)_) in its interior;_ * \(\gamma_{W}^{new}\) _(resp._ \(\gamma_{Z}^{new}\)_) lies inside the region_ \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c})\}\) _(resp._ \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c}^{\prime})\}\)_)._ These contours are shown in the middle picture of Fig. 14. Proof.: The proof is similar to that of Lemma 34. We consider the imaginary level lines \(\ell\) and \(\ell^{\prime}\) leaving from \(U_{c}\) and \(U_{c}^{\prime}\) orthogonally to the real line and in the upper-half plane. The level line \(\ell\) (resp. \(\ell^{\prime}\)) lies inside the region \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c})\}\) (resp. \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U_{c}^{\prime})\}\)). Moreover, they cannot go back to the the real line. Indeed, the value \(\mathfrak{I}S(z)\) on the real line is maximal on the interval \((0,\eta\,b_{i_{0}}-x_{0})\) (see Section 3.4 and Fig. 11) so that if \(\ell\) or \(\ell^{\prime}\) goes back to the real line, it has to be within this interval. But this would mean that \(S\) has a third critical point in this interval, which is impossible by Lemma 24 and Proposition 25. We conclude that both \(\ell\) and \(\ell^{\prime}\) go to infinity. Since \(S(U)\sim|\log(1-t_{0})|\,U\) for large \(|U|\) by (48), necessarily \(\ell\) goes to infinity in the negative real direction, while \(\ell^{\prime}\) goes to infinity in the positive real direction. It is then possible to follow \(\ell\) long enough, and then join the negative real axis while staying in the region \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c})\}\). Completing the path by symmetry gives the contour \(\gamma_{W}^{new}\). The contour \(\gamma_{Z}^{new}\) is constructed similarly from \(\ell^{\prime}\). We finally note that the claim in the second item follows from the discussion above the lemma statement. **Proposition 39**.: _Let \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and \(t_{+}(x_{0})\leq t_{0}\leq 1\). 
Then, locally uniformly for \((x_{1},t_{1})\in\mathbb{Z}\times\mathbb{R}\), we have that_ \[\lim_{N\to+\infty}\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x _{1},t_{1}))=0.\] _As a consequence, \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) tends in probability to the empty set._ Proof.: As already explained, we give details only in the case when there exists \(i_{0}\geq 1\) such that \(x_{0}\in(\eta\,a_{i_{0}-1},\eta\,b_{i_{0}})\). Again, we are interested in (46) for \(x_{1}=x_{2}\) and \(t_{1}=t_{2}\). The original contours are such that \(\gamma_{W}\) lies in the interior of \(\gamma_{Z}\). As in the case \(x_{0}>0\) and \(0\leq t_{0}\leq t_{-}(x_{0})\) in the previous section, moving the contour to those of Lemma 38 yields the following: \[\widetilde{K}_{\lambda_{N}}^{(x_{0},t_{0})}((x_{1},t_{1}),(x_{1},t_{1}))=-\frac {1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma_{Z}^{new}}\oint_{\gamma_{W}^{new}} \operatorname{Int}_{N}(W,Z)\,\mathrm{d}W\,\mathrm{d}Z-\frac{1}{2\mathrm{i} \pi}\oint_{\gamma_{W}^{new}}\frac{1}{1-t_{0}-\frac{t_{1}}{\sqrt{N}}}\,\mathrm{ d}W\,.\] The second integral is identically \(0\), while the first tends to \(0\) as \(N\) tends to \(+\infty\) using the same argument as before. This ends the proof of the proposition. ### The intermediate \(t\) region Fix \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\). As seen in Section 1.4.2, it might happen that there is some \(t_{0}\in(t_{-}(x_{0}),t_{+}(x_{0}))\) in the frozen region, i.e. such that the critical equation (8) has only real roots. In this case, thanks to the third item of Proposition 25, there are three roots - denoted by \(U_{c}\leq U^{\prime}_{c}\leq U^{\prime\prime}_{c}\) - that are either all inside a negative interval \((\eta\,b_{j_{0}}-x_{0},\eta\,a_{j_{0}}-x_{0})\) for some \(j_{0}<i_{0}\), or all inside a positive interval \((\eta\,a_{j_{0}}-x_{0},\eta\,b_{j_{0}+1}-x_{0})\) for some \(j_{0}\geq i_{0}\). These two cases are treated similarly, we will therefore assume that the three roots are in \((\eta\,b_{j_{0}}-x_{0},\eta\,a_{j_{0}}-x_{0})\) for some \(j_{0}<i_{0}\). Also, we assume \(U_{c}<U^{\prime}_{c}<U^{\prime\prime}_{c}\), and let the reader convince himself that multiple critical points do not create additional difficulties. The following lemma is proven similarly as before. **Lemma 40**.: _The critical point \(U_{c}\), \(U^{\prime}_{c}\,U^{\prime\prime}_{c}\) are respectively local minimum, maximum and minimum of the function \(u\to\mathfrak{R}S(u)\) on the real line and we have both \(\mathfrak{R}S(U_{c})<\mathfrak{R}S(U^{\prime}_{c})>\mathfrak{R}S(U^{\prime \prime}_{c})\)._ This helps us to construct integration contours as follows. With an argument similar to the one used above Lemma 38, we note that there are no \(W\)-poles in the interval \((\eta\,b_{j_{0}}-x_{0},\eta\,a_{j_{0}}-x_{0})\). A major difference is that now the \(W\) integration contour is split into two disjoint parts, the union of their interiors containing all \(W\) poles. 
**Lemma 41**.: _There exist integration contours \(\gamma^{\text{new}}_{W,1}\), \(\gamma^{\text{new}}_{W,2}\) and \(\gamma^{\text{new}}_{Z}\) (all followed in counterclockwise order) such that_ * \(\gamma^{\text{new}}_{W,1}\) _and_ \(\gamma^{\text{new}}_{Z}\) _have disjoint interiors, and_ \(\gamma^{\text{new}}_{W,2}\) _lies inside_ \(\gamma^{\text{new}}_{Z}\)_;_ * \(\gamma^{\text{new}}_{W,1}\) _(resp._ \(\gamma^{\text{new}}_{W,2}\) _and_ \(\gamma^{\text{new}}_{Z}\)_) contains_ \((\eta\,a_{0}-x_{0},\eta\,b_{j_{0}}-x_{0})\) _(resp._ \((\eta\,a_{j_{0}}-x_{0},0)\) _and_ \(I_{Z}\)_) in its interior;_ * \(\gamma^{\text{new}}_{W,1}\) _(resp._ \(\gamma^{\text{new}}_{W,2}\)_) lies inside the region_ \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U_{c})\}\) _(resp._ \(\{\mathfrak{R}S(U)\leq\mathfrak{R}S(U^{\prime\prime}_{c})\}\)_);_ * \(\gamma^{\text{new}}_{Z}\) _lies inside the region_ \(\{\mathfrak{R}S(U)\geq\mathfrak{R}S(U^{\prime}_{c})\}\)_._ These contours are shown in the right-hand side of Fig. 14. Proof.: The strategy of proof is again the same. We consider the three imaginary lines \(\ell\), \(\ell^{\prime}\) and \(\ell^{\prime\prime}\) leaving from \(U_{c}\), \(U^{\prime}_{c}\) and \(U^{\prime\prime}_{c}\). One can prove that \(\ell^{\prime\prime}\) goes back to the real axis at some point \(x>0\), while \(\ell\) and \(\ell^{\prime}\) go to infinity respectively in the negative and positive directions. Symmetrizing \(\ell^{\prime\prime}\) gives the contour \(\gamma^{\text{new}}_{W,2}\). On the other hand, following \(\ell\) and \(\ell^{\prime}\) for long enough and then joining the real line (plus symmetrizing) gives \(\gamma^{\text{new}}_{W,1}\) and \(\gamma^{\text{new}}_{Z}\). **Proposition 42**.: _Let \(x_{0}\in[\eta\,a_{0},\eta\,a_{m}]\) and \(t_{0}\in(t_{-}(x_{0}),t_{+}(x_{0}))\) such that the critical equation (8) has only real roots. Then, locally uniformly for \((x_{1},t_{1})\in\mathbb{Z}\times\mathbb{R}\), we have that_ \[\lim_{N\to+\infty}\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}((x_{1},t_{1}),(x _{1},t_{1}))=0.\] _As a consequence, \(\widetilde{M}^{(x_{0},t_{0})}_{\lambda_{N}}\) tends in probability to the empty set._ Proof.: Again, we consider (46) for \(x_{1}=x_{2}\) and \(t_{1}=t_{2}\). The original contours are such that \(\gamma_{W}\) lies in the interior of \(\gamma_{Z}\). Moving the contour to those of Lemma 41 (note that \(\gamma^{\text{new}}_{W,1}\) and \(\gamma^{\text{new}}_{W,2}\) together enclose the same \(W\)-poles as \(\gamma_{W}\)) yields a residue term which is an integral over \(\gamma^{\text{new}}_{W,1}\). We get \[\widetilde{K}^{(x_{0},t_{0})}_{\lambda_{N}}((x_{1},t_{1}),(x_{1},t_{1}))=-\frac {1}{(2\mathrm{i}\pi)^{2}}\oint_{\gamma^{\text{new}}_{Z}}\oint_{\gamma^{\text{ new}}_{W}}\mathrm{Int}_{N}(W,Z)\,\mathrm{d}W\,\mathrm{d}Z-\frac{1}{2 \mathrm{i}\pi}\oint_{\gamma^{\text{new}}_{W,1}}\frac{1}{1-t_{0}-\frac{t_{1}}{ \sqrt{N}}}\,\mathrm{d}W\,.\] Again, the second integral is identically \(0\), while the first tends to \(0\) as \(N\) tends to \(+\infty\). This ends the proof of the proposition. Propositions 36, 39, and 42 completes the proof of the second item of Theorem 15. ## 6. The limiting height function and the continuity of the limiting surface In this section, we prove Theorems 4 and 7. 
Recall (from Theorem 2) that \(H^{\infty}:[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\to\mathbb{R}\) is the deterministic limiting height function of the sequence of random height functions \(\frac{1}{\sqrt{N}}H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\) coming from the bead processes associated with a uniform random (Poissonized) Young tableau of fixed shape \(\lambda^{0}\). We want to show that
\[H^{\infty}(x,t)=\frac{1}{\pi}\int_{0}^{t}\alpha(x,s)\,\mathrm{d}s\,,\quad\text{for all }(x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1].\]

Proof of Theorem 4.: We fix \((x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). Recalling the boundary conditions in (5), we note that
\[\frac{1}{\sqrt{N}}H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\leq\frac{1}{2\sqrt{N}}\left(\omega_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor)-|\lfloor x\sqrt{N}\rfloor|\right)=\frac{1}{2}(\omega_{\eta\lambda^{0}}(x)-|x|),\quad\text{for all }N>0.\]
Therefore, by the dominated convergence theorem, the convergence in Theorem 2 implies that for every fixed \((x,t)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\),
\[H^{\infty}(x,t)=\lim_{N\to+\infty}\frac{1}{\sqrt{N}}\mathbb{E}\big{[}H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\big{]}.\]
We will prove that the right-hand side converges to \(\frac{1}{\pi}\int_{0}^{t}\alpha(x,s)ds\), implying Theorem 4 by uniqueness of the limit.

By definition, \(H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\) is the number of beads in the bead process \(M_{\lambda_{N}}\) which lie on the thread at position \(\lfloor x\sqrt{N}\rfloor\) and with height in \([0,t]\). Since \(M_{\lambda_{N}}\) is a determinantal point process with kernel \(K_{\lambda_{N}}\), we have
\[\mathbb{E}\big{[}H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\big{]}=\int_{0}^{t}K_{\lambda_{N}}\big{(}(\lfloor x\sqrt{N}\rfloor,s),(\lfloor x\sqrt{N}\rfloor,s)\big{)}ds.\]
With the notation in (40), we get
\[\frac{1}{\sqrt{N}}K_{\lambda_{N}}\big{(}(\lfloor x\sqrt{N}\rfloor,s),(\lfloor x\sqrt{N}\rfloor,s)\big{)}=\widetilde{K}_{\lambda_{N}}^{(x,s)}((0,0),(0,0)).\]
Recall now that \(\alpha(x,s):=\frac{\mathfrak{I}U_{c}}{1-s}\mathds{1}_{(x,s)\in L}\), where \(L\) denotes the liquid region. If \((x,s)\) is in the liquid region, Eqs. (59) and (61) imply that
\[\lim_{N\to+\infty}\widetilde{K}_{\lambda_{N}}^{(x,s)}((0,0),(0,0))=K_{\infty}((0,0),(0,0))=\frac{1}{2i\pi}\int_{\overline{U_{c}}}^{U_{c}}\frac{dW}{1-s}=\frac{1}{\pi}\frac{\mathfrak{I}U_{c}}{1-s}=\frac{\alpha(x,s)}{\pi},\]
where, to evaluate the contour integral, we used a vertical path from \(\overline{U_{c}}\) to \(U_{c}\), as in the beginning of the proof of Lemma 32. On the other hand, if \((x,s)\) is outside the liquid region, we have, by Propositions 36, 39 and 42,
\[\lim_{N\to+\infty}\widetilde{K}_{\lambda_{N}}^{(x,s)}((0,0),(0,0))=0=\frac{\alpha(x,s)}{\pi}.\]
Hence, we get that \(\lim_{N\to+\infty}\widetilde{K}_{\lambda_{N}}^{(x,s)}((0,0),(0,0))=\frac{\alpha(x,s)}{\pi}\), for all \((x,s)\in[\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). Moreover, since integration contours, integrands and critical points are continuous functions of \((x,s)\), and since all asymptotic estimates given are uniform on compact sets, the kernel convergence is also uniform on compact sets. We conclude that
\[\lim_{N\to+\infty}\frac{1}{\sqrt{N}}\,\mathbb{E}\big{[}H_{\lambda_{N}}(\lfloor x\sqrt{N}\rfloor,t)\big{]}=\lim_{N\to+\infty}\int_{0}^{t}\widetilde{K}_{\lambda_{N}}^{(x,s)}((0,0),(0,0))ds=\frac{1}{\pi}\int_{0}^{t}\alpha(x,s)ds,\]
proving Theorem 4.
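(For the reader's convenience, the elementary evaluation used in the display above is
\[\frac{1}{2i\pi}\int_{\overline{U_{c}}}^{U_{c}}\frac{dW}{1-s}=\frac{U_{c}-\overline{U_{c}}}{2i\pi(1-s)}=\frac{2\mathrm{i}\,\mathfrak{I}U_{c}}{2i\pi(1-s)}=\frac{1}{\pi}\frac{\mathfrak{I}U_{c}}{1-s},\]
independently of the chosen path, since the integrand does not depend on \(W\).)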
We now turn to the proof of the continuity criterion for the limiting surface \(T^{\infty}\) stated in Theorem 7. Proof of Theorem 7.: Recall the definition of the quantities \(T_{-}^{\infty}(x,y)\) and \(T_{+}^{\infty}(x,y)\) from (12) and recall also the definition of \(t_{-}(x)\) and \(t_{+}(x)\) from Proposition 25, noting that \(t_{-}(x)=\inf\left\{\partial L\cap\left(\{x\}\times[0,1]\right)\right\}\) and \(t_{+}(x)=\sup\left\{\partial L\cap\left(\{x\}\times[0,1]\right)\right\}\). From Remark 6, we know that the limiting surface \(T^{\infty}\) is continuous in its domain \(D_{\lambda^{0}}\) if and only if \(T_{-}^{\infty}(x,y)=T_{+}^{\infty}(x,y)\) for all \((x,y)\in D_{\lambda^{0}}\). The latter condition is equivalent to the following property of the liquid region \(L\): for all \(x\in[\eta\,a_{0},\eta\,a_{m}]\), \[[t_{-}(x),t_{+}(x)]\setminus\{L\cap\left(\{x\}\times[0,1]\right)\}\text{ does not contain any non-empty open interval}. \tag{67}\] Indeed, if this is not true, i.e. for some \(x\in[\eta\,a_{0},\eta\,a_{m}]\) there exists an non-empty open interval \(I\) contained in \([t_{-}(x),t_{+}(x)]\setminus\{L\cap\left(\{x\}\times[0,1]\right)\}\), then there exists some constant \(c>0\) such that \(H^{\infty}(x,t)=c\) for all \(t\in I\) and so \(T_{-}^{\infty}(x,2c-|x|)<T_{+}^{\infty}(x,2c-|x|)\). Now recall the parametrization of the frozen boundary curve \(\partial L\) given in Proposition 27. We observe that a necessary and sufficient condition such that (67) holds for all \(x\in[\eta\,a_{0},\eta\,a_{m}]\) is that the tangent vector \((\dot{x}(s),\dot{t}(s))\) at the cusp points12 of \(\partial L\) is vertical. Requesting that the tangent vector \((\dot{x}(s),\dot{t}(s))\) is vertical at a certain point \((x(s),t(s))\) of the curve \(\partial L\) is equivalent to impose that its (absolute) slope is zero, that is \(|G(s)|=+\infty\). The solutions to the latter equation are \(s=\eta\,a_{i}\) for \(i=0,\dots,m\). Moreover, the cusp points of \(\partial L\) are given by the solutions to the equations: Footnote 12: We recall that for a plane curve defined by analytic parametric equations \(x(t)=f(t)\) and \(y(t)=g(t)\), a _cusp_ is a point where both derivatives of \(f\) and \(g\) are zero, and the directional derivative, in the direction of the tangent, changes sign. \[\begin{cases}\dot{x}(s)=1+\frac{\dot{\Sigma}(s)}{\dot{\Sigma}(s)^{2}}=0,\\ \dot{t}(s)=G(s)\left(1+\frac{\Sigma(s)}{\Sigma(s)^{2}}\right)=0,\end{cases}\] where the directional derivative, in the direction of the tangent, changes sign. One can check that \((x(s),t(s))\), for \(s=\eta\,a_{0}\) and \(s=\eta\,a_{m}\), are not cusp points. Indeed, these two points are the two points where the frozen boundary curve \(\partial L\) is tangent to the two vertical boundaries of \([\eta\,a_{0},\eta\,a_{m}]\times[0,1]\). Hence, it is enough to impose that for all \(i=1,\dots,m-1\), \[\begin{cases}\dot{x}(\eta\,a_{i})=0,\\ \dot{t}(\eta\,a_{i})=0.\end{cases}\] With some standard computation, one can note that \(\dot{x}(\eta\,a_{i})=\lim_{s\to\eta a_{i}}1+\frac{\dot{\Sigma}(s)}{\dot{ \Sigma}(s)^{2}}=0\) for all \(i=1,\dots,m-1\). Therefore, the only non-trivial condition is to impose that \(\dot{t}(\eta\,a_{i})=0\) for all \(i=1,\dots,m-1\). 
Again, with some standard computations, one can check that \[\dot{t}(\eta\,a_{i})=\lim_{s\to\eta\,a_{i}}G(s)\left(1+\frac{\dot{\Sigma}(s)}{ \dot{\Sigma}(s)^{2}}\right)=\frac{2\left(\sum_{j=0,j\neq i}^{m}\frac{1}{\eta\,a _{i}-\eta\,a_{j}}-\sum_{j=1}^{m}\frac{1}{\eta\,a_{i}-\eta\,b_{j}}\right)\prod _{j=1}^{m}(\eta\,a_{i}-\eta\,b_{j})}{\prod_{j=0,j\neq i}^{m}(\eta\,a_{i}-\eta \,a_{j})},\] concluding that the necessary and sufficient conditions for the continuity of \(T^{\infty}\) are the ones in (15). ## 7. Applications In this section we prove the applications discussed in Section 1.4. Proof of Proposition 8.: We first prove the formula for \(H_{r}^{\infty}(x,t)\). From Theorem 4, we have \[H_{r}^{\infty}(x,t)=\frac{1}{\pi}\int_{0}^{t}\alpha_{r}(x,s)\,\mathrm{d}s\,,\] where \(\alpha_{r}(x,s)=\frac{\mathcal{U}_{r}}{1-s}\mathds{1}_{(x,s)\in L}\). In this case the critical equation becomes \[U\,\big{(}x-\sqrt{r}+\tfrac{1}{\sqrt{r}}+U\big{)}=(1-t)\big{(}x+\tfrac{1}{ \sqrt{r}}+U\big{)}\big{(}x-\sqrt{r}+U\big{)}.\] Solving this quadratic equation and integrating the imaginary part of the solution gives the formula (16) for \(H_{r}^{\infty}\). The fact that the limiting surface \(T_{r}^{\infty}(x,y)\) is continuous for all \(r>0\) is a consequence of Theorem 7. Proof of Proposition 10.: From Theorem 7 we know that the surface \(T_{p,q,r}^{\infty}\) is continuous in its domain if and only if \[\frac{1}{a_{1}-a_{0}}+\frac{1}{a_{1}-a_{2}}=\frac{1}{a_{1}-b_{1}}+\frac{1}{a_{1 }-b_{2}},\] with the coefficients \(a_{0},a_{1},a_{2},b_{1},b_{2}\) as in Eq. (20). That is, \[\frac{1}{1+p}-\frac{2}{2+p-q}-\frac{2}{p+q-2r}+\frac{1}{p-r}=0.\] Solving the above equation, one gets the solutions claimed in the proposition statement. ## 8. Local limits for random Young tableaux The goal of this section is to complete the proof of Corollary 18. ### Local topology for standard Young tableaux We start with some definitions. We recall that a _marked standard Young tableau_ is a triple \((\lambda,T,(x,y))\) where \((\lambda,T)\) is a standard Young tableau of shape \(\lambda\) and \((x,y)\) are the coordinates of a distinguished box in \(\lambda\) (see the left-hand side of Fig. 16 for an example). A _marked Poissonized Young tableau_ is defined analogously. We introduce a family of _restriction functions_ for marked standard Young tableaux/Poissonized Young tableaux. Fix \(h\in\mathbb{N}\). Given a marked standard Young tableau/Poissonized Young tableau \((\lambda,T,(x,y))\), we denote by \(r_{h}\left(\lambda,T,(x,y)\right)\) the standard Young tableau \((\diamond_{h},R)\), where \(\diamond_{h}\) is a square Young diagram of size \(h^{2}\) and \(R\) is a filling of the boxes of \(\diamond_{h}\) obtained as follows:13 Footnote 13: We highlight the fact that by definition the restriction functions \(r_{h}\) give always back a standard Young tableau (even in the case of Poissonized Young tableaux). Note that the rescaling procedure in step (2) is well-defined also when \((\lambda,T,(x,y))\) is a marked Poissonized Young tableau since we restrict to the case where the mapping \(T\) has distinguished values. 1. first, we define \(\widetilde{R}(x^{\prime},y^{\prime})=T(x+x^{\prime},y+y^{\prime})\) for all \((x^{\prime},y^{\prime})\in\diamond_{h}\), where we set \(R(x^{\prime},y^{\prime})=*\) if \(T(x+x^{\prime},y+y^{\prime})\) is not well-defined (see the middle picture of Fig. 16); 2. 
second, we rescale the values of the map \(\widetilde{R}\) obtaining a new map \(R\) so that \((\diamond_{h},R)\) is a standard Young tableau and the values of \(R\) have the same relative order as the values of \(\widetilde{R}\), and the boxes filled by \(*\) are kept as they are (see the right-hand side of Fig. 16). Figure 16. **Left:** A marked standard Young tableau \((\lambda,T,(x,y))\) with the marked box highlighted in red at position \((x,y)=(4,5)\). **Middle:** The pair \((\diamond_{4},\widetilde{R})\) obtained from the first step in the definition of the restriction function \(r_{4}\left(\lambda,T,(x,y)\right)\). **Right:** The pair \((\diamond_{4},R)\) obtained from the second step in the definition of the restriction function \(r_{4}\left(\lambda,T,(x,y)\right)\). An _infinite standard Young tableau_ is a pair \((\diamond_{\infty},T)\) where \(\diamond_{\infty}\) is the infinite Young diagram formed by all the boxes at positions \((x,y)\in\mathbb{Z}^{2}\) such that \(x+y\) is odd and \(y>|x|\) (recall we are using the Russian notation; see the left-hand side of Fig. 1, p. 3) and \(T:\diamond_{\infty}\to\mathbb{Z}_{\geq 1}\) is a bijection that is increasing along rows and columns. An infinite standard Young tableau is always (implicitly) marked at the box \((0,1)\). An example of an infinite standard Young tableau is given in Fig. 17. **Definition 43**.: _Given a sequence of marked standard Young tableaux/Poissonized Young tableaux \((\lambda_{n},T_{n},(x_{n},y_{n}))_{n}\) and an infinite standard Young tableau \((\diamond_{\infty},T)\), we say that \((\lambda_{n},T_{n},(x_{n},y_{n}))_{n}\) locally converges to \((\diamond_{\infty},T)\) if for every \(h\in\mathbb{N}\),_ \[r_{h}(\lambda_{n},T_{n},(x_{n},y_{n}))\xrightarrow[n\to\infty]{}r_{h}\left( \diamond_{\infty},T\right).\] The space of marked standard Young tableaux/Poissonized Young tableaux and infinite standard Young tableaux with the topology induced by the local convergence defined in Definition 43 is a metrizable Polish space, so one can consider convergence in distribution with respect to this local topology. ### Local convergence for Poissonized Young tableaux We first prove that the random infinite standard Young tableau from Definition 16 is well-defined. Recall the definitions of the functions \(H_{\beta}\) and \(R_{\beta}\) of the infinite bead process \(M_{\beta}\) from the discussion above Definition 16. **Proposition 44**.: _Fix a parameter \(\beta\in(-1,1)\). Then the heights_ \[\{H_{\beta}(x,y),\,(x,y)\in\diamond_{\infty}\}\] _are a.s. all distinct and they have a.s. no accumulation points._ Proof.: The infinite bead process \(M_{\beta}\) is a determinantal point process. Hence the distribution of the heights has a joint density with respect to the Lebesgue measure and so, it puts zero measure on vectors with two equal coordinates. Therefore the heights \(\{H_{\beta}(x,y),\,(x,y)\in\diamond_{\infty}\}\) are a.s. all distinct. We now prove that there are a.s. no accumulation points. It is enough to prove that \(H_{\beta}(k,k+1)\) tends to \(+\infty\) a.s., as \(k\) tends to infinity. Indeed, assume this is true. By symmetry \(H_{\beta}(-k,k+1)\) also tends to \(+\infty\). But the interlacing condition implies that there are at most \(k^{2}\) pairs \((x,y)\) in \(\diamond_{\infty}\) with \(H_{\beta}(x,y)\leq\min(H_{\beta}(k,k+1),H_{\beta}(-k,k+1))\). 
Therefore, if \(\min(H_{\beta}(k,k+1),H_{\beta}(-k,k+1))\) tends to \(+\infty\), then the set \(\{H_{\beta}(x,y),\,(x,y)\in\diamond_{\infty}\}\) cannot have accumulation points.

To prove that \(H_{\beta}(k,k+1)\) tends to \(+\infty\) a.s., we write
\[H_{\beta}(k,k+1)-H_{\beta}(0,1)=\sum_{j=1}^{k}\big{(}H_{\beta}(j,j+1)-H_{\beta}(j-1,j)\big{)}.\]
Since bead models are translation invariant, we can use the Birkhoff ergodic theorem and conclude that, a.s.,
\[\lim_{k\to\infty}\frac{H_{\beta}(k,k+1)-H_{\beta}(0,1)}{k}=\mathbb{E}\big{[}H_{\beta}(1,2)-H_{\beta}(0,1)\big{]}>0.\qed\]

Figure 17. An example of an infinite standard Young tableau.

We now complete the proof of Corollary 18.

Proof of Corollary 18.: Fix \(h\in\mathbb{N}\). In order to prove the result it is enough to show that \(r_{h}\left(\lambda_{N},T_{N},\square_{N}\right)\) converges in distribution to \(r_{h}\left(\diamond_{\infty},R_{\beta}\right)\). Recall from Section 1.5.3 that \((\diamond_{\infty},R_{\beta})\) is constructed starting from an infinite bead process \(M_{\beta}\) of skewness \(\beta\) (and intensity \(\alpha=1\)). Let \(X_{h}\) be the height of the \(h\)-th bead above \(0\) in the thread of index \(0\) in \(M_{\beta}\). We note that the restriction \(r_{h}(\diamond_{\infty},R_{\beta})\) is completely determined by the relative positions of beads in the window \(\{-h+1,\ldots,-1,0,1,\ldots,h-1\}\times[0,X_{h}]\).

Using the Skorohod representation theorem, let us assume that the convergence in Theorem 15 holds almost surely. Then, for any fixed \(A>0\), the positions of the beads of \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) in \(\{-h+1,\ldots,-1,0,1,\ldots,h-1\}\times(0,A)\) converge to the positions of the same beads of \(M_{\beta}\). If \(X_{h}<A\), the relative positions of these latter beads a.s. determine (thanks to Proposition 44), for \(N\) large enough (but random), both tableaux
\[r_{h}(\lambda_{N},T_{N},\square_{N})\quad\text{and}\quad r_{h}(\diamond_{\infty},R_{\beta}),\]
where we recall that \(\square_{N}\) is the box corresponding to the first bead of \(M_{\lambda_{N}}\) above height \(t_{0}\) in the \(x_{0}\sqrt{N}\)-th thread, i.e. the first bead of \(\widetilde{M}_{\lambda_{N}}^{(x_{0},t_{0})}\) above height zero in the zero thread. Therefore, conditionally on \(X_{h}<A\), for \(N\) large enough (but random), we a.s. have
\[r_{h}(\lambda_{N},T_{N},\square_{N})=r_{h}(\diamond_{\infty},R_{\beta}).\]
Since this holds for all \(A>0\) and since \(X_{h}\) is a.s. finite, we have the almost sure convergence of \(r_{h}(\lambda_{N},T_{N},\square_{N})\) to \(r_{h}(\diamond_{\infty},R_{\beta})\) in the probability space constructed by the Skorohod representation theorem. Almost sure convergence implies convergence in distribution of \(r_{h}(\lambda_{N},T_{N},\square_{N})\) to \(r_{h}(\diamond_{\infty},R_{\beta})\), which concludes the proof.
2308.08589
Searching for axions with kaon decay at rest
We describe a novel search strategy for axions (or hadronically coupled axion-like particles) in the mass range of $m_a \lesssim 350\,{\rm MeV}$. The search relies on kaon decay at rest, which produces a mono-energetic signal in a large volume detector (e.g.\ a tank of liquid scintillator) from axion decays $a\rightarrow \gamma\gamma$ or $a\rightarrow e^+e^-$. The decay modes $K^+\to \pi^+ a$ and $a \to \gamma \gamma$ are induced by the axion's coupling to gluons, which is generic to any model which addresses the strong CP problem. We recast a recent search from MicroBooNE for $e^+e^-$ pairs, and study prospects at JSNS$^2$ and other near-term facilities. We find that JSNS$^2$ will have world-leading sensitivity to hadronically coupled axions in the mass range of $40\,\mathrm{MeV} \lesssim m_a \lesssim 350\,\mathrm{MeV}$.
Yohei Ema, Zhen Liu, Ryan Plestid
2023-08-16T18:00:00Z
http://arxiv.org/abs/2308.08589v2
# Searching for axions with kaon decay at rest ###### Abstract We describe a novel search strategy for axions (or hadronically coupled axion-like particles) in the mass range of \(m_{a}\lesssim 350\,\text{MeV}\). The search relies on kaon decay at rest, which produces a mono-energetic signal in a large volume detector (e.g. a tank of liquid scintillator) from axion decays \(a\to\gamma\gamma\) or \(a\to e^{+}e^{-}\). The decay modes \(K^{+}\to\pi^{+}a\) and \(a\to\gamma\gamma\) are induced by the axion's coupling to gluons, which is generic to any model which addresses the strong CP problem. We recast a recent search from MicroBooNE for \(e^{+}e^{-}\) pairs, and study prospects at JSNS2 and other near-term facilities. We find that JSNS2 will have world-leading sensitivity to hadronically coupled axions in the mass range of \(40\,\text{MeV}\lesssim m_{a}\lesssim 350\,\text{MeV}\). Footnote †: preprint: UMN-TH-4221/23, FTPI-MINN-23-13, CALT-TH/2023-028 **Introduction:** The neutron's electric dipole moment (EDM), \(d_{n}\lesssim 2\times 10^{-26}\)\(e\) cm [1; 2], is ten orders of magnitude smaller than its naive estimate of \(5\times 10^{-16}e\) cm. This is unexpected since charge-parity (CP) is violated within the Standard Model (SM), and therefore no fundamental symmetry forbids a neutron EDM. This is the strong CP problem, and its microscopic origin can be traced to the minuscule value of the QCD \(\bar{\theta}\) parameter, \(\bar{\theta}\lesssim 10^{-10}\), which controls the unique CP-violating QCD coupling in the SM. Axions are a popular solution that provides a dynamical mechanism for the relaxation of \(\bar{\theta}\)[3; 4; 5; 6]; if the axion's potential is generated exclusively by QCD, it naturally aligns the axion's ground state with \(\bar{\theta}=0\). Axions generically suffer from the so-called quality problem [7; 8; 9; 10] wherein high energy contributions to the axion potential can (and often do) displace the minimum away from \(\bar{\theta}=0\). This problem does not arise if the QCD potential is "strengthened", for instance by introducing a mirror QCD sector. These non-minimal axion models produce a potential that is robust against the high energy corrections discussed above and often predict heavier axions as compared to minimal axion models [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Recent investigations have reignited interest in these so-called heavy axion models with \(m_{a}\gtrsim 1\,\text{MeV}\). More generally, there is a broad interest in axion-like particles (ALPs) [32; 33] which may serve as IR messengers of UV completions such as a string landscape [34; 35]. A wide range of search strategies have been proposed, ranging from beam dumps to flavor facilities and collider experiments [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60]. In this work, we point out that kaon decay at rest (KDAR) offers a powerful probe of axions (or hadronically coupled ALPs) in the mass range of \(m_{a}=40-350\,\text{MeV}\). Unlike previous studies of KDAR [61; 62], the axions we consider are naturally coupled to quarks and/or gluons such that the hadronic decays of kaons serve as a powerful axion factory. The signal is a visible decay of either \(a\to\gamma\gamma\) or \(a\to e^{+}e^{-}\). The dominant SM kaon decay modes are \(K^{+}\to\mu^{+}\nu_{\mu}\), as well as \(K^{+}\to\pi^{0}e^{+}\nu_{e}\);1 and both channels produce a neutrino flux that can be measured [63; 64].
In addition to predictions of hadronic cascade simulations, this provides an experiment with an _in situ_ measurement of their KDAR population. The axion rate is then calculable, and a counting experiment can be performed. Footnote 1: Negatively charged \(K^{-}\) decays are subdominant to \(K^{-}\) capture on nuclei. This cannot occur for \(K^{+}\) because it does not form bound states with nuclei. For concreteness, we consider an axion coupled to Standard Model (SM) gauge bosons via \[\begin{split}\mathcal{L}_{a}&=\frac{1}{2}(\partial a)^{2}-\frac{m_{a}^{2}}{2}a^{2}+c_{GG}\frac{\alpha_{s}}{4\pi}\frac{a}{f}G_{\mu\nu}\tilde{G}^{\mu\nu}\\ &+c_{WW}\frac{\alpha_{2}}{4\pi}\frac{a}{f}W_{\mu\nu}\tilde{W}^{\mu\nu}+c_{BB}\frac{\alpha_{Y}}{4\pi}\frac{a}{f}B_{\mu\nu}\tilde{B}^{\mu\nu},\end{split} \tag{1}\] at a high scale, where \(a\) is the axion, and \(\alpha_{s}=g_{s}^{2}/4\pi\), \(\alpha_{2}=g^{2}/4\pi\) and \(\alpha_{Y}=g^{\prime 2}/4\pi\). The coupling \(c_{GG}\) can be absorbed into \(f\) and we define the axion decay constant as \(f_{a}=f/2c_{GG}\). As two useful benchmarks, we focus on the so-called "gluon dominance" (\(c_{GG}\neq 0\), \(c_{WW}=c_{BB}=0\)) and "co-dominance" (\(c_{GG}=c_{WW}=c_{BB}\neq 0\)) scenarios following [65; 66; 67] (using the relative normalization of \(c_{BB}\) in [66; 67]). We do not consider, e.g., axion couplings to quarks, though their inclusion is straightforward. In what follows, we discuss the theory of \(K^{+}\to\pi^{+}a\), and project sensitivities for JSNS2 and a MicroBooNE \(\gamma\gamma\) search. **Kaon decay at rest:** In models with a hadronically coupled axion, \(K^{+}\to\pi^{+}a\) serves as a powerful axion production channel. Since axions are long-lived, they decay in flight to visible final states. The resulting signal is a mono-energetic peak at an energy of \[E_{a}=\frac{m_{K}^{2}+m_{a}^{2}-m_{\pi}^{2}}{2m_{K}}. \tag{2}\] The branching ratio for \(K^{+}\to\pi^{+}a\) can be reliably predicted in chiral perturbation theory [68], with the result [neglecting terms of \(\mathcal{O}(m_{a}^{2}/m_{K}^{2})\) and \(\mathcal{O}(m_{\pi}^{2}/m_{K}^{2})\)] \[\text{BR}(K^{+}\to\pi^{+}a)=\frac{\tau_{K^{+}}}{\tau_{K_{S}}}\frac{f_{\pi}^{2}}{8f_{a}^{2}}\times\text{BR}(K_{S}\to\pi^{+}\pi^{-}). \tag{3}\] Equation (3) receives corrections from finite axion and pion mass effects in both the matrix element and phase space, both of which are taken into account in our numerical estimates (see [68] for the complete expression). In the co-dominance case, the axion coupling to the \(W\)-boson induces an extra contribution [46]. However, this contribution is negligible for \(c_{WW}=c_{GG}\) and we do not consider it further in what follows. Axions produced from KDAR can decay inside nearby detectors, either to \(\gamma\gamma\) or \(e^{+}e^{-}\). The number of axions produced at a KDAR source is given by \[N_{a}=\frac{\text{BR}(K^{+}\to\pi^{+}a)}{\text{BR}(K^{+}\to\mu^{+}\nu_{\mu})}N_{\nu_{\mu}}\,, \tag{4}\] where \(N_{\nu_{\mu}}\) is the number of muon neutrinos from KDAR that are produced in the beam stop. In the limit of a long decay length, \(\lambda_{a}\gg L\), where \(L\) is the distance from the KDAR source to the detector, the number of axions that decay in the detector is given by2 Footnote 2: In our numerical computation, we do not rely on this approximation.
Instead, we include the finite axion decay length properly to obtain the upper limit of the sensitivity correctly. \[N_{a\to\text{vis}}=N_{a}\times\frac{1}{4\pi L^{2}}\frac{V}{\lambda_{a}(m_{a})}\, \tag{5}\] where \(V\) is the volume of the detector, \(\lambda_{a}=\beta_{a}\gamma_{a}\tau_{a}\) is the decay length of the axion in the lab frame, and we have assumed all final states of the axion decay are visible. Assuming \(a\to\gamma\gamma\) dominates over \(a\to ee\), which is generically true for theories without direct axion-electron couplings, the lifetime of the axion is given by \[\tau_{a}^{-1}=\frac{\alpha^{2}m_{a}^{3}}{256\pi^{3}f_{a}^{2}}\left|c_{\gamma \gamma}^{\text{eff}}\right|^{2}\,, \tag{6}\] where \(\alpha\) is the electromagnetic fine structure constant. The effective coupling to the photon is given by [79, 32] \[c_{\gamma\gamma}^{\text{eff}}\approx c_{\gamma\gamma}-\left(\frac{5}{3}+\frac{ m_{\pi}^{2}}{m_{\pi}^{2}-m_{a}^{2}}\frac{m_{d}-m_{u}}{m_{u}+m_{d}}\right)\,, \tag{7}\] where \(c_{\gamma\gamma}=0\) in the gluon dominance case and \(c_{\gamma\gamma}=2\) in the co-dominance case, and our constraints are expressible in terms of \(1/f_{a}^{4}\). Equation (7) assumes a two-flavor approximation, which is reasonably accurate for the masses we consider, \(m_{a}\lesssim 350\text{ MeV}\). For these masses we never approach regions of resonant mixing with \(\eta\) and \(\eta^{\prime}\); more complete three flavor expressions can be found in Appendix B of [79]. The constraints can be easily re-scaled for other model-dependent choices (e.g. with the axion couplings to the up and down quarks). It is also straightforward to project sensitivities for models where \(a\to e^{+}e^{-}\) is the dominant decay channel. For \(m_{a}\) close to \(m_{\pi}\) cancellations can occur such that \(c_{\gamma\gamma}^{\text{eff}}\) vanishes. In this case the decay length of the axion is set by \(a\to e^{+}e^{-}\). Such a coupling is always generated by radiative effects. For our numerical estimates, we include the \(a\to e^{+}e^{-}\) decay channel, taking \(g_{aee}=0.37\times 10^{-3}\)\(c_{GG}\) (see Eq. (62) of [83]), although the effect of this coupling is almost invisible in our plots. Substantially stronger coupling to electrons is possible if it arises at tree-level in the UV or is induced by top-quarks [83]; in these cases our constraints would strengthen. In principle for \(m_{a}\geq 2m_{\mu}\) the muon decay channel can also be included, but this only affects the ceiling of our constraint by an \(\mathcal{O}(1)\) factor since \(\Gamma(a\to\gamma\gamma)\) is comparable to \(\Gamma(a\to\mu^{+}\mu^{-})\). If a detector can measure muon tracks, then \(a\to\mu^{+}\mu^{-}\) would present an additional signal channel with much lower backgrounds. However, since the impact of muon couplings on our results is relatively modest for the co-dominance and gluon dominance scenarios, and this region can be probed by other experiments [60, 59], we do not discuss it further. Figure 1: (**Gluon Dominance**) Sensitivities of MicroBooNE and JSNS\({}^{2}\) compared with existing limits and other projected sensitivities when all couplings are induced by a gluon coupling \(c_{GG}\) at a high scale. The MicroBooNE sensitivity is cut at 210 MeV because that is the range that appears in [69]. Existing limits include constraints from SN1987A [70, 71] and cosmology [72] adapted from [65], Kaon decays (E949 [73] and NA62 [74]), and beam dump searches (CHARM [75] and NuCal [76]), with data adapted from [66]. 
We also show projected sensitivity of DUNE [65]. Other projected sensitivities not shown here include DarkQuest [77, 78, 79], FASER [80], KOTO [81, 67], and SHiP [82]. We have re-derived constraints from E949 and NA62 and our results agree with [67] (but disagree with [71] and therefore the curves in [65]). **MicroBooNE search for heavy scalars:** Using the above formulae, we can recast a recent search by MicroBooNE [69]. Their search channel was \(K\to\pi h_{D}\) followed by \(h_{D}\to e^{+}e^{-}\) with \(h_{D}\) a dark Higgs [62]. If we consider the \(a\to e^{+}e^{-}\) search, then mapping their result to a heavy axion is an immediate constraint on \(g_{aee}\). Using the induced \(g_{aee}\) from gluon couplings mentioned above, we then obtain a limit on \(f_{a}\). We find that the constraints on \(f_{a}\) obtained in this gluon-dominance scenario are very weak, excluding \(f_{a}\lesssim 1\) TeV, which is already ruled out by other experiments, and so we do not include this in our summary plot (see [61] for ALPs coupled exclusively to electroweak bosons). For the \(a\to\gamma\gamma\) channel, it is not possible to interpret the MicroBooNE result as a constraint. The search in [69] made use of a boosted decision tree (BDT) in classifying their events and should reject \(\gamma\gamma\) topologies.3 As a crude estimate of the sensitivity, we may assume that a dedicated analysis for \(\gamma\gamma\) final states is performed with comparable BDT performance. Then the sensitivity to \(f_{a}\) (from the same dataset) would be given by equating \(\Gamma(K\to\pi h_{D})\times\Gamma(h_{D}\to ee)\), evaluated with the upper bound on the mixing angle in [69], to our \(\Gamma(K\to\pi a)\times\Gamma(a\to\gamma\gamma)\times e^{-L/\lambda_{e}}\) with \(L\simeq 100\,\)m the distance between MicroBooNE and the NuMI absorber (i.e. the KDAR source). This treatment is valid since the decay length of the dark Higgs is much longer than \(L\) for the mixing angles in [69], and because the MicroBooNE detector is much smaller than \(L\). Footnote 3: In principle there should be some probability of a \(\gamma\gamma\) pair contaminating the BDT-tagged \(e^{+}e^{-}\) sample, but this would require collaboration input. In Figure 1 (gluon dominance) and Figure 2 (co-dominance), we plot the sensitivity of MicroBooNE estimated in this way by the orange lines. MicroBooNE may be able to explore certain small regions of parameter space not covered by existing experiments if a dedicated search for \(\gamma\gamma\) final states is performed; this is qualitatively similar to the situation with a dark Higgs [62, 69]. As alluded to above, searches for dimuon final states may also be of interest for \(m_{a}\geq 2m_{\mu}\) since MicroBooNE can easily reconstruct \(\mu^{+}\mu^{-}\) pairs. **Searches at JSNS\({}^{2}\):** Next, we consider JSNS\({}^{2}\)[84, 85, 86], which is designed to test the excess of events seen at LSND [87]. A crucial difference between JSNS\({}^{2}\) and LSND is the proton beam energy, 3 GeV vs 0.8 GeV, such that JSNS\({}^{2}\) serves as both a \(\pi\)DAR and KDAR facility, whereas LSND had much fewer KDAR events (if any) [88]. We, therefore, find that JSNS\({}^{2}\) offers much more compelling sensitivity to heavy axions and do not consider LSND further in what follows. To estimate the sensitivity at JSNS\({}^{2}\) we take the number of KDAR neutrinos per proton on target (\(N_{\nu_{\mu}}=0.0034/\text{POT}\)) from [84]. 
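As a rough illustration of how these ingredients combine, the following sketch chains Eqs. (2)-(7) for a single benchmark point in the gluon-dominance case. The benchmark mass and decay constant, the quark-mass ratio, and the pion decay constant normalization are assumptions made only for this example; the baseline, fiducial volume, and protons on target are the JSNS\({}^{2}\) values quoted in this section, and the resulting numbers are order-of-magnitude estimates rather than the published sensitivity.

```python
import numpy as np

# PDG-style inputs (MeV and seconds); the quark-mass ratio and the f_pi
# normalization entering Eq. (3) are convention-dependent assumptions.
m_K, m_pi, f_pi = 493.68, 139.57, 93.0
m_u_over_m_d = 0.47
tau_Kplus, tau_KS = 1.238e-8, 8.954e-11
BR_KS_pipi, BR_K_munu = 0.692, 0.636
alpha_em, hbar_c = 1.0 / 137.036, 1.973e-13        # hbar*c in MeV*m

# Benchmark point (assumed for illustration only).
m_a, f_a = 100.0, 1.0e8                             # axion mass and decay constant, in MeV

# Eq. (2): mono-energetic axion energy and momentum from K+ decay at rest.
E_a = (m_K**2 + m_a**2 - m_pi**2) / (2.0 * m_K)
p_a = np.sqrt(E_a**2 - m_a**2)

# Eq. (3): leading-order branching ratio (finite-mass corrections neglected here).
BR_Kpia = (tau_Kplus / tau_KS) * f_pi**2 / (8.0 * f_a**2) * BR_KS_pipi

# Eqs. (6)-(7), gluon dominance: effective photon coupling, width, lab-frame decay length.
c_eff = -(5.0 / 3.0 + m_pi**2 / (m_pi**2 - m_a**2)
          * (1.0 - m_u_over_m_d) / (1.0 + m_u_over_m_d))
Gamma_a = alpha_em**2 * m_a**3 * c_eff**2 / (256.0 * np.pi**3 * f_a**2)   # in MeV
lambda_a = (p_a / m_a) * hbar_c / Gamma_a                                 # in metres

# Eqs. (4)-(5) for a JSNS2-like geometry (values quoted in this section).
N_numu = 0.0034 * 1.0e23                     # KDAR nu_mu per POT times assumed POT
L, V = 24.0, 20.35                           # baseline in m, fiducial volume in m^3
N_vis = (BR_Kpia / BR_K_munu) * N_numu * V / (4.0 * np.pi * L**2 * lambda_a)

print(f"E_a = {E_a:.1f} MeV, BR(K+ -> pi+ a) = {BR_Kpia:.2e}")
print(f"decay length = {lambda_a:.2e} m, expected visible decays ~ {N_vis:.0f}")
```

For this benchmark the decay length is far longer than the 24 m baseline, so the long-decay-length form of Eq. (5) applies and the event count comes out at the level of a few hundred, consistent with the statement that the search probes currently unconstrained decay constants.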
Although the kaon production rate is uncertain (\(N_{\nu_{\mu}}\) can be twice as large [89]), as we mentioned in the Introduction, in the actual experiment the KDAR neutrino flux can be measured _in situ_. Using estimates from Fig. 3 of [90], we find that the search will be nearly background free \(N_{\text{bkg}}\leq 1\) in three years of operation, except for \(E_{a}\leq 238\) MeV (corresponding to \(m_{a}<104\,\)MeV) where \(N_{\text{bkg}}\approx 2.5\) over three years. Ref. [90] assumes an additional lead shielding of the detector, which has not been put in place at JSNS\({}^{2}\). Cosmic backgrounds are therefore underestimated in that work. However, the axion signal we consider benefits from the high energy signal region, \(E>227\,\)MeV, where cosmic events are suppressed. Further reductions in the cosmic muon rate can be achieved using coincident tagging with daughter Michel electrons [91]. Since studies of cosmic background mitigation are ongoing, we plot projections using a 5-event contour. Backgrounds will be smooth above 227 MeV, and a sideband analysis can be used to estimate their size _in situ_ such that a search will always be statistics rather than systematics limited. As we discuss below, if backgrounds are high at JSNS\({}^{2}\) a search could be performed using their second detector [86, 92]. To compute the expected number of axion events, we take \(L=24\) m as the distance between the KDAR source and the JSNS\({}^{2}\) detector, and the detector volume \(V=(17\) tonnes)/(\(0.852\) g/ml) = 20.35 m\({}^{3}\)[85, 93]. We assume \(10^{23}\) POT, corresponding to roughly three years of live time. This results in \(\sim 3\times 10^{20}\) stopped \(K^{+}\) in total, which is much larger than the \(10^{12}\,\)-\(\,10^{13}\)\(K^{+}\) decay events at NA62 [74]. In Figure 1 (gluon dominance) and Figure 2 (co-dominance), we plot the sensitivity of JSNS\({}^{2}\) by the blue lines, together with the existing constraints and future sensitivities. The figures demonstrate that JSNS\({}^{2}\) has Figure 2: (**Co-Dominance**) The same figure as Figure 1 but with \(c_{WW}=c_{BB}=c_{GG}\). The sensitivity for \(m_{a}\ll m_{\pi}\) is worse than the gluon dominance case since \(|c_{\gamma\gamma}^{\text{eff}}|\ll 1\) for this specific choice of the parameters. excellent sensitivity to heavy axions. Note that the event number scales as \(f_{a}^{-4}\). Therefore, even if we instead require e.g. 50 events, the sensitivity only weakens by \(10^{1/4}\simeq 1.8\), which does not alter our main conclusion. In our analysis, we have focused on the JSNS\({}^{2}\) near detector. However, the JSNS\({}^{2}\) collaboration recently installed another detector at a far location [86; 92], 48 m away from the source. This second detector is expected to start taking data soon and contains 32 tonnes of liquid scintillator as a fiducial volume; the larger volume partially compensating for the longer baseline. Together with possibly smaller backgrounds due to its location, the far detector may be better suited for axion searches once it starts its operation. **Conclusions:** KDAR provides a clean smoking gun signature of hadronically coupled axions. A \(K^{+}\) production target, coupled with a large volume detector placed \(\sim 10\)-100 m away, allows for a powerful probe of visibly decaying particles lighter than the kaon (e.g. dark scalars or heavy neutral leptons [94; 62]). 
In the context of KDAR, axions are particularly compelling due to their well-motivated hadronic couplings, which are necessary in any model that addresses the strong CP problem. We have focused on two benchmark scenarios (gluon dominance and co-dominance) for ease of comparison with the literature. It is interesting to understand how constraints vary with model-dependent coupling textures. The visible decays we consider here are governed both by BR(\(K\to\pi a\)) and the axion decay length and scale as \(1/f_{a}^{4}\). By way of contrast, the constraints from NA62 depend only on BR(\(K\to\pi a\)) and scale as \(1/f_{a}^{2}\). Stronger hadronic couplings will therefore favor NA62 over JSNS\({}^{2}\). Conversely, weaker hadronic couplings and/or a larger \(c_{\gamma\gamma}^{\rm eff}\) will favor JSNS\({}^{2}\) over NA62. Beam dump searches scale the same way as JSNS\({}^{2}\). We find that JSNS\({}^{2}\) will have world-leading sensitivity to heavy axions. One might imagine a competitive experimental landscape of modern high-intensity low-energy proton beams. Notably, we find that JSNS\({}^{2}\) is likely to provide unsurpassed sensitivity, since other facilities suffer from low \(K^{+}\) yields (e.g. at a PIP-II beam dump [95], LANSCE [96], or the SNS [97]) or do not have competitive intensity (e.g. the SBN beam dump concept [98]). Experiments with detectors far downstream also suffer from large \(1/L^{2}\) geometric suppressions _c.f._ Eq. (5). Modified experimental designs, e.g. a PIP-II beam dump with a proton beam energy \(T_{p}\gtrsim 2\) GeV, could allow for competitive KDAR rates. A large volume detector placed near the DUNE hadron absorber or coupled with a high-intensity 8 GeV beam for a muon collider demonstrator would both offer promising future sensitivity. Nevertheless, with JSNS\({}^{2}\) already taking data [86], there is an immediate opportunity to shed light on heavy axion models in currently unprobed regions of parameter space. We encourage the JSNS\({}^{2}\) collaboration to incorporate axion searches into their central physics program. **Acknowledgments:** This project arose from discussions at the Aspen Center for Physics (supported by National Science Foundation grant PHY-2210452) and we would like to collectively thank the center for their hospitality and support. We thank Josh Spitz and Takasumi Maruyama for useful correspondence and feedback, and Bertrand Echenard for suggestions involving future muon facilities. YE and ZL are supported in part by the DOE grant DE-SC0011842. RP is supported by DOE Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP's research is supported by the Neutrino Theory Network Program Grant under Award Number DEAC02-07CH11359 and the US DOE under Award Number DE-SC0020250.
2304.09224
Quantum machine learning for image classification
Image classification, a pivotal task in multiple industries, faces computational challenges due to the burgeoning volume of visual data. This research addresses these challenges by introducing two quantum machine learning models that leverage the principles of quantum mechanics for effective computations. Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era, where circuits with a large number of qubits are currently infeasible. This model demonstrated a record-breaking classification accuracy of 99.21% on the full MNIST dataset, surpassing the performance of known quantum-classical models, while having eight times fewer parameters than its classical counterpart. Also, the results of testing this hybrid model on a Medical MNIST (classification accuracy over 99%), and on CIFAR-10 (classification accuracy over 82%), can serve as evidence of the generalizability of the model and highlights the efficiency of quantum layers in distinguishing common features of input data. Our second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process. The model matches the performance of its classical counterpart, having four times fewer trainable parameters, and outperforms a classical model with equal weight parameters. These models represent advancements in quantum machine learning research and illuminate the path towards more accurate image classification systems.
Arsenii Senokosov, Alexandr Sedykh, Asel Sagingalieva, Basil Kyriacou, Alexey Melnikov
2023-04-18T18:23:20Z
http://arxiv.org/abs/2304.09224v2
# Quantum machine learning for image classification ###### Abstract Image recognition and classification are fundamental tasks with diverse practical applications across various industries, making them critical in the modern world. Recently, machine learning models, particularly neural networks, have emerged as powerful tools for solving these problems. However, the utilization of quantum effects through hybrid quantum-classical approaches can further enhance the capabilities of traditional classical models. Here, we propose two hybrid quantum-classical models: a neural network with parallel quantum layers and a neural network with a quanvolutional layer, which address image classification problems. One of our hybrid quantum approaches demonstrates remarkable accuracy of more than 99% on the MNIST dataset. Notably, in the proposed quantum circuits all variational parameters are trainable, and we divide the quantum part into multiple parallel variational quantum circuits for efficient neural network learning. In summary, our study contributes to the ongoing research on improving image recognition and classification using quantum machine learning techniques. Our results provide promising evidence for the potential of hybrid quantum-classical models to further advance these tasks in various fields, including healthcare, security, and marketing. ## Introduction Image classification is a critical task in the modern world due to its wide range of practical applications in various fields [1]. For instance, in medical imaging, image classification algorithms have been shown to significantly improve the accuracy and speed of diagnoses of many diseases [2; 3]. In the field of autonomous vehicles, image classification plays a crucial role in object detection, tracking, and classification, which is necessary for safe and efficient navigation. Deep learning approaches [4] like deep convolutional neural networks (CNNs) have emerged as powerful tools for image classification and recognition tasks [5; 6], achieving state-of-the-art performance on various benchmark datasets [7; 8]. However, as the amount of visual data increases, modern neural networks are facing significant computational challenges. Quantum technologies, on the other hand, offer the potential to overcome this computational limitation by harnessing the power of quantum mechanics to perform computations in parallel [9]. Quantum machine learning (QML) is a rapidly evolving field that combines the principles of quantum mechanics and classical machine learning [10; 11]. This field has the potential to revolutionize various areas of computing, including image classification [12; 13]. It has attracted significant attention due to its potential to solve computational problems that classical computers are unable to solve efficiently [9]. This potential arises from the unique features of quantum computing, such as superposition and entanglement, which can provide an exponential speedup for specific machine learning tasks [14]. Moreover, QML algorithms produce probabilistic results, which is very natural for classification problems [15] and also act in an exponentially bigger search space, which greatly increases their performance [16; 17; 18]. However, the real-world implementation of quantum algorithms faces significant challenges, such as the need for error correction and the high sensitivity of quantum systems to external disturbances [19]. Despite these challenges, QML has shown promising results in several applications [20]. 
In the context of image classification, QML algorithms can process large datasets of images more efficiently than classical algorithms, leading to faster and more accurate classification [21]. A promising area of research within QML for image classification is the hybrid quantum neural network (HQNN) [14]. HQNNs combine classical deep learning architectures with QML algorithms [22; 23; 24; 25; 26], namely Variational Quantum Circuits (VQCs), creating a hybrid system that leverages the strengths of both classical and quantum computing. This approach allows for the processing of large datasets with greater efficiency than classical deep learning architectures alone [27]. HQNNs have shown promise in a variety of tasks including image classification [28], regression problems [29], even satellite mission planning [30] and personalized medicine [31]. Further research is needed to explore the full potential of HQNNs in image classification and to develop more robust and scalable algorithms. In this article, we propose two approaches to leverage quantum computing in the field of image recognition. The first approach involves applying parallel VQCs after classical deep convolutional layers, while the second approach involves using a HQNN with a quanvolutional layer. We evaluate the performance of these hybrid models on the MNIST dataset of hand-written digits, which is described in Section A, and demonstrate their ability to classify images. The first model (described in Section B) combines classical convolutional layers with parallel quantum layers (HQNN-Parallel). The quantum part is analogous to a classical fully connected layer. We compare the hybrid model with its most closely corresponding classical counterpart (in terms of the architecture and the number of layers) and observe that the hybrid model outperforms the classical model in accuracy (achieving 99.21% accuracy) despite having eight times fewer parameters. In the second model (described in Section C), we introduce HQNN with quanvolutional layer (HQNN-Quanv), which is a filter that applies a convolution to the input image and reduces its resolution. The HQNN-Quanv achieves a similar accuracy to the classical model (67% accuracy) despite having four times fewer trainable parameters in the first layer compared to the classical counterpart. Additionally, the hybrid model outperforms the classical model with the same number of weights. Every parameter in both of our models is trainable, which allows us to achieve such remarkable accuracy. This highlights the potential of quantum computing and quantum machine learning (QML) in advancing the field of image recognition. Our results contribute to the ongoing research in this area and demonstrate the exciting possibilities for the future of QML in other fields. ## Results ### Dataset This section describes the dataset we use which is called Modified National Institute of Standards and Technology (MNIST) [32]. The MNIST database consists of a large collection of gray-scale handwritten numbers, ranging from 0 to 9. Sample images from the dataset are presented in Fig. 1. Each image has a resolution of \(28\times 28\) pixels, and the main objective is to classify each image by assigning a class label using a neural network. In other words, the task is to recognize which digit is present in the image. This dataset is widely used for making first steps in the sphere of machine learning. 
Nevertheless, it is worth-studying as it helps test the performance of various neural networks models [33; 34], especially models with VQCs. The MNIST dataset used in this study comprises a total of 70000 images, with 60000 images reserved for training and 10000 images for testing. However, in certain cases, it may be advantageous to reduce the number of images in order to expedite the training process and gain immediate insights into the model's performance. Despite being a widely used dataset, the MNIST database contains a few images that are broken or ambiguous, making it challenging for even humans to make a clear judgment. Fig. 2 provides examples of such images. However, our introduced hybrid model can determine what the number is in the image with over 99% accuracy. ### Hybrid Quantum Neural Network with parallel quantum dense layers, HQNN-Parallel This section describes our first proposed model, the Hybrid Quantum Neural Network with parallel quantum dense layers, each of which is a VQC. Section B.3 shows the results and comparison of the hybrid model with its classical counterpart, CNN B.4. The HQNN-Parallel consists of two main components: a classical convolutional block B.1 and a combination of classical fully connected and parallel quantum layers B.2. The primary objective of the classical convolutional block is to reduce the dimensionality of the input data and prepare it for subsequent processing. The classical fully connected and parallel quantum layers constitute the core of the HQNN-Parallel, and are responsible for prediction tasks of the model. Further details on the architecture and implementation of the HQNN-Parallel are presented in subsequent sections. #### ii.2.1 Classical Convolutional Layers Fig. 3 depicts the general structure of the classical convolutional part of the proposed HQNN-Parallel. The convolutional part of the network is comprised of two main blocks, followed by fully-connected layers. In this study, we utilized Rectified Linear Unit (ReLU) as the activa Figure 1: Examples of images from the MNIST dataset Figure 2: Examples of ambiguous images from the MNIST dataset. tion function [35]. Batch Normalization [36] is employed in the network as it stabilizes the training process and improves the accuracy of the model. The first block of the convolutional part of the HQNN-Parallel comprises a convolutional layer with one input channel and 16 output channels, utilizing a square kernel of size \(5\times 5\). The layer operates with a stride of one pixel and applies a two-pixel padding to the input data. Batch Normalization is applied to the output of the convolutional layer, followed by an activation function (ReLU) and MaxPooling [37] with a kernel size of two pixels. The resulting feature map has dimensions of \(16\times 14\times 14\) pixels. The second block contains a convolutional layer with 16 input channels and 32 output channels, utilizing the same kernel size and padding as the previous layer. The MaxPooling parameters remain unchanged, resulting in a feature map with dimensions of \(32\times 7\times 7\) pixels, which will become an input for the fully connected part of the network. #### ii.1.2 Hybrid Dense Layers Following the convolutional part, the HQNN- Parallel, continues with a hybrid dense part, as shown in Fig. 3. The \(32\times 7\times 7\) feature map produced by the convolutional part serves as input for the first dense layer, which transforms the feature map from 1568 to \(m\) features. 
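A minimal PyTorch sketch of this classical front end, written to match the layer sizes just described, is shown below; the class name and the example value \(m=20\) are placeholders rather than the original implementation.

```python
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    """Classical part of the HQNN-Parallel as described in the text (a sketch)."""
    def __init__(self, m: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),   # -> 16 x 14 x 14
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 7 x 7
        )
        self.to_quantum = nn.Sequential(
            nn.Flatten(),                 # 32 * 7 * 7 = 1568 features
            nn.Linear(1568, m),           # m = total number of quantum encoding angles
            nn.BatchNorm1d(m), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.to_quantum(self.features(x))

# Example: a batch of four 28x28 MNIST-like images mapped to m = 20 features.
out = ConvFrontEnd(m=20)(torch.randn(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 20])
```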
The value of \(m\) is determined by the chosen quantum part and represents the total number of encoding parameters in the quantum layers. Each quantum layer is designed to maintain the number of input and output features, and the output of the quantum layer is fed into the second classical fully connected layer. This layer performs the final transformation and maps the \(m\) input features to 10 output features, corresponding to the number of classes into which the images can be classified. After each classical dense layer, Batch Normalization and ReLU activation are applied. It is worth noting that the structure of the HQNN-Parallel, including the number of layers and the number of features, can be adjusted to optimize the performance on a specific task. #### ii.1.3 Structure of Quantum Layer The quantum component of the proposed HQNN-Parallel, depicted in the Fig. 3, consists of \(c\) parallel quantum layers, each of which is a VQC composed of three parts: embedding, variational gates, and measurement. The input data to the quantum layers are \(m\) features from the previous classical fully connected layer, divided into \(c\) parts, with each part being a vector of \(q\) values, \(x=(\phi_{1},\phi_{2},...,\phi_{q})\in\mathbb{R}^{q}\). To encode these classical features into quantum Hilbert space, we use the "angle embedding" method, which rotates each qubit in the ground state around the X-axis on the Bloch sphere [38] by an angle proportional to the corresponding value in the input vector: \(\ket{\psi}=R_{x}^{emb}(x)\ket{\psi_{0}}\), where \(\ket{\psi_{0}}=\ket{0}^{\otimes q}\). This operation encodes the input vector into quantum space, and the resulting quantum state represents the input data from the previous classical layer. It is important to note that \(m\) is divisible by \(q\), since the input data vector is divided into \(c=m/q\) parts, with each part serving as input to a VQC. The encoding part for each VQC is followed by a variational part, which consists of two parts: rotations with trainable parameters and subsequent CNOT operations [39]. The rotations serve as quantum gates that transform the encoded input data according to the variational parameters, while the CNOT operations entangle the qubits in the VQC. The depth of the variational part, denoted as \(i\), is a hyperparameter that determines the number of iterations of the rotations and CNOT op Figure 3: Architecture of the proposed HQNN-Parallel. The input data samples are transformed by a series of convolutional layers, which extract relevant features from the input and reduce the input dimensionality. The output channels of the convolutional layers are then flattened into a single vector and fed into the dense part of the HQNN-Parallel. The hybrid dense part contains both classical and quantum layers. The quantum layers are implemented using parallel VQCs, which allow for simultaneous execution, reducing the total computation time. The output of the last classical fully connected layer is a predicted digit between 0 and 9. erations in the VQC. It is important to note that the variational parameters for each VQC are different in each of the \(i\) repetitions and for each of \(c\) quantum circuits. Thus, the total number of weights in the quantum part of the HQNN-Parallel is calculated as \(q\cdot 3i\cdot c\). 
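A compact PennyLane/PyTorch sketch of these parallel quantum layers is given below. The RX angle embedding and the \(q\cdot 3i\) trainable angles per circuit follow the description above, and the Pauli-Y readout anticipates Eq. (1) below; the use of `StronglyEntanglingLayers` as the rotation-plus-CNOT block and the values \(q=5\), \(i=3\), \(c=4\) are assumptions consistent with the configuration reported later in the text, not the authors' released code.

```python
import torch
import pennylane as qml

q, depth, c = 5, 3, 4            # qubits per circuit, variational depth i, parallel circuits (assumed)
dev = qml.device("default.qubit", wires=q)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Angle embedding: each of the q classical features becomes an RX rotation angle.
    qml.AngleEmbedding(inputs, wires=range(q), rotation="X")
    # Variational part: `depth` blocks of parameterized rotations followed by CNOTs,
    # i.e. q * 3 * depth trainable angles per circuit, matching the count in the text.
    qml.StronglyEntanglingLayers(weights, wires=range(q))
    # One expectation value per wire (Pauli-Y readout).
    return [qml.expval(qml.PauliY(w)) for w in range(q)]

# c independent trainable copies, one per chunk of q features (m = c * q in total).
layers = torch.nn.ModuleList(
    [qml.qnn.TorchLayer(circuit, {"weights": (depth, q, 3)}) for _ in range(c)]
)
x = torch.rand(8, c * q)                                   # a batch of m-dimensional feature vectors
v_hat = torch.cat([layers[j](x[:, j * q:(j + 1) * q]) for j in range(c)], dim=1)
print(v_hat.shape)                                         # torch.Size([8, 20])
```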
After performing these operations, measurement in the Pauli basis matrices is performed, resulting in \[v^{(j)}=\bra{0}R_{x}^{emb}{(\phi_{j})}^{\dagger}U(\theta)^{\dagger}Y_{j}U( \theta)R_{x}^{emb}(\phi_{j})\ket{0}, \tag{1}\] where \(Y_{j}\) is the Pauli-Y matrix for the \(j^{\text{th}}\) qubit, \(R_{x}^{emb}(\phi_{j})\) and \(U(\theta)\) are operations, performed by the embedding and trainable parts of the VQC, respectively, and \(\theta\) is a vector of trainable parameters. After this operation, we have the vector \(v\in\mathbb{R}^{q}\). The outputs of all the VQCs would be concatenated to form new vector \(\hat{v}\in\mathbb{R}^{m}\) that is the input data for a subsequent classical fully-connected layer. This layer, being the final layer in the classification pipeline, produces an output in the form of probability distribution over the set of classes. In our case, each input image is associated with one of the ten possible digits from 0 to 9, and the output of each neuron represents the probability that the image belongs to that class. The neuron with the highest output probability is selected as the predicted class for the image. #### ii.2.4 Training and results As described above, the HQNN-Parallel will be trained on MNIST dataset A. No preprocessing is applied, so the entire collection is used for training (60000 images are in the training set and 10000 are in the test set). In the context of training the proposed HQNN-Parallel, the ultimate objective is to minimize the loss function during the optimization process. The cross-entropy function is employed as the loss function, given by: \[l=-\sum_{c=1}^{k}y_{c}\log p_{c}, \tag{2}\] where \(p_{c}\) is the prediction probability, \(y_{c}\) is either 0 or 1, determining respectively if the image belongs to the prediction class, and \(k\) is the number of classes. The parameters of the classical layers are optimized using the backpropagation algorithm [40], which is automatically implemented in the PyTorch library [41]. The backpropagation algorithm is used to calculate the gradients of the loss function with respect to the parameters of the network, allowing for their optimization via gradient descent. However, the use of quantum layers in this task Figure 4: Train and test results for the HQNN-Parallel and the CNN. The HQNN has a 99.21% accuracy on the test data and outperforms the CNN which has a 98.71% accuracy even though the classical model has 8 times more variational parameters than the hybrid one. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model & train loss & test loss & test acc & param num \\ \hline CNN & 0.0205 & 0.0449 & 98.71 & 372234 \\ \hline HQNN & 0.0204 & 0.0274 & 99.21 & 45194 \\ \hline \end{tabular} \end{table} Table 1: Summary of the results for the HQNN-Parallel and its classical analogue, CNN. Figure 5: Test accuracies for the HQNN-Parallel and its classical analogue CNN. is more complex than classical methods for computing gradients. To overcome this challenge, we employ the PennyLane framework [42], which provides access to a variety of optimization techniques. We utilize the parameter shift rule [43], which is compatible with physical implementations of quantum computing [44]. This method involves evaluating the gradient of a quantum circuit by shifting the parameters in the circuit and computing the corresponding change in the circuit's output. The resulting gradient can then be used to update the circuit's parameters and iteratively minimize the loss function. 
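The rule itself has a simple closed form. For a gate generated by a Pauli-type operator, \(U(\theta)=e^{-i\theta P/2}\) with \(P^{2}=\mathbb{1}\), the expectation value \(f(\theta)=\langle\psi|U^{\dagger}(\theta)\,\hat{O}\,U(\theta)|\psi\rangle\) obeys the exact identity \[\frac{\partial f}{\partial\theta}=\frac{1}{2}\left[f\!\left(\theta+\frac{\pi}{2}\right)-f\!\left(\theta-\frac{\pi}{2}\right)\right],\] so every trainable angle in the quantum layers costs two extra evaluations of the same circuit per gradient component, which is what makes the method compatible with execution on quantum hardware.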
By using the parameter shift rule, we are able to efficiently optimize the variational parameters in the quantum layers of the HQNN, enabling the network to learn complex patterns in the input data and achieve accurate results. In the process of solving the problem, we tried various architectures of quantum layers. The most successful architecture for the HQNN-Parallel used a quantum layer with 5 qubits and 3 repetitions of the strongly entangling layers. The number of quantum layers equals 4. The HQNN-Parallel managed to achieve a 99.21% accuracy. In order to compare the performance of the HQNN with a classical CNN, the convolutional part of the HQNN was held constant, while the quantum part was replaced with a classical dense layer containing \(m\) neurons. This modified CNN was then trained on the same MNIST dataset. A comparison of the training outcomes is depicted in Fig. 4. The trainable parameters, as well as the primary training and testing results, for both the HQNN-Parallel and the CNN are summarized in Table 1 and illustrated in Fig. 5. From these results, it is evident that the most successful implementation of the HQNN-Parallel surpasses the performance of a CNN that possesses approximately 8 times more parameters. ### Hybrid Quantum Neural Network with quanvolutional layer, HQNN-Quanv In this section, we give a detailed description of our second hybrid quantum approach for solving the problem of recognizing numbers from the MNIST dataset, based on the combination of a quanvolutional layer and classical fully connected layers. The scheme of this network is presented in Fig. 6. Also, we compare our hybrid model with its classical analogue CNN, investigate the relationship between quanvolutional and convolutional layers as well as their dependence on the number of output channels. #### iii.3.1 Quanvolutional layer The general architecture of a quanvolutional layer [45] is shown in Fig. 6. Similar to classical convolutional layers, the quanvolutional layer comprises a filter of size \(n\times n\) pixels that convolves the input image, producing a lower-resolution output image. However, the quanvolutional layer is unique in a sense that its filter is implemented using a quantum circuit consisting of \(n\) qubits. The circuit can be decomposed into three distinct parts: classical-to-quantum data encoding, variational gates, and a quantum measurement. These parts work together to determine the filter's action on the input image. There are plenty of encoding (embedding) methods to transfer classical data into quantum states. In this section, as in the previous one, we use the "angle embedding" technique. It is achieved by rotating the qubits from their initial \(\ket{0}\) value with the \(R_{y}(\varphi)\) unitaries, where \(\varphi\) is determined by the value of the corresponding pixel. After the classical data is encoded, the quantum states undergo unitary transformations, defined by the Figure 6: Architecture of the HQNN-Quanv. The quanvolutional kernel encodes a chunk of pixels in the input image into the quantum circuit via angle embedding with \(R_{y}\) gates. Then some rotation gates with trainable parameters and CNOTs are applied to the qubits, carrying image data. Measuring each wire with \(\sigma_{z}\) expectation value yields an output pixel. As we have 4 qubits, the amount of output channels is also 4. All of output channels are flattened into one vector and fed into the fully connected layer, which gives a predicted digit from 0 to 9. variational part. 
The variational part in the quanvolutional layer usually consists of arbitrary single-qubit rotations and CNOT gates, arranged in a particular way determined by the researcher. The unitaries in the VQC are parameterized by a set of variational parameters, which are learned via training the neural network. The ultimate goal of the model training is to find a measurement basis (via tweaking variational gate parameters) that tells us the most information about a fragment of a picture confined by the quantum filter. Finally, for each wire, the expectation value of an arbitrary operator is calculated to obtain the classical output. As it is a real number, it represents the filter's output pixel, while each wire yields a different image channel. For instance, a quanvolutional filter of size \(2\times 2\) has 4-qubit circuit, which transforms 1 input image into 4 images of reduced size. #### iii.2.2 Structure of HQNN-Quanv This subsection details the architecture of the HQNN-Quanv, which is shown in Fig. 6. At first, a simple angle embedding of the classical data via \(R_{y}(\varphi)\) single-qubit rotations on each wire is used, where the original pixel value \([0,1]\) is scaled to \(\varphi\in[0,\pi]\). Then, we have a variational circuit part, which consists of 4 single-qubit rotations, parameterized with trainable weights, as well as three CNOT gates. At the end of the circuit, we measure the expectation value \(\langle\sigma_{z}\rangle\) of the Pauli-Z operator on each qubit. Each channel is a picture with 4 \(\times\) 4 pixels. After that, four output channels are flattened and fed into fully connected layer, which yields a digit's probability. #### iii.2.3 Training and results In this section, we describe the training process. In order to reduce the training time of the HQNN-Quanv, only 600 images from the MNIST dataset A are used with 500 of them acting as training data and 100 as test data. We also use PyTorch's resize transform with bilinear interpolation to downscale images from \(28\times 28\) to \(14\times 14\) pixels. We still use a cross-entropy loss function. Figure 8: Test accuracies for HQNN-Quanv (\(67\pm 1\%\)), CNN1 (\(53\pm 2\%\)) and CNN4 (\(66\pm 2\%\)). The HQNN outperforms the CNN1, which has the same number of variational parameters. The HQNN’s accuracy score is equivalent to CNN4’s, which has 4 times many weights in its filter. Figure 7: Train and test accuracies for the CNN and HQNN-Quanv models with stride set to 4. The models differ only in the kernel and the number of output channels. (a) HQNN: Quanvolutional kernel with 1 input channel, 4 output channels; (b) CNN1: Convolutional kernel with 1 input channel, 1 output channel; (c) CNN4: Convolutional kernel with 1 input channel, 4 output channels. The HQNN-Quanv with an accuracy of \(67\pm 1\%\) on test data outperforms the CNN1 which has an accuracy of \(53\pm 2\%\) and the CNN4 which has an accuracy of \(66\pm 2\%\), although the CNN1 has the same number of weights in the filter as in the hybrid model and CNN4 has 4 times more weights than in the hybrid model. While the classical model has only one way of training weights via backpropagation, the HQNN has several options, such as the parameter-shift rule, adjoint differentiation [46] or backpropagation (which, of course, is impossible on a real quantum computer). Adjoint differentiation seems to have the most favorable scaling with both layers and wires [47], but on this particular circuit (Fig. 6) backpropagation proved to be quicker. 
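A minimal sketch of such a quanvolutional filter in PennyLane/PyTorch is given below: each \(2\times 2\) patch is scaled to \([0,\pi]\), embedded with \(R_{y}\) rotations, passed through one trainable single-qubit rotation per wire (four weights in total) followed by a chain of CNOTs, and read out as four \(\langle\sigma_{z}\rangle\) values, one per output channel. The stride of 4 matches the figure captions, while the rotation axis of the trainable gates and the class name are assumptions rather than the original implementation.

```python
import torch
import pennylane as qml

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev, interface="torch")
def quanv_circuit(pixels, weights):
    # Angle embedding: pixel values in [0, 1] are scaled to rotation angles in [0, pi].
    for w in range(n_wires):
        qml.RY(torch.pi * pixels[w], wires=w)
    # Trainable part: one parameterized rotation per qubit (axis chosen arbitrarily here)
    # followed by a chain of CNOTs.
    for w in range(n_wires):
        qml.RX(weights[w], wires=w)
    for w in range(n_wires - 1):
        qml.CNOT(wires=[w, w + 1])
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

class QuanvLayer(torch.nn.Module):
    """2x2 quantum filter with stride 4: one input channel -> 4 output channels (a sketch)."""
    def __init__(self, stride: int = 4):
        super().__init__()
        self.stride = stride
        self.weights = torch.nn.Parameter(0.01 * torch.randn(n_wires))  # 4 trainable weights

    def forward(self, images):                      # images: (batch, 1, H, W)
        batch, _, H, W = images.shape
        rows, cols = range(0, H - 1, self.stride), range(0, W - 1, self.stride)
        out = torch.zeros(batch, n_wires, len(rows), len(cols))
        for b in range(batch):
            for i, r in enumerate(rows):
                for j, c in enumerate(cols):
                    patch = images[b, 0, r:r + 2, c:c + 2].reshape(-1)
                    vals = quanv_circuit(patch, self.weights)
                    out[b, :, i, j] = torch.stack(list(vals)) if isinstance(vals, (list, tuple)) else vals
        return out

x = torch.rand(2, 1, 14, 14)        # downscaled MNIST-like inputs
print(QuanvLayer()(x).shape)        # torch.Size([2, 4, 4, 4])
```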
Considering everything stated above, let us examine the training results. We trained two CNNs with different numbers of output channels and one HQNN for 20 epochs (Fig. 7). The models were intentionally kept simple, with few parameters, so as to avoid overfitting on the relatively small dataset. The test accuracy of these models is presented in Fig. 8. For each epoch, the accuracy is averaged over 10 models with random initial weights, and the error bars depict one standard deviation. At the end of the training, the HQNN-Quanv had a test accuracy of \(0.67\pm 0.01\), which is close to the CNN4 result of \(0.66\pm 0.02\), while CNN1 reached \(0.53\pm 0.02\). The HQNN model has only 4 trainable weights in its quanvolutional kernel, which parameterize rotation gates in the VQC. CNN1 and CNN4 have 4 and 16 trainable parameters in their convolutional kernels, respectively. Therefore, the HQNN matches the accuracy of CNN4, which has 4 times as many weights in its filter. ## Discussion In this work, we introduced two hybrid approaches to image classification. The first approach was the HQNN-Parallel. This method allowed us to classify handwritten images of digits from the MNIST dataset with an accuracy of more than 99%. Notably, the quantum model used eight times fewer weights than its classical counterpart, which achieved an accuracy of only 98.71%. We also demonstrated the successful implementation of parallel variational quantum circuits in the hybrid model, which enabled these results. Our proposed architecture is a unique combination of classical and quantum layers, which we believe to be a breakthrough in solving image classification problems. The second approach we presented was the HQNN-Quanv. The quanvolutional layer needs significantly fewer weights, four times fewer than the classical analogue, to achieve approximately the same classification accuracy (\(67\pm 1\%\) for the hybrid model versus \(66\pm 2\%\) for the classical one on the test samples when averaged over 10 models), while the classical analogue with the same number of variational parameters as the hybrid model achieves an accuracy of only \(53\pm 2\%\). Further research is needed to explore the full potential of HQNNs for image classification, including testing on larger datasets and more complex architectures. Additionally, the development of more efficient optimization techniques for training VQCs and the implementation of larger-scale quantum hardware could lead to even more significant performance improvements. In summary, our developments provide two hybrid approaches to image classification that demonstrate the power of combining classical and quantum methods. Our proposed models show improved performance over classical models with similar architectures while using significantly fewer parameters. We believe that these results pave the way for further research in developing hybrid models that utilize the strengths of both classical and quantum computing.
2306.07470
Reviving Shift Equivariance in Vision Transformers
Shift equivariance is a fundamental principle that governs how we perceive the world - our recognition of an object remains invariant with respect to shifts. Transformers have gained immense popularity due to their effectiveness in both language and vision tasks. While the self-attention operator in vision transformers (ViT) is permutation-equivariant and thus shift-equivariant, patch embedding, positional encoding, and subsampled attention in ViT variants can disrupt this property, resulting in inconsistent predictions even under small shift perturbations. Although there is a growing trend in incorporating the inductive bias of convolutional neural networks (CNNs) into vision transformers, it does not fully address the issue. We propose an adaptive polyphase anchoring algorithm that can be seamlessly integrated into vision transformer models to ensure shift-equivariance in patch embedding and subsampled attention modules, such as window attention and global subsampled attention. Furthermore, we utilize depth-wise convolution to encode positional information. Our algorithms enable ViT, and its variants such as Twins to achieve 100% consistency with respect to input shift, demonstrate robustness to cropping, flipping, and affine transformations, and maintain consistent predictions even when the original models lose 20 percentage points on average when shifted by just a few pixels with Twins' accuracy dropping from 80.57% to 62.40%.
Peijian Ding, Davit Soselia, Thomas Armstrong, Jiahao Su, Furong Huang
2023-06-13T00:13:11Z
http://arxiv.org/abs/2306.07470v1
# Reviving Shift Equivariance in Vision Transformers ###### Abstract Shift equivariance is a fundamental principle that governs how we perceive the world -- our recognition of an object remains invariant with respect to shifts. Transformers have gained immense popularity due to their effectiveness in both language and vision tasks. While the self-attention operator in vision transformers (ViT) is permutation-equivariant and thus shift-equivariant, patch embedding, positional encoding, and subsampled attention in ViT variants can disrupt this property, resulting in inconsistent predictions even under small shift perturbations. Although there is a growing trend in incorporating the inductive bias of convolutional neural networks (CNNs) into vision transformers, it does not fully address the issue. We propose an adaptive polyphase anchoring algorithm that can be seamlessly integrated into vision transformer models to ensure shift-equivariance in patch embedding and subsampled attention modules, such as window attention and global subsampled attention. Furthermore, we utilize depth-wise convolution to encode positional information. Our algorithms enable ViT, and its variants such as Twins to achieve 100% consistency with respect to input shift, demonstrate robustness to cropping, flipping, and affine transformations, and maintain consistent predictions even when the original models lose 20 percentage points on average when shifted by just a few pixels with Twins' accuracy dropping from 80.57% to 62.40%. ## 1 Introduction Inductive bias refers to a set of assumptions or beliefs that guide the design of machine learning algorithms, aiming to reduce the search space for an optimal model. Humans possess the remarkable ability to perceive and recognize objects despite variations such as deformations, occlusions, and translations. The success of convolutional neural networks (CNNs) can be attributed to their inductive bias of shift equivariance, which allows them to mimic this human ability. For instance, when we observe an image of a cat, shifting the cat's position within the image does not affect our recognition of the cat, and we remain aware of the shift that has occurred. Transformers have emerged as strong alternatives to CNNs in computer vision, and their success in natural language processing further highlights their potential as a research area. A key element of transformers is the self-attention module, which exhibits permutation-equivariance. However, despite this strong inductive bias, vision transformers are neither shift-equivariant nor permutation-equivariant. This is due to the patch embedding, positional embedding, and subsampled attention (in the case of ViT variants), which disrupt shift equivariance as input shifts lead to different pixels within each image patch or different tokens in each window, resulting in different computations. Integrating CNNs with vision transformers can partially address the lack of shift equivariance, but it does not fully resolve the issue. The original design of vision transformers already incorporates convolution within its architecture; the patch embedding layer is functionally equivalent to strided convolution, but its downsampling operation disrupts shift equivariance. 
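This last point is easy to check numerically. The short sketch below (illustrative only, not from the paper) applies a strided convolution of the kind used for patch embedding to an input and to its one-pixel circular shift, and confirms that the shifted output is not a circular shift of the original output:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
patch_embed = nn.Conv2d(1, 1, kernel_size=4, stride=4, bias=False)  # patch embedding ~ strided conv
x = torch.randn(1, 1, 16, 16)
y = patch_embed(x)                                        # 4x4 output grid
y_shift = patch_embed(torch.roll(x, shifts=1, dims=-1))   # one-pixel circular shift of the input

# Shift-equivariance would require y_shift to equal some circular shift of y; it does not.
mismatch = min(
    (y_shift - torch.roll(y, shifts=s, dims=-1)).abs().max().item() for s in range(y.shape[-1])
)
print(f"closest match over all output shifts: {mismatch:.3f}")  # generically nonzero
```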
Although CoAtNet proposes a relative attention method that combines depthwise convolution with attention to achieve shift equivariance, this approach still requires computing full global attention, and shift equivariance is not maintained when downsampled attention is required for computational efficiency [3; 13; 7]. Both MaxViT [13] and Twins transformer [3] utilize depth-wise convolution to encode positional information, but their block attention (or window attention) and strided convolution are not shift equivariant. In this work, we seek to explicitly incorporate the inductive bias of CNNs into vision transformers, fully addressing the issue of shift equivariance. We achieve this by introducing replaceable modules that enable a wide range of vision transformer models to be shift-equivariant [9; 8; 3; 13; 4; 14; 16]. Our proposed solution is a nonlinear operator -- the polyphase anchoring algorithm. By consistently selecting the maximum polyphase as anchors for computing strided convolution and subsampled attentions, we ensure shift equivariance. Furthermore, we employ depthwise convolution with circular padding to encode positional information, as opposed to using the absolute positional embedding in Dosovitskiy et al. [9] or the relative positional embedding in Liu et al. [11; 12], which are not shift equivariant. Our primary contributions in this work are as follows: * We present versatile and adaptable modules that seamlessly integrate with vision transformer models, leading to improved performance in vision transformer models. * Our innovative approach incorporates an input-adaptive nonlinear operator that guarantees shift-equivariance, resulting in more robust and reliable outcomes across a range of visual tasks. * We equip vision transformers with complete shift-equivariance capabilities, substantiating our approach with both theoretical geometric guarantees and empirical evidence that underscore its effectiveness and practical applicability. ## 2 Preliminaries To effectively restore shift-equivariance in vision transformers, it is necessary to examine each individual module and identify the components responsible for disrupting the shift-equivariance property. To achieve this, we must first establish formal definitions for equivariance and self-attention. Figure 1: The maximum polyphase is colored in blue. Each shape with distinct color represents a token. We also illustrate the concept of _anchors_ — the top left coordinate in each window. The red and yellow shapes indicate that window attention produces inconsistent predictions on shifted image whereas the composition of polyphase anchoring and window attention does not. ### Equivariance Equivariance serves as a formal concept of consistency under transformations [10]. A function \(f:V_{1}\to V_{2}\) is considered equivariant to transformations from a symmetry group \(G\) if applying the symmetry to the input of \(f\) produces the same result as applying it to the output: \[\forall g\in G:f(g\cdot x)=g^{\prime}\cdot f(x) \tag{1}\] Here, \(\cdot\) denotes the linear mapping of the input by the representation of group elements in \(G\). Throughout this paper, all instances of \(\cdot\) adhere to this definition. When \(g=g^{\prime}\), the function is referred to as G-equivariant. If \(g^{\prime}\) is the identity, the function is G-invariant. For cases where \(g\neq g^{\prime}\), the function is considered generally equivariant. 
General equivariance is a valuable concept when the input and output spaces have different dimensions. When \(G\) represents the translation group, the above definition yields shift-equivariance. ### Self-attention A self-attention operator \(A_{s}\) exhibits permutation-equivariance. Let \(X\) represent the input matrix, and \(T_{\pi}\) denote any spatial permutation. We can express this as: \[A_{s}(T_{\pi}(X))=T_{\pi}(A_{s}(X)). \tag{2}\] \(A_{s}\) is the self-attention operator with parameter matrices \(W_{q}\in\mathbb{R}^{d\times d_{k}}\), \(W_{k}\in\mathbb{R}^{d\times d_{k}}\), and \(W_{v}\in\mathbb{R}^{d\times d_{v}}\): \[A_{s}=\text{SoftMax}(XW_{q}(XW_{k})^{T})XW_{v}=\text{SoftMax}(QK^{T})V. \tag{3}\] ## 3 Approach To achieve model-wise shift-equivariance, we first detect the modules that lack shift equivariance. We then introduce a polyphase anchoring algorithm to ensure shift-equivariance for strided convolution, window attention, and global subsampled attention. Finally, we use depthwise convolution with circular padding to guarantee shift-equivariance in positional encoding. As the composition of shift-equivariant functions remains shift-equivariant, we obtain a shift-equivariant model. ### Detecting modules lacking shift equivariance Vision transformers consist of a patch embedding layer, positional encoding, transformer blocks, and MLP layers. We analyze each of these modules in ViT and its variants, discovering the following: * Patch embedding layer (strided convolution) is not shift-equivariant due to downsampling. * Absolute positional encoding [9] and relative positional embedding [11; 12] are not shift-equivariant. * Normalization, global self-attention, and MLP layers are shift-equivariant. * Subsampled attentions such as window attention [11; 12; 13; 3] and global subsampled attention [3] are not shift-equivariant. **Patch embedding** converts image patches into sequence vector representations through strided convolution. However, strided convolution is not shift-equivariant due to downsampling, as addressed by Zhang [17] and Chaman and Dokmanic [2]. Figure 1 illustrates that the image patching layer, or strided convolution, is not shift-equivariant. When an image is shifted by a pixel, the pixels in each window change, leading to different computations. **Positional encoding** is a method for incorporating spatial location information into tokens, the representations of input image patches. Popular positional encoding techniques such as absolute positional encoding in ViT [9] and relative positional encoding in Swin [11; 12] are not shift-equivariant. Absolute positional encoding [9] adds the absolute positional information to input tokens by considering an input image as a sequence or a grid of patches [9; 14]. Trivially, absolute positional embedding is not shift-equivariant because the same absolute positional information is added to the input tokens regardless of shift, as shown in Figure 2. The relative positional embedding introduced by Liu et al. [11] is the following: \[\text{Attention}(Q,K,V)=\text{SoftMax}(\frac{QK^{T}}{\sqrt{d}}+B)V, \tag{4}\] where \(B\in R^{M^{2}\times M^{2}}\) is the relative position bias term for each head; \(Q,K,V\in R^{M^{2}\times d}\) are the query, key and value matrices; \(d\) is the query/key dimension, and \(M^{2}\) is the number of patches in a window. Although self-attention is permutation equivariant, self-attention with relative position bias is not shift-equivariant. Further mathematical deductions are provided in the Appendix. 
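As a concrete illustration of these observations, the following minimal PyTorch sketch (an illustration added here, not code from the paper; tensor sizes and the random seed are arbitrary) numerically checks that the plain self-attention of Eq. (3) is permutation-equivariant, while a strided-convolution patch embedding changes its output under a one-pixel shift.

```python
import torch

def self_attention(X, Wq, Wk, Wv):
    # Plain single-head self-attention as in Eq. (3): SoftMax(Q K^T) V.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return torch.softmax(Q @ K.T, dim=-1) @ V

torch.manual_seed(0)

# Permutation equivariance of self-attention, Eq. (2).
n, d = 8, 16
X = torch.randn(n, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
perm = torch.randperm(n)
lhs = self_attention(X[perm], Wq, Wk, Wv)      # A_s(T_pi(X))
rhs = self_attention(X, Wq, Wk, Wv)[perm]      # T_pi(A_s(X))
print("self-attention permutation-equivariant:", torch.allclose(lhs, rhs, atol=1e-5))

# Strided-convolution patch embedding is NOT shift-equivariant.
img = torch.randn(1, 3, 32, 32)
patch_embed = torch.nn.Conv2d(3, 8, kernel_size=4, stride=4, bias=False)
shifted = torch.roll(img, shifts=(1, 1), dims=(2, 3))   # one-pixel circular shift
print("patch embedding changed under a 1-pixel shift:",
      not torch.allclose(patch_embed(shifted), patch_embed(img), atol=1e-5))
```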
**Normalization layers** standardize the input data or activations of preceding layers to stabilize training and enhance model performance by ensuring consistent scales and distributions. Both batch normalization and layer normalization are shift-equivariant. Trivially, normalizing a shifted input along batch and feature dimensions is equivalent to shifting the normalized input. **MLP layers**, or Multi-Layer Perceptron layers, are a sequence of feedforward neural network layers that perform a linear transformation followed by a non-linear activation function. An MLP layer is shift-equivariant. In layer \(l\) of an MLP model, we have: \[h^{(l)}=\phi(xW^{(l)}+b^{(l)}) \tag{5}\] where \(x\) is a row vector, \(W\) is a weight matrix, and \(b\) is a bias term. Given an input matrix \(X\) whose row vectors are tokens, it is obvious that the MLP layer is shift equivariant with respect to input tokens. In the ViT architecture, we have identified that MLP layers and normalization layers are shift-equivariant, while patch embedding and positional encoding are not. ViT variants [11, 12, 4, 8, 14, 13] introduce additional challenges for shift-equivariance, as they typically employ subsampled attention operations to reduce the quadratic computational complexity with respect to the number of tokens in global self-attention. **Subsampled attentions** are streamlined versions of global self-attention that can be classified into two categories: local and global [11, 12, 3, 16, 13]. Local attention is typically employed in conjunction with subsampled global attention to encode substantial spatial information while avoiding excessive computational costs [13, 3, 4, 16]. However, the use of subsampled attentions often results in a lack of shift-equivariance due to downsampling. Consequently, addressing the shift-equivariance issue in these subsampled attentions is crucial. The most prevalent local attention mechanism is window attention, while a popular subsampled global attention variant is the global subsampled self-attention (GSA) introduced by Chu et al. [3]. To directly tackle the lack of shift-equivariance, we propose the polyphase anchoring algorithm as a solution. This approach is designed to maintain spatial information while reducing computational complexity, thus promoting shift-equivariance in subsampled attention mechanisms. Figure 2: The left figure illustrates that the identical absolute positional encoding is applied to both the input and its circularly shifted counterpart, resulting in a lack of shift-equivariance. The right figure highlights that, in the context of ViT, patch embedding and positional encoding do not exhibit shift-equivariant properties. ### Polyphase anchoring algorithm #### 3.2.1 Introduction Inspired by the concept of adaptive polyphase sampling presented in Chaman and Dokmanic [2], we propose the polyphase anchoring algorithm, an efficient technique that can be seamlessly integrated with various types of subsampled attention operators [13, 3, 11, 12, 8, 7] to ensure shift-equivariance. The algorithm is implemented as an autograd function in PyTorch, making it simple to incorporate into deep learning models. Polyphase anchoring identifies the maximum \(L_{p}\) norm polyphase and shifts the input accordingly, so that the maximum polyphase aligns with the anchor positions of window attention, as illustrated in Figure 1. _Anchors_ of window attention represent the set of coordinates at the top-left of each window, as depicted in Figure 1. 
\[\text{anchors}=\{(i,j)\mid i\equiv 0\pmod{s},\quad j\equiv 0\pmod{s},\quad i \leq H,j\leq W,i,j\in\mathbb{Z}\}. \tag{6}\] Here, \(s\times s\) denotes the size of the window in window attention, \((i,j)\) is a coordinate on a 2D grid. Algorithm 1 demonstrates polyphase ordering. For the brevity in Algorithm 1, we define the polyphase \(X_{pq}\) mathematically here. Let \(S_{pq}\) be some polyphase whose first element is at coordinate \((p,q)\) on the 2D grid, \(S_{pq}=\{(i+p,j+q)\mid i\equiv 0\pmod{s},\quad j\equiv 0\pmod{s}\quad i \leq H,j\leq W,i,j\in\mathbb{Z}\}\). Let \(X_{pq}\) be all the tokens on the polyphase \(S_{pq}\). \(X_{pq}=\{x=f(m,n)\in\mathbb{R}^{C}\mid\forall(m,n)\in S_{pq}\}\), where \((m,n)\) is a coordinate of some token in \(S_{pq}\) and \(f\) is a mapping from a coordinate to its corresponding token. ``` 1:Input\(X\in R^{\dots\times H\times W}\), stride size \(s\in\mathbb{Z}\) 2:Maximum polyphase is \(\hat{X}_{pq}=\arg\max_{X_{pq}|p,q\in\{0,\dots,s-1\}}||X_{pq}||\) 3:\(\hat{X}=g_{pq}\cdot(X)\) where \(g_{pq}\) circularly shifts \(X\) by \((-p,-q)\) along the last two dimensions. 4:Output:\(\hat{X}\) ``` **Algorithm 1** Polyphase anchoring The polyphase anchoring algorithm is a nonlinear operator that conditionally shifts the input based on its maximum \(L_{p}\) norm polyphase. This guarantees shift-equivariance in strided convolution, subsampled attention like window attention, and GSA. As a result, the lack of shift-equivariance in patch embedding modules and subsampled attention modules in ViT variants, such as Twins [3], is addressed. **Lemma 3.1**.: _Polyphase anchoring operator \(P\) is general equivariant with respect to \(\forall g\in G\), where \(G\) is the symmetry group of translations, and \(P:V\to V\) is a nonlinear operator that conditionally shift the input \(X\). \(\forall g\in G\), \(\exists g^{\prime}\in G\) s.t:_ \[P(g\cdot X)=g^{\prime}\cdot P(X) \tag{7}\] **Corollary 3.1**.: \(P(g\cdot X)=g^{\prime}\cdot P(X)\) _where \(g^{\prime}\) translates \(P(X)\) by an integer multiple of stride size \(s\). Stride size is the distance between two consecutive tokens in the same polyphase on a 2D grid._ #### 3.2.2 Addressing Shift Equivariance in Patch Embedding and Subsampled Attention As discussed in Section 3.1, strided convolution and subsampled attention methods, such as window attention and GSA, inherently lack shift equivariance. To tackle this issue, we employ the polyphase anchoring algorithm, which conditionally shifts the input based on the maximum polyphase, ensuring shift-equivariance in these modules. Utilizing Lemma 3.1, which establishes that polyphase anchoring is generally shift-equivariant, i.e., \(P(g\cdot X)=g^{\prime}\cdot P(X)\), and Corollary 3.1, which states that \(g^{\prime}\) translates \(P(X)\) by an integer multiple of stride size \(s\), _we show that the composition of polyphase anchoring with strided convolution, window attention, and global subsampled attention results in shift-equivariant operations_. Consequently, we effectively address the lack of shift equivariance in patch embedding modules and subsampled attention modules for ViT variants such as Twins [3]. Detailed proofs are provided in Appendix. **Lemma 3.2**.: _Let \(P\) be the polyphase anchoring operator and \(*_{*}\) represent the strided convolution operator. 
\(\forall g\in G\), where \(G\) is the translation group, and \(\forall s_{1},s_{2}\in\mathbb{Z}\) s.t \(s_{1}=s_{2}\), where \(s_{1}\) is the stride size in the polyphase and \(s_{2}\) is the stride size of convolution, the composition of strided convolution and polyphase anchoring is general shift-equivariant:_ \[P(g\cdot X)*_{*}h=g^{\prime}\cdot(P(X)*_{*}h) \tag{8}\] Here, \(X\) denotes the input signal, \(h\) is the convolution filter, and \(\cdot\) signifies the linear mapping of the input by the representation of group elements in \(G\). Furthermore, \(P:V\to V\) is a nonlinear operator acting on the input space \(V\). **Lemma 3.3**.: _Given a window attention operator \(A_{w}\) and polyphase anchoring operator \(P\), the composition of these operators is general shift-equivariant \(\forall g\in G\), where \(G\) is the translation group, and for \(s,w\in\mathbb{Z}\) such that \(s=w\), where \(s\) is the stride size in the polyphase and \(w\times w\) is the window size. This can be expressed as:_ \[A_{w}(P(g\cdot X))=g^{\prime}\cdot A_{w}(P(X)) \tag{9}\] In the window attention operator \(A_{w}\), we have: \[A_{w}(X)=\left[\begin{array}{ccc}A_{s}(X_{00})&\cdots&A_{s}(X_{0n})\\ \vdots&\vdots&\vdots\\ A_{s}(X_{m0})&\cdots&A_{s}(X_{mn}),\end{array}\right] \tag{10}\] where \(A_{s}\) is the self-attention operator defined in section 2.2, and \(X_{ij}\) contains all the tokens in a window. **Lemma 3.4**.: _For a global subsampled attention operator \(A_{g}\)[3] combined with a polyphase anchoring operator \(P\), general shift-equivariance is achieved for \(\forall g\in G\), where \(G\) is the translation group, and for \(s_{1},s_{2}\in\mathbb{Z}\) such that \(s_{1}=s_{2}\). Here, \(s_{1}\) is the stride size in the polyphase, and \(s_{2}\) is the stride size in the global subsampled attention. This can be expressed as:_ \[A_{g}(P(g\cdot X))=g^{\prime}\cdot A_{g}(P(X)) \tag{11}\] In global subsampled attention, we have: \[A_{g}(X)=\text{SoftMax}(QK_{s}^{T})V_{s}, \tag{12}\] where \(K_{s}\) and \(V_{s}\) are subsampled from the full keys \(K\) and values \(V\) using strided convolution. ### Positional Embedding Positional embedding is another factor that can cause models to lose shift-equivariance. As demonstrated previously, both absolute positional embedding [9] and relative positional embedding [11; 12] are not shift-equivariant. However, the conditional positional embedding introduced by Chu et al. [4] promotes shift-equivariance by employing zero-padded depthwise convolution to encode positional information. By replacing positional encoding with circularly-padded depthwise convolution, we achieve shift-equivariance. Formally, given an input tensor \(\mathbf{X}\) of shape \((C_{\text{in}},H_{\text{in}},W_{\text{in}})\) and a set of depthwise filters \(\mathbf{W}=\{W_{1},W_{2},\ldots,W_{C_{\text{in}}}\}\) of size \((C_{\text{in}},k,k)\), where \(C_{\text{in}}\) denotes the number of input channels and \(k\) represents the kernel size, the depthwise convolution operation can be defined as: \[\mathbf{Y}_{i}=W_{i}\odot\mathbf{X}_{i},\quad\forall i\in\{1,2,\ldots,C_{ \text{in}}\} \tag{13}\] Here, \(\mathbf{X}_{i}\) denotes the \(i\)-th input channel, \(\mathbf{Y}_{i}\) represents the corresponding output channel, and \(\odot\) denotes the standard convolution operation. Assuming circular padding, convolution at each channel is shift-equivariant, making depthwise convolution shift-equivariant as well. 
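To make the preceding constructions concrete, here is a minimal PyTorch sketch of a polyphase anchoring step in the spirit of Algorithm 1, together with a circularly padded depthwise convolution of the kind used for positional encoding. This is an illustration under stated assumptions (the \(L_{2}\) norm is chosen for the polyphase selection, and all function names are ours), not the released implementation.

```python
import torch

def polyphase_anchor(x: torch.Tensor, s: int):
    """Circularly shift x of shape (..., H, W) so that its maximum-norm polyphase
    starts at the anchor (0, 0); a sketch of Algorithm 1 using the L2 norm."""
    best, best_pq = None, (0, 0)
    for p in range(s):
        for q in range(s):
            norm = x[..., p::s, q::s].pow(2).sum().item()   # ||X_pq||^2
            if best is None or norm > best:
                best, best_pq = norm, (p, q)
    p, q = best_pq
    # Shift by (-p, -q) so the selected polyphase lands on the anchors.
    return torch.roll(x, shifts=(-p, -q), dims=(-2, -1)), best_pq

class CircularDepthwisePE(torch.nn.Module):
    """Positional encoding via depthwise convolution with circular padding."""
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, k, padding=k // 2,
                                    groups=channels, padding_mode="circular")

    def forward(self, x):
        # Add convolution-encoded positional information to the tokens.
        return x + self.conv(x)

# Check Corollary 3.1 numerically: after anchoring, a shifted input and the
# original input agree up to a translation by a multiple of the stride s.
torch.manual_seed(0)
x = torch.randn(1, 8, 16, 16)
s = 4
y_shifted, _ = polyphase_anchor(torch.roll(x, shifts=(3, 5), dims=(-2, -1)), s)
y_plain, _ = polyphase_anchor(x, s)
match = any(torch.allclose(y_shifted, torch.roll(y_plain, shifts=(a, b), dims=(-2, -1)))
            for a in range(0, 16, s) for b in range(0, 16, s))
print("anchored tensors agree up to a multiple-of-s shift:", match)  # True
```

Because the residual shift is a multiple of the window size, a window attention or strided convolution applied after anchoring sees the same windows for the shifted and unshifted inputs, which is the content of Lemmas 3.2 and 3.3.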
Since CNNs utilize convolution to encode positional information, it is plausible that transformers can also employ depthwise convolution to encode information, as demonstrated in [5; 4]. By ensuring shift-equivariance in patch embedding, positional embedding, and subsampled attention (in the case of ViT variants), we can achieve a truly shift-equivariant model. Furthermore, by incorporating a shift-invariant pooling operation in the classification head, we can obtain a truly shift-invariant model. ## 4 Experiments In this section, we demonstrate that we can construct a \(100\%\) truly shift-equivariant ViT using adaptive polyphase anchoring and positional embedding. Additionally, we show that we can make the Twins transformer [3] truly shift-equivariant, a vision transformer variant that incorporates two types of attention modules--window attention and global subsampled self-attention. Models employing our algorithm exhibit superior accuracy in fair comparisons, improved robustness under shifting, cropping, flipping, and random patch erasing, a 22.4 percentage point gain (a 41.6% relative increase) over ViT small under the worst-of-30 shift attack, and 100% consistency under shift attacks. ### Classification Accuracy and Consistency on ImageNet-1k **Settings.** We evaluate six architectures, including ViT base, ViT small, Twins [3] base, and their shift-equivariant counterparts using polyphase anchoring on ImageNet-1k, which contains 1.28M training images and 50K validation images from 1,000 classes. For simplicity, we refer to ViT and Twins using polyphase anchoring and circular depthwise convolution as ViT-poly and Twins-poly. There are two types of Twins models introduced by Chu et al. [3], and we default Twins to Twins-svt because it demonstrates superior performance. **Training.** To ensure fair comparisons, we conduct a controlled experiment by training each model and its polyphase counterpart with the same hyperparameters from scratch on ImageNet-1k. All training instances use the same data augmentation strategy. **Evaluation.** We measure performance using accuracy, consistency, and accuracy under small random shifts from 0 to 15 pixels. Consistency [2] measures the likelihood of the model assigning the image and its shifted copy to the same class. **Results.** Table 1 compares the ViT and Twins transformer models against their shift-equivariant counterparts, using models trained from scratch with exactly the same hyperparameters. Poly version models demonstrate comparable raw accuracy, superior accuracy under random shifts, and 100% consistency. ### Robustness tests on ImageNet 1k **Settings.** We assess the robustness of the models trained from scratch in Section 4.1 under various types of transformations, using ImageNet-1K for our experiments. **Evaluations.** We evaluate the accuracy and consistency of the models under random cropping, horizontal flipping, random patch erasing, and random affine transformations. Additionally, we perform a worst-of-N shift attack for each batch of images: we keep the shift within the small range \((-15,15)\), so that it is inconsequential to human perception, and use the worst-case shift found by the grid search. Some of these metrics, especially the worst-of-N shift, are sensitive to the batch size used, since the worst shift is chosen per batch. We use a batch size of 64 for all metrics and additionally evaluate worst-of-30 with a batch size of 1 for 2000 samples to show the lowest performance. 
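For concreteness, here is a sketch of how such a worst-of-N shift attack can be evaluated. This is our paraphrase of the protocol described above: `model` and `loader` are placeholders, circular shifts stand in for the shift implementation, and random sampling of candidate shifts is used in place of the grid search.

```python
import torch

@torch.no_grad()
def worst_of_n_shift_accuracy(model, loader, n_trials=30, max_shift=15, device="cuda"):
    """For each batch, try n_trials candidate shifts in (-max_shift, max_shift) and
    keep the one that hurts accuracy most; report accuracy under those worst shifts."""
    model.eval()
    correct, total = 0.0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        worst_acc = None
        for _ in range(n_trials):
            dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
            preds = model(torch.roll(images, shifts=(dx, dy), dims=(2, 3))).argmax(dim=1)
            acc = (preds == labels).float().mean().item()
            worst_acc = acc if worst_acc is None else min(worst_acc, acc)
        correct += worst_acc * labels.size(0)
        total += labels.size(0)
    return 100.0 * correct / total
```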
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Model** & **image size** & **\#param.** & **Epochs** & **Acc. IN1K** & **Consis.** & **Acc. Rand. S** \\ \hline ViT\_S & \(224^{2}\) & 22M & 300 & 75.52 & 86.61 & 74.98 \\ ViT\_S-poly & \(224^{2}\) & 22M & 300 & **76.37** & **100** & **76.37** \\ \hline ViT\_B & \(224^{2}\) & 87M & 300 & 73.85 & 85.60 & 73.01 \\ ViT\_B-poly & \(224^{2}\) & 86M & 300 & **74.62** & **100** & **74.62** \\ \hline Twins\_B & \(224^{2}\) & 56M & 300 & 80.57 & 91.25 & 79.90 \\ Twins\_B-poly & \(224^{2}\) & 56M & 300 & **80.59** & **100** & **80.59** \\ \hline \hline \end{tabular} \end{table} Table 1: ImageNet1K training from scratch Results.As shown in Table 2, ViT_B-poly and ViT_S-poly obtain comparable or better accuracy than their respective counterparts, ViT_B and ViT_S, under all considered transformations. Under the worst-of-k shift attack, our models achieve 100% consistency and significantly improved accuracy, while having slight-to-high gains on the other transformations. ### Stability and shift-equivariance tests on ImageNet-1K Settings.We measure stability of output logits under small shifts as well as shift equivariance in the feature space using models trained from scratch in 4.1. Output logits variancemeasures the variability of the model's logits predictions with respect to a range of small random shifts from -5 to 5. It quantifies the spread or dispersion of the logits (\(L\)) as a function of the input shift. Mathematically, the output logits variance can be calculated as follows: \[\text{Variance}=\frac{1}{N}\sum_{i=1}^{N}\left(L(x_{i})-\bar{L}\right)^{2} \tag{14}\] where Variance represents the output logits variance, \(N\) is the total number of samples, \(x_{i}\) denotes the input sample, \(L(x_{i})\) corresponds to the logits prediction for the input \(x_{i}\), \(\bar{L}\) represents the mean logits prediction for the given range of input shifts, the range of input shifts from -5 to 5 is denoted as \(\Delta x\). As demonstrated in Figure 3, the output logits variance concerning small shift perturbations is almost zero for ViT_S/16-poly and Twins_B-poly, indicating that the output logits remain unchanged under small input shifts. Conversely, the output logits variance is nonzero for nearly 50% of the input images, suggesting that the model's assigned probability for the input label alters in response to minor pixel shifts. Shift-equivariance testsare unit tests that measure if the features are shift-equivariant. Let \(\mathcal{M}\) be a machine learning model that takes an input \(\mathbf{X}\) and produces a feature map \(\mathbf{F}=\mathcal{M}(\mathbf{X})\). For a given translation \(g\in G\), define the shifted input \(\mathbf{X}^{\prime}\) as \(\mathbf{X}^{\prime}=g\cdot\mathbf{X}\). Let \(\mathbf{F}^{\prime}=\mathcal{M}(\mathbf{X}^{\prime})\) be the feature map obtained by applying the model to the shifted input. The feature shift-equivariance test can be defined as follows: \[\text{shift-equivariance}(\mathcal{M})=\begin{cases}0,&\text{if }\mathbf{F}^{ \prime}=g^{\prime}\cdot\mathbf{F}\\ \|\mathbf{F}^{\prime}-\mathbf{F}\|,&\text{otherwise}\end{cases} \tag{15}\] where \(g^{\prime}\in G\) is translation in the feature space, \(\|\cdot\|\) is \(L_{2}\) norm. As demonstrated in Figure 3, both the polyphase models of ViT and Twins successfully pass all shift equivariance tests in the feature space, while the original ViT and Twins models fail to do so and exhibit substantial norm differences in the feature space. 
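The two diagnostics above can be phrased as short evaluation routines. The sketch below is our own rendering (helper names and the exact shift ranges are assumptions): it computes the consistency metric over shifted copies and an Eq. (15)-style feature-space gap that is zero for a shift-equivariant backbone.

```python
import torch

@torch.no_grad()
def consistency(model, images, n_shifts=8, max_shift=15):
    """Percentage of random shifts for which the prediction on the shifted copy
    matches the prediction on the original image."""
    base = model(images).argmax(dim=1)
    agree, total = 0, 0
    for _ in range(n_shifts):
        dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
        pred = model(torch.roll(images, shifts=(dx, dy), dims=(2, 3))).argmax(dim=1)
        agree += (pred == base).sum().item()
        total += images.size(0)
    return 100.0 * agree / total

@torch.no_grad()
def feature_shift_gap(backbone, x, shift=(1, 1)):
    """Eq. (15)-style unit test: smallest L2 gap between features of a shifted input
    and any feature-space translation of the original features; zero if equivariant.
    `backbone` is assumed to return a (B, C, H, W) feature map."""
    f = backbone(x)
    f_shift = backbone(torch.roll(x, shifts=shift, dims=(2, 3)))
    H, W = f.shape[-2:]
    gaps = [(f_shift - torch.roll(f, shifts=(a, b), dims=(2, 3))).norm().item()
            for a in range(H) for b in range(W)]
    return min(gaps)
```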
## 5 Related Work Data augmentationencourages shift-equivariance by adding shifted copies of images to the training set but lacks guarantees. In CNNs, it has been shown that models learn invariance to transformations only for images similar to typical training set images [1]. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Model** & **Acc. Crop** & **Acc. Flip** & **Acc. Affine** & **Worst-of-30 Batch 1** & **Worst-of-30** \\ \hline ViT\_S & 75.09 & 75.50 & 69.85 & 53.80 & 68.90 \\ ViT\_S-poly & **76.08** & **76.34** & 69.54 & **76.20** & **76.02** \\ \hline ViT\_B & 73.31 & 73.83 & 68.64 & 53.20 & 67.42 \\ ViT\_B-poly & **74.36** & **74.64** & **70.46** & **74.40** & **74.20** \\ \hline Twins\_B & 80.51 & 80.60 & 75.88 & 62.40 & 73.86 \\ Twins\_B-poly & 80.43 & 80.56 & **76.12** & **80.78** & **80.78** \\ \hline \hline \end{tabular} \end{table} Table 2: Robustness experiments on ImageNet1K Regularization during trainingencourages shift-equivariance and invariance by imposing soft constraints. A pretraining objective during self-supervised learning can be added to predict transformations applied to the input [6]. A loss function based on cross-correlation of embedded features encourages equivariance [15]. However, these regularizations do not guarantee shift-equivariance. Architectural designcan also result in shift-equivariance and invariance. For CNNs, anti-aliasing strategies [17] and adaptive polyphase sampling (APS) address the lack of shift-equivariance due to downsampling. The former lacks guarantees, while the latter is computationally expensive, requiring full convolution computations that can be problematic when applied to subsampled attentions. Subsampled attentions address the computational complexity of global attention; computing global attention before subsampling not only increases computation but also defeats the purpose of subsampled attentions. ## 6 Conclusions and Discussions In this paper, we revive shift equivariance in vision transformers by presenting versatile and adaptable modules that seamlessly integrate with vision transformer models, leading to improved performance. We introduce an input-adaptive nonlinear operator that guarantees shift-equivariance, resulting in more robust and dependable outcomes across a range of visual transformations. We provide theoretical guarantees of shift equivariance for strided convolution, window attention, and GSA that utilize polyphase anchoring. Through carefully controlled experiments, we demonstrate that vision transformers with polyphase anchoring and depthwise convolution achieve 100% consistency in classification, obtain an average of 20% percentage point gain (or 37% increase) under the worst-of-30 shift attack, and reach comparable or superior performance under cropping, horizontal flipping, and affine transformations. Due to the high computational cost of training from scratch, this paper, conducted in an academic setup, focuses on conducting controlled experiments for comparative analysis rather than optimizing for higher accuracy. Although the performance of the models may not achieve state-of-the-art standards, the primary goal of this paper is to investigate whether the geometric guarantees hold in practice and whether shift-equivariance results in more robust models. We defer the work of using industrial-scale computing resources to obtaining state-of-the-art performance to the future. 
Figure 3: The top figures display the output logits variance for ViT_S, Twins_B, and their shift-equivariant counterparts, while the bottom figures provide a comparison of shift-equivariance tests between ViT_S, Twins_B, and their respective shift-equivariant versions.
2310.15792
Three dimensional quotient singularity and 4d $\mathcal{N}=1$ AdS/CFT correspondence
We systematically study the AdS/CFT correspondence induced by D3 branes probing three dimensional Gorenstein quotient singularity $\mathbb{C}^3/G$. The field theory is given by the McKay quiver, which has a vanishing NSVZ beta function assuming that all the chiral fields have the $U(1)_R$ charge $\frac{2}{3}$. Various physical quantities such as quiver Hilbert series, superconformal index, central charges, etc are computed, which match exactly with those computed using the singularity. We also study the relevant deformation of those theories and find the dual geometry, therefore generate many new interesting AdS/CFT pairs. The quiver gauge theory defined using finite subgroups of $SO(3)$ group has some interesting features, for example, its Seiberg duality behavior is quite interesting.
Yuanyuan Fang, Jing Feng, Dan Xie
2023-10-24T12:42:21Z
http://arxiv.org/abs/2310.15792v1
# Three dimensional quotient singularity and 4d \(\mathcal{N}=1\) AdS/CFT correspondence ###### Abstract We systematically study the AdS/CFT correspondence induced by D3 branes probing three dimensional Gorenstein quotient singularity \(\mathbb{C}^{3}/G\). The field theory is given by the McKay quiver, which has a vanishing NSVZ beta function assuming that all the chiral fields have the \(U(1)_{R}\) charge \(\frac{2}{3}\). Various physical quantities such as quiver Hilbert series, superconformal index, central charges, etc are computed, which match exactly with those computed using the singularity. We also study the relevant deformation of those theories and find the dual geometry, therefore generate many new interesting AdS/CFT pairs. The quiver gauge theory defined using finite subgroups of \(SO(3)\) group has some interesting features, for example, its Seiberg duality behavior is quite interesting. ## 1 Introduction One of the best way to get AdS/CFT pairs is to use D branes or M2/M5 branes to probe a singularity \(X\) of manifold with special holonomy [1], and one gets the following dual pair: \[\mathcal{T}\longleftrightarrow AdS_{d+1}\times L\,.\] The superconformal field theory (SCFT) \(\mathcal{T}\) is realized as the IR field theories living on the branes, and the internal geometry \(L\) on the string/M theory side is given by the link of the singularity \(X\)[2]1. When \(X\) is smooth, one gets the maximal supersymmetric dual pairs studied by Maldacena [3]. An early example of using singularity was the dual pair studied by Klebanov and Witten [4] where they used three dimensional conifold singularity. Footnote 1: The link is often defined as the intersection of \(X\) with a sphere with radius one. The space of \(X\) which would give rise to the dual pair is very large if one is interested in the theory with the minimal number of supersymmetry. For example, it was conjectured in [5] that one can use three dimensional Gorenstein canonical singularity \(X\)[6] to get 4d \(\mathcal{N}=1\) AdS/CFT pair, and the space of such 3d singularities is very large, see [5]. Further studies of those dual pairs would be quite valuable for us to better understand gauge/gravity duality. However, we face two major problems in these studies: a): First, not every 3d canonical singularities would give us an AdS/CFT pair, as there are obstructions [7; 8] to the existence of a Ricci flat conic metric on \(X\), which is a necessary and sufficient condition for the existence of AdS/CFT pair; b): Even if \(X\) would give rise to a AdS/CFT pair, the field theory is quite difficult to define and most of them do not admit a conventional Lagrangian theory (i.e. the SCFT is not realized as the IR limit of a usual non-abelian gauge theory). When \(X\) is a three dimensional toric 2 singularity, it was shown in [9] that \(X\) is indeed stable and one can find the gauge theory by using the dimer technique [10; 11; 12]. Footnote 2: A 3d toric singularity admits three dimensional automorphism group. Another class of stable singularities is the so-called Gorenstein quotient singularities \(\mathbb{C}^{3}/G\), where \(G\) is a finite subgroup of \(SU(3)\). Such singularities were classified in [13], see Table 1, and the corresponding link \(L\) is an orbifold \(S^{5}/G\)3. The SCFT also admits a simple description by using the representation theory of the group \(G\), namely the so-called McKay quiver [14; 15; 16; 17]. So one can always get a nice AdS/CFT pair (see [18; 19; 20; 21; 22] for the early studies). 
Comparing with toric theories, these pairs are often simpler both on the geometry side and field theory side, i.e. many singularities \(X\) are given by hypersurface and the field theory might have just a few simple gauge groups. The McKay quiver also has quite nice features: the one loop beta function all vanishes (finite theory) assuming all the chiral fields do not receive anomalous dimension, and so it is similar to \(\mathcal{N}=4\) SYM and \(\mathcal{N}=2\) affine ADE quiver gauge theory. So those AdS/CFT provide an excellent class of examples from which one can learn deep lessons about field theory and string theory. Footnote 3: In general, the space \(S^{5}/G\) is not a smooth manifold. In this paper, we will perform a systematical study of those AdS/CFT pairs by using various mathematical and physical tools newly developed in the last few years: 1. One can compute a so-called quiver matrix Hilbert series [23; 24; 25; 26], from which one can extract a Hilbert series \(H_{00}(t)\). \(H_{00}(t)\) can be used to extract the dual geometry by identifying it as the Hilbert series of the singularity \(X\). 2. The superconformal index has a quite simple form in the large \(N\) limit [23; 27], and one can get the exact formula for those theories, from which the information of protected spectrum is derived. 3. The Seiberg duality [28] behaviors of the McKay quivers are richer, and the dual theory is no longer finite (the chiral fields receive anomalous dimensions). Geometrically, Seiberg duality is related to the flop of the crepant resolution of the singularity. Given the simple nature of the geometry, those pairs might teach us more lessons about Seiberg duality. 4. We also study the relevant deformation of these theories, and find the dual geometry by computing the quiver Hilbert series. In particular, the mass deformed \(\mathcal{N}=2\) affine ADE quiver is studied, where the dual geometry is proven to be: \[f_{ADE}(x,y,z)+w^{h}=0\,.\] Here \(f_{ADE}\) is the ADE singularity and \(h\) is the Coxeter number. This is the generalization of the conifold example [4] (see also [29; 30] for the early discussion). The dual geometry is nicer as it has only isolated singularity so the link \(L\) is smooth; and various geometric properties of \(L\) such as homology groups are computed. We also compute the Hilbert series and index for those theories. Deformed theory for theories defined using finite subgroup of \(SO(3)\) group is also studied, where the dual geometry is also written down. This paper is organized as follows: section 2 reviews the classification of finite subgroup \(G\) of \(SU(3)\) and how to get a McKay quiver from the representation theory of \(G\); section 3 reviews how to compute various physical quantities from the field theory and geometry side; section 4 studies the theory when \(G\) is a subgroup of \(SU(2)\); section 5 studies the theory when \(G\) is a finite subgroup of \(SO(3)\); Finally a conclusion is given in section 6. Appendix studies other sporadic theories when \(G\) is other finite subgroups of \(SU(3)\). ## 2 Finite subgroups of \(Su(3)\) and McKay quiver ### Finite subgroups of \(Su(3)\) The finite subgroups of \(SU(3)\) are classified in [13; 31], see the list in Table 1. Here are some comments: 1. \(\mathbb{C}^{3}/G\) has an isolated singularity at the origin if and only if \(G\) is an abelian group and \(1\) is not an eigenvalue for every element of \(G\). 2. 
Finite groups of \(SU(2)\) are contained in type \(B\), and the elements of \(G\) take the form \[g=\left(\begin{array}{cc}1&0\\ 0&g^{{}^{\prime}}\end{array}\right)\,.\] (1) Here \(g^{{}^{\prime}}\) is the element of finite subgroup of \(SU(2)\). They have an ADE classification. 3. Finite groups of \(SO(3)\) are identified as follows: 1. The cyclic group takes the form \(A(n,1)\). 2. The dihedral group is contained in the \(B\) series. 3. The tetrahedral group is identified as \(\Delta(12)\). 4. The octahedral group is isomorphic to \(\Delta(24)\). 5. The icosahedral group is isomorphic to \(H\). ### McKay quiver The quiver gauge theory associated with \(N\)\(D3\) branes probing the singularity \(\mathbb{C}^{3}/G\) can be derived using the McKay correspondence [14]. Let's denote the irreducible representation of \(G\) by \(\rho_{i}\) whose dimension is \(d_{i}\). The McKay quiver is found as follows: 1. First the quiver nodes are identified with the irreducible representations of \(G\): for an irreducible representation, assign a quiver node \(Q_{i}\) whose gauge group is \(SU(Nd_{i})\). 2. The number of quiver arrows from \(Q_{i}\) to \(Q_{j}\) is found from the following tensor product decomposition \[\rho_{i}\otimes\rho=\oplus_{j}a_{ij}\rho_{j}\,.\] Here \(\rho\) is the natural representation of \(G\): It is the three-dimensional representation on which \(G\) acts naturally and could be reducible or irreducible. The number of arrows is just \(a_{ij}\), and they point from quiver node \(i\) to node \(j\). We might also have adjoint chirals on \(i\)-th quiver node if \(a_{ii}\) is nonzero. \begin{table} \begin{tabular}{|c|c|} \hline Group & Order \\ \hline \(A(m,n)\sim\mathbb{Z}_{n}\times\mathbb{Z}_{m}\), \(n\) divides \(m\) & \(mn\) \\ \hline \(B\) Finite subgroups of \(U(2)\) & No general formulas \\ \hline \(C(n,a,b)\) & No general formulas \\ \hline \(D(n,a,b;d,r,s)\) & No general formulas \\ \hline \(\Delta(3n^{2})\sim C(n,0,1),\ n\geq 2\) & \(3n^{2}\) \\ \hline \(\Delta(6n^{2})\sim D(n,0,1;2,1,1),n\geq 2\) & \(6n^{2}\) \\ \hline \(T_{n}\sim C(n,1,a),\ (1+a+a^{2})mod\ n=0\) & \(3n\) \\ \hline \(E\) & \(108\) \\ \hline \(F\) & \(216\) \\ \hline \(G\) & \(648\) \\ \hline \(H\) & \(60\) \\ \hline \(I\) & \(168\) \\ \hline \(J\) & \(180\) \\ \hline \(K\) & \(504\) \\ \hline \(L\) & \(1080\) \\ \hline \end{tabular} \end{table} Table 1: Finite groups of \(SU(3)\). 3. The superpotential can often be determined by requiring the gauge invariance, conformal invariance, and the total \(R\) charge to be 2 for each term in the superpotential. For our theory, the \(U(1)_{R}\) charges for the chiral fields are all free (\(\frac{2}{3}\)), so the marginal interaction (total \(R\) charge 2) comes from oriented triangle (required by gauge invariance) in the quiver diagram. We simply add all possible such cubic terms to the superpotential. **Example**: Let's look at the group \(H\), and the required representation theory data is given in [32]. There are a total of 5 irreducible representations with label \(1_{0},3_{1},3_{2},4,5\), where the number of the label denotes the dimension of the representation. The tensor product is given by \[1_{0}\otimes\rho=3_{1},\ \ 3_{1}\otimes\rho=1_{0}+3_{1}+5,\ \ 3_{2} \otimes\rho=4+5,\] \[4\otimes\rho=3_{2}+4+5,\ \ 5\otimes\rho=3_{1}+3_{2}+4+5\,.\] So there are a total of 5 quiver nodes with rank \((1,3,3,4,5)\) for a single D3 brane probe. 
The number of arrows is \[a_{1_{0}3_{1}}=1,\ \ a_{3_{1}1_{0}}=1,\ \ a_{3_{1}3_{1}}=1,\ \ a_{3_{1}5}=1,\ \ a_{3_{2}4}=1,\ \ a_{3_{2}5}=1,\] \[a_{43_{2}}=1,\ \ a_{44}=1,\ \ a_{45}=1,\ \ a_{53_{1}}=1,\ \ a_{53_{2}}=1,\ \ a_{54}=1,\ \ a_{55}=1\,.\] The resulting quiver gauge theory is shown in the Figure 1. For \(N\) D3 branes, one just changes the gauge group ranks to \((N,3N,3N,4N,5N)\). #### 2.2.1 General properties of McKay quiver Let's discuss some general facts about the McKay quiver. We will show that the quiver gives rise to a 4d \(\mathcal{N}=1\) theory which does not have gauge anomaly, and the one loop beta function is zero if all the chiral fields have the \(U(1)_{R}\) charge \(\frac{2}{3}\) (no anomalous dimension). First, let's take a single D3 brane probe, so the McKay quiver has the dimension vectors \(d_{1},d_{2}\dots\), corresponding to the dimension of the irreducible representation of finite group \(G\). There is always a node corresponding to the trivial representation, and its rank is just 1. We now show that the one loop beta function of McKay quiver is zero. Given a finite subgroup \(G\) of \(SU(3)\) group, one get a quiver by looking at the following branching rule for the irreducible representations \[\rho_{i}\otimes\rho=\oplus_{j}a_{ij}\rho_{j},\] Figure 1: McKay quiver corresponding to group \(H\). where \(a_{ij}\) gives the number of arrows between \(i\)th and \(j\)th quiver node. One defines a matrix \(B\) as follows \[B_{ij}=3\delta_{ij}-a_{ij}\,.\] We then define a matrix \[A=B+B^{t}\,.\] Let's now multiply \(A\) by the dimension vector \(d^{T}=[d_{1},d_{2},\ldots]\), and so \[(Ad)_{i}=\sum_{j}(6\delta_{ij}-a_{ij}-a_{ji})d_{j}\,. \tag{2}\] We now show that the \(i\)th entry of \(Ad\) is proportional to the one-loop beta function of the \(i\)th gauge group, assuming the \(U(1)_{R}\) charge of all the chiral fields are \(\frac{2}{3}\) (the R charge of a free chiral). The reason is the following: as shown in Figure 2, for the \(i\)th gauge group with rank \(d_{i}\), there are \(d_{j}a_{ji}\) bi-fundamental chiral fields coming into it, \(a_{ij}d_{j}\) number of bi-fundamental chiral fields coming out of it, and \(a_{ii}\) number of adjoint chiral fields, so the one loop NSVZ beta function (if all the chiral fields has \(R\) charge \(\frac{2}{3}\)) is \[\beta_{1}=d_{i}+\sum_{j\neq i}[\frac{1}{2}(a_{ji}+a_{ij})d_{j}( \frac{2}{3}-1)]+a_{ii}d_{i}(\frac{2}{3}-1)\] \[=\frac{1}{6}[6d_{i}-\sum_{j\neq i}(a_{ij}+a_{ji})d_{j}-2a_{ii}d_{ i}]\] \[=\frac{1}{6}\sum_{j}(6\delta_{ij}-a_{ij}-a_{ji})d_{j}\,.\] Here we used the fact that the index for the fundamental representation is \(\frac{1}{2}\), and that for the adjoint representation is \(d_{i}\). Comparing with (2), we see that \((Ad)_{i}\) is just the one loop \(\beta\) function. We now prove that \(d\) is the eigenvector of \(A\), and the eigenvalue is zero. So \(Ad=0\), which means that the one-loop beta function all vanishes, if one assume all the chiral fields have \(R\) charge \(\frac{2}{3}\). **Proof**: Let's first recall some facts about characters [33]. 
One can define an inner product on the space of characters, and the characters for irreducible representations form an ortho-normal basis: \[\sum_{C}\zeta_{C}^{-1}\chi_{i}(C)\chi_{j}(C^{-1})=\delta_{ij},\] \[\sum_{j}\chi_{j}(C^{{}^{\prime}})\chi_{j}(C^{-1})=\delta_{CC^{{}^ {\prime}}}\zeta_{C}\,.\] Figure 2: The \(i\)th quiver node (with rank \(d_{i}\)) has \(a_{ij}\) chiral fields coming out of node, and \(a_{ji}\) chiral fields coming into the node, and \(a_{ii}\) adjoint chirals. Here the first sum is over the conjugacy class, and \(\zeta_{C}\) is the order for the centralizer of one of the elements in \(C\). For an arbitrary representation \(R\) of finite group \(G\), define a matrix \(b_{ij}\) by using the tensor product \[R\otimes\rho_{i}=\oplus b_{ij}\rho_{j}\,.\] Here \(\rho_{i}\) is the irreducible representation. Now \(b_{ij}\) can be represented by the inner product of the representation by using the orthogonal property of the character of the irreducible representation \[b_{ij}=\langle R\otimes\rho_{i},\rho_{j}\rangle=\sum_{C}\zeta_{C}^{-1}\chi_{R}( C)\chi_{i}(C)\chi_{j}(C^{-1})\,.\] We compute the action of maxtrix \(b\) on vector \(\chi(C)=[\chi_{1}(C),\chi_{2}(C),\ldots]\), here \(C\) is a fixed conjugacy class. \[\sum b_{ij}\chi_{j}(C)=\sum_{j,C^{{}^{\prime}}}\zeta_{C^{{}^{ \prime}}}^{-1}\chi_{R}(C^{\prime})\chi_{i}(C^{\prime})\chi_{j}(C^{{}^{\prime} -1})\chi_{j}(C)\] \[=\sum_{C^{{}^{\prime}}}\chi_{R}(C^{\prime})\chi_{i}(C^{\prime}) \delta_{CC^{{}^{\prime}}}=\chi_{R}(C)\chi_{i}(C)\,.\] So \(\chi(C)\) is the eigenvector of \(b\) with eigenvalue \(\chi_{R}(C)\). Let's now take \(C\) to be the conjugacy class of identity element, and so \(\chi(C)=d=[d_{1},d_{2},\ldots]\) are the dimension vector, and furthermore \(R=3-\rho\) (\(\rho\) is the natural representation of \(G\) on \(\mathbb{C}^{3}\) ) so that \(b_{ij}=B\), and the eigenvalue is just zero! Similarly, if one take \(R=3-\rho^{*}\) (where \(\rho^{*}\) is the conjugate representation of \(\rho\)), then the matrix \(b_{ij}=(B^{T})_{ij}\), and we find the dimension vector is also the eigenvector of the matrix \(B^{T}\) whose eigenvalue is zero. The above proof also shows that the Mckay quiver does not have non-abelian gauge anomaly, as \[0=(Bd-B^{T}d)_{i}=\sum_{j\neq i}a_{ij}d_{j}-\sum_{j\neq i}a_{ji}d_{j}\,.\] Here the first sum counts the number of chirals coming out of the gauge group (fundamental representation), and the second term counts the number of chiral coming into the gauge group (anti-fundamental representation). The above formula shows that the number of fundamental chirals are equal to anti-fundamental chirals, and so the gauge anomaly vanishes. ## 3 AdS/CFT correspondence We will review various physical correspondence appeared in the 4d \(\mathcal{N}=1\) AdS/CFT correspondence, and emphasize many explicit formulas. The correspondence is summarized in Table 2. ### Field theory data **Deformation of \({\cal N}=1\) SCFT**: The bosonic symmetry group of a \({\cal N}=1\) superconformal algebra is \(SO(2,4)\times U(1)_{R}\times G_{F}\). Here \(SO(2,4)\) is the conformal group of four-dimensional Minkowski spacetime, \(U(1)_{R}\) is the R symmetry group, and \(G_{F}\) are other continuous global symmetry groups. A highest weight representation is labeled as \(|\Delta,r,j_{1},j_{2}\rangle\), where \(\Delta\) is the scaling dimension, \(r\) is \(U(1)_{R}\) charge, \(j_{1}\) and \(j_{2}\) are left and right spin. These states might also carry quantum numbers of flavor symmetry group \(G_{F}\). 
Two important short representations are chiral multiplets and multiplets for conserved currents: \[\begin{array}{l}{\cal B}_{r,(j_{1},0)},\ \Delta=\frac{3}{2}r,\\ \hat{\cal C}_{(j_{1},j_{2})},\ r=j_{1}-j_{2},\ \Delta=2+j_{1}+j_{2}\,.\end{array} \tag{11}\] Among the multiplets for conserved currents, \(\hat{\cal C}_{(0,0)}\) contains conserved currents for the global symmetry group \(G_{F}\), \(\hat{\cal C}_{(\frac{1}{2},0)}\) contains other supersymmetry currents, \(\hat{\cal C}_{(\frac{1}{2},\frac{1}{2})}\) contains energy-moment tensor and \(U(1)_{R}\) current, thus \(\hat{\cal C}_{(\frac{1}{2},\frac{1}{2})}\) exists for any \({\cal N}=1\) SCFT.[5] There are only two kinds of supersymmetric deformations for \(4d\)\({\cal N}=1\) SCFT. They can be described as integrating chiral operators over half of the superspace or integrating generic operators over all of the superspace.[34] For chiral multiplets, we have the following F term relevant or marginal deformation: \[\delta S=\int d^{2}\theta\lambda{\cal B}_{r,(0,0)}+c.c,\quad r\leq 2\,. \tag{12}\] Here \(\lambda\) is a coupling constant. By using conserved current, we also have the following D-term deformations: \[\delta S=\int d^{2}\theta d^{2}\bar{\theta}\Lambda\hat{\cal C}_{(0,0)}\,. \tag{13}\] This is the only relevant or marginal deformation that can be derived from the \(\hat{\cal C}\) type operator. Chiral operators form a ring called the Chiral ring. One can also have continuous moduli space of vacua whose structure is largely determined by the chiral ring [35]. \begin{table} \begin{tabular}{|c|c|} \hline Field theory \({\cal T}\) & Geometry \(X\) \\ \hline Quiver & Non-commutative crepant resolution (NCCR) \\ \hline Quiver Hilbert series \(H_{00}(t)\) & Hilbert series \(H(t)\) \\ \hline Central charges \(a,c\) & Asymptotical expansion of \(H(t)\) \\ \hline Superconformal index & Kluza-Klein (KK) analysis \\ \hline Seiberg duality & Derived equivalence of NCCR \\ \hline Relevant deformation & Modifying geometry \\ \hline Exact marginal deformation & Moduli space of Ricci-flat metric on \(X\) \\ \hline \end{tabular} \end{table} Table 2: AdS/yCFT correspondence for the field theory data and geometry data. **Central charges \(a,c\)**: The central charges \(a\) and \(c\) of the SCFT's can be computed from the 't Hooft anomalies \(\operatorname{Tr}R\) and \(\operatorname{Tr}R^{3}\)[36]: \[a=\frac{3}{32}(3\operatorname{Tr}R^{3}-\operatorname{Tr}R),\quad c=\frac{1}{32} (9\operatorname{Tr}R^{3}-5\operatorname{Tr}R)\,.\] For example, the 't Hooft anomalies of a gauge theory with \(SU(N)\) gauge group, and coupled with \(n_{i}\) copies of chiral fields in a representation with dimension \(r_{i}\). If the \(U(1)_{R}\) charge of the matter in \(r_{i}\) representation is \(R_{i}\), then the anomaly is given as [37]: \[\operatorname{Tr}R =(N^{2}-1)+\sum_{i}n_{i}r_{i}(R_{i}-1), \tag{10}\] \[\operatorname{Tr}R^{3} =(N^{2}-1)+\sum_{i}n_{i}r_{i}(R_{i}-1)^{3}\,.\] Here the first term is the contribution from the fermion in vector multiplets, and the second term counts the contribution from the chiral multiplets. The \(U(1)_{R}\) charge of the chiral multiplet is further constrained by the vanishing of the one-loop beta function, and each term in the superpotential should have \(U(1)_{R}\) charge two. 
For the McKay quiver described in last section, the \(R\) charges of all chiral multiplets are equal to \(\frac{2}{3}\), then the anomalies are \[\operatorname{Tr}R =\sum_{i}(d_{i}^{2}N^{2}-1)+\sum_{i\neq j}(\frac{2}{3}-1)a_{ij}d_ {i}d_{j}N^{2}+\sum_{i}(\frac{2}{3}-1)a_{ii}(d_{i}^{2}N^{2}-1),\] \[\operatorname{Tr}R^{3} =\sum_{i}(d_{i}^{2}N^{2}-1)+\sum_{i\neq j}(\frac{2}{3}-1)^{3}a_{ ij}d_{i}d_{j}N^{2}+\sum_{i}(\frac{2}{3}-1)^{3}a_{ii}(d_{i}^{2}N^{2}-1)\,.\] The \(R\) charge condition for the vanishing one-loop beta function is \[6d_{i}-\sum_{j}(a_{ij}+a_{ji})d_{j}=0\,.\] Thus \[\operatorname{Tr}R =\sum_{i}(d_{i}^{2}N^{2}-1)+\frac{1}{2}\sum_{i\neq j}(\frac{2}{3 }-1)(a_{ij}+a_{ji})d_{i}d_{j}N^{2}+\sum_{i}(\frac{2}{3}-1)a_{ii}(d_{i}^{2}N^{2 }-1)\] \[=\sum_{i}(d_{i}^{2}N^{2}-1)+\frac{1}{2}\sum_{i\neq j}(\frac{2}{3 }-1)((a_{ij}+a_{ji})d_{i}d_{j}+2a_{ii}d_{i}^{2})N^{2}+\sum_{i}(\frac{2}{3}-1) a_{ii}(-1)\] \[=\sum_{i}(d_{i}^{2}N^{2}-1)+\frac{1}{2}\sum_{i}(\frac{2}{3}-1)(6 d_{i})d_{i}N^{2}+\sum_{i}(\frac{2}{3}-1)a_{ii}(-1)\] \[=\sum_{i}(d_{i}^{2}N^{2}-1)-\sum_{i}d_{i}^{2}N^{2}+\frac{1}{3}L= -Q_{0}+\frac{1}{3}L,\] \[\operatorname{Tr}R^{3} =\sum_{i}(d_{i}^{2}N^{2}-1)-\sum_{i}\frac{1}{9}d_{i}^{2}N^{2}+ \frac{1}{27}L=\frac{8}{9}\sum_{i}d_{i}^{2}N^{2}-Q_{0}+\frac{1}{27}L\,.\] Here \(L\) is the number of adjoint multiplets and \(Q_{0}\) is the number of gauge groups. So the central charges are \[\boxed{a=\frac{3}{32}(3\operatorname{Tr}R^{3}-\operatorname{Tr}R)= \frac{1}{4}\sum_{i}d_{i}^{2}N^{2}-\frac{1}{48}(L+9Q_{0}),}\] \[c=\frac{1}{32}(9\operatorname{Tr}R^{3}-5\operatorname{Tr}R)= \frac{1}{4}\sum_{i}d_{i}^{2}N^{2}-\frac{1}{24}(L+3Q_{0})\,.\] **Example:** Let's consider the quiver gauge theory shown in Figure 3. * Vanishing of NSVZ one-loop beta function: \[N+\frac{1}{2}(R(A)-1)3N+\frac{1}{2}(R(a)-1)3N=0,\] \[N+\frac{1}{2}(R(B)-1)3N+\frac{1}{2}(R(b)-1)3N=0,\] \[N+\frac{1}{2}(R(C)-1)3N+\frac{1}{2}(R(c)-1)3N=0,\] \[3N+3N(R(u)-1)+3N(R(v)-1)+\frac{1}{2}(R(A)-1+R(a)-1)N\] \[+\frac{1}{2}(R(B)-1+R(b)-1)N+\frac{1}{2}(R(C)-1+R(c)-1)N=0\,.\] So if all the \(R\) charges for the chiral fields are \(\frac{2}{3}\), the \(R\) charge condition for the vanishing one-loop beta function and the superpotential are automatically satisfied. * Let's now compute the 't Hooft anomaly by using the above \(U(1)_{R}\) charges: \[\operatorname{Tr}R=3(N^{2}-1)+(3N)^{2}-1+3N^{2}(\frac{2}{3}-1) \times 6+(9N^{2}-1)(\frac{2}{3}-1)\times 2=-\frac{10}{3},\] \[\operatorname{Tr}R^{3}=3(N^{2}-1)+((3N)^{2}-1)+3N^{2}(\frac{2}{3 }-1)^{3}\times 6+(9N^{2}-1)(\frac{2}{3}-1)^{3}\times 2\] \[=\frac{2(144N^{2}-53)}{27}\,.\] So the central charges \(a,c\) are: \[a=\frac{3}{32}(3\operatorname{Tr}R^{3}-\operatorname{Tr}R)=3N^{2}-\frac{19}{ 24},\quad c=\frac{1}{32}(9\operatorname{Tr}R^{3}-5\operatorname{Tr}R)=3N^{2} -\frac{7}{12}\,.\] Figure 3: McKay quiver corresponding to Tetrahedral group. \(a\) **maximization and \(U(1)_{R}\) charge**: To compute the central charges using the anomaly, one needs to identify the correct \(R\) symmetry. Often the IR \(U(1)_{R}\) symmetry can not be simply identified with the \(U(1)_{R}\) symmetry of the UV theory, it might mix with other abelian global symmetries. One can find out the correct IR \(R\) symmetry by using \(a\) maximization [37]. The idea is to write the trial \(R_{trial}\) as follows: \[R_{trial}=R_{UV}+\sum s_{I}F_{I},\] and then maximize the trial central charge \(a_{trial}\) to determine the coefficient \(s_{I}\). 
One must be careful that it is crucial to include all anomaly free symmetries including those which can not be seen from the UV theory, i.e. accidental symmetry. #### 3.1.1 Superconformal index One can define a superconformal index for 4d \(\mathcal{N}=1\) theory which can be used to extract information about the protected operators [38]. Here we summarize the simple large \(N\) index for the quiver gauge theory proposed in [23; 27], and we will clarify the issue for the choice of \(U(N)\) or \(SU(N)\) gauge group. First, we have the single letter indices for \(\mathcal{N}=1\) vector multiplet, boson and fermion contribution in a chiral multiplet: \[\begin{split} i_{V}(t,y)&=\frac{2t^{6}-t^{3}\left(y +\frac{1}{y}\right)}{\left(1-t^{3}y\right)\left(1-t^{3}y^{-1}\right)}\,,\\ i_{\phi(r)}(t,y)&=\frac{t^{3r}}{\left(1-t^{3}y \right)\left(1-t^{3}y^{-1}\right)}\,,\\ i_{\bar{\psi}(r)}(t,y)&=\frac{-t^{3(2-r)}}{\left(1 -t^{3}y\right)\left(1-t^{3}y^{-1}\right)}\,,\end{split} \tag{3.5}\] where \(r\) is the R charge for chiral multiplet (the \(R\) charge of top component). Then given a quiver gauge theory, we can write down a single letter index matrix \(i(t,y)\), whose \((ij)\) entry contains the contribution of the chiral fields in bi-fundamental representation (\(i\neq j\)), and those chirals in adjoint representation, and contribution of vector multiplets: \[i(t,y)_{ij} =\sum_{a_{ij}}i_{\phi(r)}(t,y)+\sum_{a_{ji}}i_{\bar{\psi}(r)}(t,y ),\text{ when }i\neq j\] \[i(t,y)_{ii} =i_{V}+\sum_{a_{ii}}i_{\phi(r)}(t,y)+\sum_{a_{ii}}i_{\bar{\psi}(r )}(t,y)\,.\] One can find the single trace index from matrix \(i(t,y)\) as follows [27] (assuming the gauge group is \(U(N)\)): \[\mathcal{I}_{\text{s.t.}}=-\sum_{k=1}^{\infty}\frac{\varphi(k)}{k}\log[\det \Bigl{(}1-i(t^{k},y^{k})\Bigr{)}], \tag{3.6}\] where \(\varphi(n)\) is the Euler Phi function and we can use the following property to calculate it \[\sum_{k=1}^{\infty}\frac{\varphi(k)}{k}\log\Bigl{(}1-x^{k}\Bigr{)}=\frac{-x}{ 1-x}\,. \tag{3.7}\] The matrix \(i(t,y)\) could be simplified as follows [39]. Notice that \[1-i_{V}(t,y)=\frac{1-t^{6}}{\left(1-t^{3}y\right)\left(1-t^{3}y^{-1}\right)},\] so the matrix \(1-i(t,y)\) can be written as \[1-i(t,y)=\frac{\chi(t)}{\left(1-t^{3}y\right)\left(1-t^{3}y^{-1}\right)}\] with \(\chi(t)\) the matrix \[\boxed{\chi(t)=1-M_{Q}(t)+t^{6}M_{Q}^{T}(t^{-1})-t^{6}}\,, \tag{3.8}\] and \(M_{Q}(t)\) is the matrix counting the \(R\) charge of the chiral fields: 1. If \(i\neq j\), the matrix element \(M_{ij}\) of \(M_{Q}\) is \[M_{ij}=\sum_{\text{bifundamental chiral fields in}(\mathbb{N}_{i},\bar{ \mathbb{N}}_{j})}t^{3R_{ij}}\,.\] 2. If \(i=j\), the matrix element \(M_{ii}\) is \[M_{ii}=\sum_{\text{adjoint chiral fields}}t^{3R_{ii}}\,.\] The matrix \(\chi(t)\) was also discovered by Ginzburg in [23]. The single trace is simplified as \[\mathcal{I}_{\text{s.t.}}=-\sum_{k=1}^{\infty}\frac{\varphi(k)}{k}\log[ \text{det}\Big{(}\chi(t^{k})\Big{)}]+Q_{0}i_{V}\,. \tag{3.9}\] Here \(Q_{0}\) is the number of quiver nodes. Next, we would like to take the gauge group as \(SU\) type instead of \(U\) type, and we need to subtract the contributions of the free fields in the IR. 
Firstly, there is a \(U(1)\) vector multiplet for each gauge group, so the second term in (3.9) is subtracted; Secondly, there might be free chirals (For example, if there are adjoint chirals, then the traceless part of it is free and decoupled); Thirdly, there could also be some other free chirals which one need to subtract (sometimes one can find them from the explicit \(U(1)_{R}\) charge in the quiver gauge theory), and the best way to find them is to expand the first term in 3.9 and detect the number of free chirals. So the final single trace index takes the form \[\boxed{\mathcal{I}_{\text{s.t.}}=-\sum_{k=1}^{\infty}\frac{\varphi(k)}{k}\log [\text{det}\Big{(}\chi(t^{k})\Big{)}]-\text{free chirals}}\,. \tag{3.10}\] An even subtle situation is that one finds from the \(U\) type index the contribution of fields violating the unitarity bound, then one needs to be more careful. **Example**: Let's consider the quiver gauge theory in Figure 3. The matrix \(M_{Q}\) is \[M_{Q}=\begin{pmatrix}0&0&t^{2}&0\\ 0&0&t^{2}&0\\ t^{2}&t^{2}&2t^{2}&t^{2}\\ 0&0&t^{2}&0\end{pmatrix}\,.\] Here the \(R\) charges of the chiral fields are all \(\frac{2}{3}\), and the matrix \(\chi(t)\) is \[\chi(t)=\begin{pmatrix}1-t^{6}&0&t^{4}-t^{2}&0\\ 0&1-t^{6}&t^{4}-t^{2}&0\\ t^{4}-t^{2}&t^{4}-t^{2}&1-t^{6}+2(t^{4}-t^{2})&t^{4}-t^{2}\\ 0&0&t^{4}-t^{2}&1-t^{6}\end{pmatrix}\,.\] Then we can calculate the single trace index (for \(U\) type gauge group) (see 3.9) \[\begin{split}&\det(\chi(t))=(1-t^{2})^{2}(1-t^{4})^{2}(1-t^{6})^{ 2}\,,\\ &\mathcal{I}_{\text{s.t.}}=-\sum_{k=1}^{\infty}\frac{\varphi(k)}{ k}\log[(1-t^{2k})^{2}(1-t^{4k})^{2}(1-t^{6k})^{2}]+4i_{V}\\ &=2\frac{t^{2}}{1-t^{2}}+2\frac{t^{4}}{1-t^{4}}+2\frac{t^{6}}{1-t^{6 }}+4i_{V}\,.\end{split} \tag{3.11}\] To get the index for \(SU\) type gauge group, we notice that there are two free chirals from the two adjoints, so the index for \(SU\) type theory is \[\mathcal{I}_{\text{s.t.}}=2\frac{t^{2}}{1-t^{2}}+2\frac{t^{4}}{1-t^{4}}+2 \frac{t^{6}}{1-t^{6}}-2\frac{t^{2}-t^{4}}{(1-t^{3}y)(1-t^{3}y^{-1})}\,.\] The full index can be computed by calculating plethystic exponential of the single trace index, i.e. \[\mathcal{I}(t,y)=PE(\mathcal{I}_{\text{s.t.}}(t,y))=\exp^{\sum_{n}\frac{1}{n} \mathcal{I}_{\text{s.t.}}(t^{n},y^{n})},\] and the first fewer terms in the expansion could tell us the information of relevant and exact marginal deformations. For example, the expansion of the full index of the above theory is \[\mathcal{I}_{\text{full}}=1+6t^{4}-2t^{5}\left(y+\frac{1}{y}\right)+4t^{6}+O \left(t^{7}\right)\,.\] So there should be 6 chiral scalar operators with \(U(1)_{R}\) charge \(\frac{4}{3}\) (those can be identified as \(\text{Tr}(Aa),\text{Tr}(Bb)\) etc, and 4 marginal operators. Notice that the conserved current for the global symmetry and the marginal chiral scalar operator give opposite contributions to the index [34; 40], so one can get the information of the exact marginal operators (one needs to include the flavor fugacity to the full index to determine exactly the number of exact marginal operators). #### 3.1.2 Quiver Hilbert series and dual geometry Given a quiver gauge theory, we can calculate a matrix Hilbert series [24] as follows: \[H(Q,t)=\frac{1}{1-M^{\prime}_{Q}(t)+t^{2}{M^{\prime}_{Q}}^{T}(t^{-1})-t^{2}}\,.\] Here adjacent matrix \(M^{\prime}_{Q}\) can be read from the quiver and R-charges: 1. 
If \(i\neq j\), the matrix element \(M^{\prime}_{ij}\) of \(M^{\prime}_{Q}\) is \[M^{\prime}_{ij}=\sum_{\text{bifundamental chiral fields in}(\mathbb{N}_{i}, \bar{\mathbb{N}}_{j})}t^{R_{ij}}\,.\] 2. If \(i=j\), the matrix element \(M^{\prime}_{ii}\) is \[M^{\prime}_{ii}=\sum_{\text{adjoint chiral fields}}t^{R_{ii}}\,.\] Here \(R_{ij}\) is the \(R\) charge of a bifundamental chiral field in \((\mathbb{N}_{i},\bar{\mathbb{N}}_{j})\). The meaning of the entry \(H_{ij}\) counts the oriented path from the node \(i\) to node \(j\) with the \(R\) charge grading. The Hilbert series \(H_{00}(Q,t)\) is the 00-component of the matrix \(H(Q,t)\), which counts the gauge invariant scalar operators (closed loop in the quiver which passes through node 0). In our case, there is always a node corresponding to the trivial representation, which we take as the node 0. The \(H_{00}\) is conjectured to be identified with the Hilbert series of the dual geometry, from which one may guess the dual geometry. **Example**: Let's look at the quiver shown in Figure 3, the matrix \(M^{\prime}_{Q}\) is \[M^{\prime}_{Q}=\left(\begin{array}{cccc}0&0&t^{2/3}&0\\ 0&0&t^{2/3}&0\\ t^{2/3}&t^{2/3}&2t^{2/3}&t^{2/3}\\ 0&0&t^{2/3}&0\end{array}\right)\] and so the matrix Hilbert series is \[H_{00}=\frac{1-t^{8}}{(1-t^{2})(1-t^{8/3})(1-t^{4})(1-t^{4/3})}\,.\] \(H_{00}\) takes the same form as that of a three dimensional hypersurface singularity, and the weights read \((d;x,y,z,w)=(1;\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{6})\). The geometry might take the following form: \[x^{2}+y^{3}+z^{4}+w^{2}y^{2}+w^{2}z^{2}+wyz^{2}=0\,.\] Of course, one cannot completely determine the singularity just from the Hibert series. More information is needed to find out the dual geometry. The quiver Hilbert series \(H_{00}\) counts chiral scalar operators formed by loops passing through node 0. For example, the first fewer terms in the \(t\) expansion of \(H_{00}\) would tell us some information about the relevant and marginal deformations: \[H_{00}(t)=1+t^{4/3}+t^{2}+2t^{8/3}+O\left(t^{10/3}\right)\,.\] This tells us that the theory has a relevant operator with \(R\) charge \(\frac{4}{3}\), and a marginal operator with \(R\) charge 2. #### 3.1.3 Seiberg duality Seiberg duality identifies the IR SCFT with two different UV non-abelian gauge theories. The original example involves \(SU(N_{c})\) gauge theory coupled with \(N_{f}\) fundamental matter \((Q,\tilde{Q})\). The dual theory is \(SU(N_{f}-N_{c})\) coupled with \(N_{f}\) fundamental matter, and additionally there is uncharged chiral multiplet \(M\) coupled through a cubic superpotential. The basic idea of Seiberg duality can be generalized to quiver gauge theories. The rules of Seiberg duality for the quiver are summarized as follows [28, 41, 42]. Here we consider the Seiberg duality acting on a node \(k\) with gauge group \(SU(N)\): 1. Add a new arrow \([ab]\) from node \(i\) and node \(j\) for each oriented path \(i\to k\to j\), and the corresponding chiral fields are labeled as \(\begin{array}{c}\includegraphics[width=14.226378pt]{images/1.eps}\end{array}\). 2. Change the rank of gauge group as \(N\to F-N\). Here \(F\) is the number of bi-fundamental fields on node \(k\) divided by two. 3. Change the direction of all the arrows attached to this node \(k\): \(a\to a^{*},b\to b^{*}\). 4. The superpotential is changed as \(W^{\prime}=[W]+\Delta\): \([W]\) is defined by replacing all fields combination \(ab\) appearing in \(W\) by the new field \([ab]\), and \(\Delta=\sum[ab]b^{*}a^{*}\). 5. 
Do the reduction by integrating out massive fields. Let's give a simple example first. The original quiver is shown in Figure 4, and the superpotential is \(W=abc\). Now we do the Seiberg duality on the node of rank \(N\). We should replace \(a,b\) by \(a^{*},b^{*}\) and add a new field \([ab]\). The rank of the gauge group is changed to \(\frac{F_{1}+F_{2}}{2}-N\). The resulting quiver diagram is shown in Figure 5. The new superpotential is \[W=abc\to W^{\prime}=[W]+\Delta=[ab]c+[ab]b^{*}a^{*}\,.\] Finally, we do the reduction by solving for the fields \([ab]\) and \(c\) in terms of the other fields: \[\frac{\partial W^{\prime}}{\partial[ab]}=c+b^{*}a^{*}=0,\qquad\frac{\partial W^{\prime}}{\partial c}=[ab]=0\,.\] The final quiver is shown in Figure 6 and the dual superpotential is \(W=0\).

Figure 4: A quiver with superpotential \(W=abc\). Seiberg duality is done on the node with rank \(N\).

Figure 5: The dual quiver after doing Seiberg duality on the central node of Figure 4.

For a more complicated example, we take a look at the quiver gauge theory \(\mathbf{Q}\) in Figure 7. We consider the Seiberg duality acting on the node \(*\). The superpotential is \[W=uAa+\omega uBb+\omega^{2}uCc-\frac{1}{3}u^{3}-vAa-\omega^{2}vBb-\omega vCc+\frac{1}{3}v^{3}, \tag{3.12}\] where \(\omega=e^{2\pi i/3}\) is a constant. According to the procedure outlined above, we first add a new adjoint field \([Bb]\) on the middle node, and change the rank of \(*\) to \(2N\). Then we change the direction of \(B\) and \(b\), and denote them as \(B^{*},b^{*}\) respectively. To get the new superpotential \(W^{\prime}\), we replace the term \(Bb\) in \(W\) by \([Bb]\) and add the \([Bb]B^{*}b^{*}\) term: \[W^{\prime}=[W]+\Delta=uAa+\omega u[Bb]+\omega^{2}uCc-\frac{1}{3}u^{3}-vAa-\omega^{2}v[Bb]-\omega vCc+\frac{1}{3}v^{3}+[Bb]B^{*}b^{*}\,.\] Next we do the reduction by integrating out massive superfields. Taking partial derivatives, we obtain equations linear in \([Bb]\) and \(v\): \[\begin{split}\frac{\partial W^{\prime}}{\partial[Bb]}&=\omega u-\omega^{2}v+B^{*}b^{*}=0,\\ \frac{\partial W^{\prime}}{\partial v}&=-Aa-\omega^{2}[Bb]-\omega Cc+v^{2}=0\,.\end{split} \tag{3.13}\] Then we can express \([Bb]\) and \(v\) in terms of the other fields, \[\begin{split} v&=\omega^{2}u+\omega B^{*}b^{*}\,,\\ [Bb]&=-\omega Aa-\omega^{2}Cc+\omega v^{2}\,,\end{split} \tag{3.14}\] and rename \(B^{*},b^{*}\) as \(B,b\). Thus the final result is \[W^{\prime}\to W^{\prime}=(1-\omega^{2})Aau-\omega AaBb+(\omega^{2}-1)Ccu-\omega^{2}CcbB+\omega^{2}Bbu^{2}+\omega(Bb)^{2}u+\frac{1}{3}(Bb)^{3}\,.\] The whole process is shown in Figure 7.

Figure 6: The final quiver after the reduction on the quiver depicted in Figure 5.

Figure 7: Seiberg duality of the quiver of the Tetrahedral group. Here the duality is done on the star node.

### Gravity side

Let's now turn to the gravity side, where much of the information is contained in the geometry of the singularity \(X\). We constrain \(X\) to be a Gorenstein canonical singularity with a \(\mathbb{C}^{*}\) action, and furthermore it is required to be stable [8; 43]. Since the theory is determined by the local behavior of the singularity, one just needs to know the affine ring attached to the singularity.

\(U(1)_{R}\) **symmetry and \(\mathbb{C}^{*}\) action**: A 4d \(\mathcal{N}=1\) SCFT has a \(U(1)_{R}\) symmetry, which is identified with the \(\mathbb{C}^{*}\) action on \(X\); therefore one gets a graded affine ring from the dual geometry.
Since the embedding coordinates would give rise to chiral operators of the field theory, the weights of the \(C^{*}\) action on the embedding coordinates should be positive. To identify the normalization constant, one needs to impose the generators for the volume form \(\Omega\) to have \(R\) charge two \[[\Omega]=2\,.\] Such homogeneous generator \(\Omega\) exists as \(X\) is assumed to be Gorenstein. **Hilbert series and central charges**: Given a graded ring \(R\), one can define a Hilbert series as follows \[H(t)=\sum\text{dim}H_{\alpha}t^{\alpha}\,.\] Here \(\text{dim}(H_{\alpha})\) denotes the dimension of the elements with \(C^{*}\) charge \(\alpha\). If \(R\) is a complete intersection, the Hilbert series takes a simple form: \[H(t)=\frac{(1-t^{d_{1}})\cdots(1-t^{d_{n}})}{(1-t^{w_{1}})\cdots(1-t^{w_{s}})}\] where \(d_{i},i=1,\cdots,n\) are degrees of defining ideal \(f_{i}\), and \(w_{j},j=1,\cdots,s\) are weights of the coordinates \(x_{j}\). For example, let's take \(X\) to be given by the hypersurface singularity \(x^{2}+y^{2}+z^{4}+w^{4}=0\). Then the coordinate ring of \(X\) is \(R_{X}=\frac{\mathbb{C}[x,y,z,w]}{\{x^{2}+y^{2}+z^{4}+w^{4}=0\}}\). The weights of \(x,y,z,w\) are \((x,y,z,w)=(2,2,1,1)\), and the weight of the hypersurface is 4. The Hilbert series is \(H(t)=\frac{1-t^{4}}{(1-t)^{2}(1-t^{2})^{2}}\). The operators with different degrees in the graded ring X are listed as follows: \[\begin{array}{ll}H_{1}:z,w&\text{dim}H_{1}=2\\ H_{2}:z^{2},w^{2},zw,x,y&\text{dim}H_{2}=5\\ H_{3}:z^{3},w^{3},zw^{2},z^{2}w,xz,xw,yz,yw&\text{dim}H_{3}=8\end{array}\] Then we can expand the Hilbert series and check the first few terms: \[\begin{array}{l}H(t)=\frac{1-t^{4}}{(1-t)^{2}(1-t^{2})^{2}}=1+2t+5t^{2}+8t^{ 3}+\cdots\\ =1+\text{dim}H_{1}t+\text{dim}H_{2}t^{2}+\text{dim}H_{3}t^{3}+\cdots=\sum_{ \alpha}\text{dim}H_{\alpha}t^{\alpha}\,.\end{array}\] The central charge \(a,c\) can be extracted from the Hilbert series as follows [44]. Let's define \(t=\exp(-s)\), then \(H(t)\) has the expansion \[H(\exp(-s))=\frac{a_{0}}{s^{3}}+\frac{a_{1}}{s^{2}}+\ldots\,.\] Then the central charge of the dual theory is given as \[a=c=\frac{27}{32}\frac{1}{a_{0}}N^{2}\,.\] Notice that this central charge is just the value in the leading order of \(N\). When there are more than one candidate \(U(1)_{R}\) symmetry, one needs to minimize the trial central charge \(a_{0}(R_{i})\) to find the correct \(U(1)_{R}\) symmetry [44], where \(R_{i}\) is some unfixed \(R\) charge. When \(X\) defines an isolated singularity whose link \(L\) gives the internal geometry of the dual SCFT, the elements in the coordinate ring of \(X\) would give the chiral scalar operators, see [5]. **Example**: Consider a quasi-homogeneous hypersurface \(f\) whose weights are \((w_{1},w_{2},w_{3},w_{4};d)\), and the top form is \[\Omega=\frac{dx\wedge dy\wedge dz\wedge dw}{df}\,.\] To require the weight of \(\Omega\) to be two, the normalization constant must be \[(w_{1}+w_{2}+w_{3}+w_{4}-d)\delta=2\ \ \rightarrow\ \delta=\frac{2}{w_{1}+w_{2}+w_{3}+w_{4}-d }\,. \tag{3.15}\] By computing the Hilbert series and doing the expansion, we find the coefficient \(a_{0}\) is \[a_{0}=\frac{d}{w_{1}w_{2}w_{3}w_{4}}\delta^{-3}\,.\] So the central charge is given as \[a=c=\frac{27}{32}\frac{w_{1}w_{2}w_{3}w_{4}}{d}\delta^{3}N^{2}\,. 
\tag{3.16}\]

**Exact marginal deformations**: One can find exact marginal deformations from the geometry side as follows: (a) there is always the type IIB axio-dilaton field \(\tau\); (b) there are deformations coming from \(H^{2}(L)\) of the link \(L\), which can be computed using the data of \(X\); (c) if there is a moduli space of Ricci-flat metrics, then one gets exact marginal deformations from those moduli. If \(X\) is defined as a hypersurface \(f\), one can easily find those deformations by counting deformations with the same weight as \(f\).

**Baryonic symmetry**: There are other global symmetries besides the \(U(1)_{R}\) symmetry. On the gravity side, these come from massless gauge fields in the bulk. A simple source of such a symmetry is again a nonzero value of \(H^{2}(L)\).

**Index and KK analysis**: To get information on \(\frac{1}{4}\) BPS operators, one needs to perform a detailed Kaluza-Klein (KK) analysis, see [39; 45; 46] for the case when \(L\) is smooth. In our case, most of the dual geometries \(S^{5}/G\) have singularities, so extra care is needed. The details will appear elsewhere.

#### 3.2.1 Smoothing of the singularity

**Deformation and new AdS/CFT pair**: One can smooth the singularity \(X\) (making it less singular) by changing the defining equation. In some cases, one might find a new geometry \(X^{\prime}\) which also has a nice dual SCFT. Sometimes the change of the geometry can be described by a relevant deformation of the original field theory. A typical example is the affine \(A_{1}\) \(\mathcal{N}=2\) theory, whose dual geometry is \(x^{2}+y^{2}+z^{2}=0\) (with \(w\) a free coordinate); after turning on the mass deformation, one gets the conifold quiver, and the dual geometry \(X^{\prime}\) is given by \(x^{2}+y^{2}+z^{2}+w^{2}=0\), which is regarded as a deformation of \(X\). We will discuss more examples in this paper.

**Quiver and crepant resolution**: One can also smooth the singularity by resolving it. To preserve supersymmetry, one needs to study a special kind of resolution called a crepant resolution [6]. The endpoint of a crepant resolution might still have singularities (so that there is only a partial resolution). The quiver gauge theory is related to a closely related notion called the non-commutative crepant resolution (NCCR) [47]. In fact, it seems that the physical quiver gauge theory should be regarded as the mathematically studied quiver.

## 4 Finite subgroups of \(SU(2)\)

The finite subgroups of the special unitary group \(SU(2)\) have an \(ADE\) classification. For each subgroup \(G\), we list the equation corresponding to \(\mathbb{C}^{2}/G\times\mathbb{C}\) and the physical data, such as the leading order of the central charges \(a,c\) and the Hilbert series \(\frac{1-t^{d}}{(1-t^{a_{1}})(1-t^{a_{2}})(1-t^{a_{3}})(1-t^{a_{4}})}\), from the geometry side in Table 3. We draw the quiver diagrams in Table 4 and list the corresponding field theory data in Table 5. The corresponding quiver gauge theory is well known [48]; here the index and the quiver Hilbert series are computed. The \(R\) charges cannot be fixed by equation (3.15) alone, which is different from the example we have discussed in section 3.2 and Table 6. The \(R\) charge of \(w\) can be fixed by minimizing \(a_{0}(R_{w})\) as mentioned in section 3.2, and the result is \(R_{w}=\frac{2}{3}\). Given the branching rules of the irreducible representations of the \(ADE\) groups, one can figure out the quiver diagrams shown in Table 4 by the McKay correspondence introduced in section 2.2.1.
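To make the McKay prescription concrete, the adjacency matrix of the quiver can be computed directly from the group characters: the number of arrows from node \(i\) to node \(j\) is the multiplicity of \(\rho_{j}\) in \(\pi\otimes\rho_{i}\), i.e. \(a_{ij}=\frac{1}{|G|}\sum_{g}\chi_{\pi}(g)\chi_{i}(g)\overline{\chi_{j}(g)}\). The following sketch does this for the cyclic group \(\mathbb{Z}_{n}\subset SU(2)\), with the natural representation taken as \(\mathrm{diag}(\zeta,\zeta^{-1})\) plus the trivial summand inside \(SU(3)\); it reproduces the affine \(A_{n-1}\) quiver with one adjoint on every node. The script and its names are illustrative and not taken from the paper.

```python
import numpy as np

def mckay_quiver_cyclic(n):
    """a[i][j] = multiplicity of rho_j inside pi (x) rho_i
    for G = Z_n embedded in SU(3) as diag(zeta, zeta^{-1}, 1)."""
    zeta = np.exp(2j * np.pi / n)
    # one-dimensional irreps: chi_k(g^m) = zeta^{k m}
    chi = np.array([[zeta ** (k * m) for m in range(n)] for k in range(n)])
    # character of the natural 3d representation 2 + 1
    chi_pi = np.array([zeta ** m + zeta ** (-m) + 1 for m in range(n)])
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            a[i, j] = int(round((chi_pi * chi[i] * np.conj(chi[j])).sum().real / n))
    return a

print(mckay_quiver_cyclic(4))
# every node carries one adjoint arrow (a[i][i] = 1) and one arrow to each
# neighbour i +- 1 mod n: the affine A_{n-1} quiver of Table 4
```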
Field theory data such as the central charges \(a,c\), the quiver Hilbert series, and the single trace index can be computed using the \(R\) charges. The calculation method is introduced in section 3.1 and the results are summarized in Table 5.

Table 3: Geometric data of the corresponding singularity \(\mathbb{C}^{2}/G\) (\(G=ADE\)) whose dual quiver is the affine ADE quiver. The columns list the group \(G\), its order \(|G|\), the name, the equation, the \(R\) charges, the Hilbert series \(H_{00}\), and the leading order of \(a,c\); the \(R\) charge is found by requiring the top form to have charge 2 and by \(a_{0}(R_{w})\) minimization.

Table 4: Quiver diagrams of the affine \(ADE\) theories.

### Relevant deformations

Let's now study relevant deformations of the above \(ADE\) quiver gauge theories. An obvious choice is turning on a mass deformation for all the adjoint chiral fields. The IR \(R\) charge of all the remaining chiral fields then becomes \(\frac{1}{2}\), and the corresponding central charges \(a,c\), Hilbert series and single trace index are listed in Table 8.
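As a cross-check of the Table 8 entries, the central charges of the mass-deformed affine \(A_{n}\) quiver can be recomputed from the standard 't Hooft anomaly formulas \(a=\frac{3}{32}(3\operatorname{tr}R^{3}-\operatorname{tr}R)\) and \(c=\frac{1}{32}(9\operatorname{tr}R^{3}-5\operatorname{tr}R)\): after the adjoints are integrated out, the matter consists of \(2(n+1)\) bifundamentals with \(R=\frac{1}{2}\) together with \(n+1\) vector multiplets. The snippet below is a schematic check (it counts \(N^{2}-1\) gauginos per node and \(N^{2}\) components per bifundamental); the variable names are ours.

```python
from sympy import Rational, symbols, expand

N, n = symbols("N n", positive=True)

def a_c(tr_R3, tr_R):
    a = Rational(3, 32) * (3 * tr_R3 - tr_R)
    c = Rational(1, 32) * (9 * tr_R3 - 5 * tr_R)
    return a, c

# mass-deformed affine A_n quiver: n+1 nodes, 2(n+1) bifundamentals, R = 1/2
R = Rational(1, 2)
tr_R3 = (n + 1) * (N**2 - 1) + 2 * (n + 1) * N**2 * (R - 1) ** 3
tr_R  = (n + 1) * (N**2 - 1) + 2 * (n + 1) * N**2 * (R - 1)
a, c = a_c(tr_R3, tr_R)

# compare with the A_n entry of Table 8
print(expand(a - Rational(3, 128) * (n + 1) * (9 * N**2 - 8)))    # 0
print(expand(c - Rational(1, 128) * (n + 1) * (27 * N**2 - 16)))  # 0
```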
From the Hilbert series, one may conjecture that the dual geometry is given by the hypersurface singularity \[f_{ADE}(x,y,z)+w^{h}=0,\] \begin{table} \begin{tabular}{|l|l|l|} \hline Group & Field theory data & \\ \hline \multirow{4}{*}{\(A_{n}\)} & Central charges \((a,c)\) & \(a=\frac{1}{24}(1+n)(-5+6N^{2})\quad c=\frac{1}{12}(1+n)(-2+3N^{2})\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{4(n+1)/3}}{(1-t^{2/3})(1-t^{2/3})(1-t^{4(n+1)/3})^{2}}\) \\ \cline{2-3} & Single trace index & \(\mathcal{I}_{s.t}=\frac{(n+1)t^{2}}{1-t^{2}}+2\frac{t^{2(n+2)}}{1-t^{2(n+2)}} -\frac{(n+1)(t^{2}-t^{4})}{(1-t^{3}y^{-1})(1-t^{3}y)}\) \\ \hline \multirow{4}{*}{\(D_{n}\)} & Central charges \((a,c)\) & \(a=-\frac{5}{24}(n+1)+(n-2)N^{2}\quad c=-\frac{1}{6}(n+1)+(n-2)N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{4(n-1)/3}}{(1-t^{2/3})(1-t^{4(n-2)/3})(1-t^{4(n-1)/3})}\) \\ \cline{2-3} & Single trace index & \(\mathcal{I}_{s.t}=\frac{(n+1)t^{2}}{1-t^{2}}+\frac{2t^{8}}{1-t^{2}}+\frac{t^{4 (n-8)}}{1-t^{2(n-8)}}-\frac{t^{4}}{1-t^{2}}-\frac{(n+1)(t^{2}-t^{4})}{(1-t^{3} y^{-1})(1-t^{3}y)}\) \\ \hline \multirow{4}{*}{\(E_{6}\)} & Central charges \((a,c)\) & \(a=-\frac{35}{24}+6N^{2}\quad c=-\frac{7}{6}+6N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(\frac{1-t^{46}}{(1-t^{2/3})(1-t^{4(1)}-t^{6(3)})(1-t^{4})}\) \\ \cline{2-3} & Single trace index & \(\mathcal{I}_{s.t}=\frac{7t^{2}}{1-t^{2}}+\frac{t^{8}}{1-t^{8}}+\frac{2t^{22}}{ 1-t^{2}}-\frac{t^{4}}{1-t^{7}}-\frac{7(t^{2}-t^{4})}{(1-t^{3}y^{-1})(1-t^{3}y)}\) \\ \hline \multirow{4}{*}{\(E_{7}\)} & Central charges \((a,c)\) & \(a=-\frac{5}{3}+12N^{2}\quad c=-\frac{4}{3}+12N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{24}}{(1-t^{4/3})(1-t^{6(3)})(1-t^{6(3)})(1-t^{2})}\) \\ \cline{2-3} & Single trace index & \(\mathcal{I}_{s.t}=\frac{8t^{2}}{1-t^{2}}+\frac{t^{8}}{1-t^{8}}+\frac{t^{22}}{ 1-t^{2}}+\frac{t^{60}}{1-t^{60}}-\frac{t^{4}}{1-t^{4}}-\frac{8(t^{2}-t^{4})} {(1-t^{3}y^{-1})(1-t^{3}y)}\) \\ \hline \multirow{4}{*}{\(E_{8}\)} & Central charges \((a,c)\) & \(a=-\frac{15}{8}+30N^{2}\quad c=-\frac{3}{2}+30N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{60}}{(1-t^{2/3})(1-t^{60/3})(1-t^{60})}\) \\ \cline{2-3} & Single trace index & \(\mathcal{I}_{s.t}=\frac{9t^{2}}{1-t^{2}}+\frac{t^{8}}{1-t^{8}}+\frac{t^{22}}{ 1-t^{24}}+\frac{t^{60}}{1-t^{60}}-\frac{t^{4}}{1-t^{6}}-\frac{9(t^{2}-t^{4})} {(1-t^{3}y^{-1})(1-t^{3}y)}\) \\ \hline \end{tabular} \end{table} Table 5: Field theory data for affine ADE quiver gauge theory whose dual singularity is \(\mathbb{C}^{2}/G\) with \(G=ADE\) finite subgroups of \(SU(2)\). where \(h\) is the Coxeter number, and \(f_{ADE}\) is the ADE singularity, see Table 6. We also compute the integral second homology group of the corresponding link, see Table 7: \(b_{2}\) is equal to the rank of the type of singularity, and it is torsion free. Such deformations for the field theory have already been studied in [29], we put the dual geometry in a more explicit form, and our computation of the Hilbert series and index would confirm the duality. Unlike the ADE singularity, our new singularity has the so-called homogeneous deformation (the deformation has the same weight as the original polynomial), for example, the \(A\) type geometry can be deformed as follows: \[x^{2}+y^{2}+z^{n}+w^{n}+\sum_{i=2}^{n-2}\tau_{i}z^{i}w^{n-i}=0\,.\] These \(\tau\)'s are identified with the exact marginal deformations. 
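The conjectured geometry can also be tested against the field theory at leading order in \(N\): for the deformed \(A_{n}\) case \(x^{2}+y^{2}+z^{n+1}+w^{n+1}=0\), with weights \(\left((n+1);\frac{n+1}{2},\frac{n+1}{2},1,1\right)\), equations (3.15) and (3.16) reproduce the \(\frac{27}{128}(n+1)N^{2}\) leading term of \(a\) in Table 8. A small symbolic check (illustrative only):

```python
from sympy import Rational, symbols, simplify

n, N = symbols("n N", positive=True)

# degree and weights of the conjectured A_n geometry
d = n + 1
w = [Rational(1, 2) * (n + 1), Rational(1, 2) * (n + 1), 1, 1]

delta = 2 / (sum(w) - d)                      # normalisation of eq. (3.15)
a0 = d / (w[0] * w[1] * w[2] * w[3]) / delta**3
a_geom = Rational(27, 32) / a0 * N**2         # leading central charge from geometry

a_field = Rational(27, 128) * (n + 1) * N**2  # leading term of a in Table 8
print(simplify(a_geom - a_field))             # 0
```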
These singularities also admit crepant resolutions, and the number of exceptional curves is equal to \(\text{rank}(G)\) (\(G=ADE\)), so noncommutative crepant resolutions also exist [47]. The number of gauge groups in the NCCR is equal to \(\text{rank}(G)+1\) (the number of exceptional curves plus one), which is exactly as that in our proposed quiver. **Remark 1**: The deformed superpotential for above theories is \(W+\sum m_{i}\operatorname{Tr}\phi_{i}^{2}\), where \(\phi_{i}\)'s are the adjoint chiral fields and \(W\) is the original superpotential. One may turn on higher order deformation, i.e. \(W+\sum m_{i}\operatorname{Tr}\phi_{i}^{n}\), and the added deformation is irrelevant 4. However, one might still find out the dual geometry \(f_{ADE}(x,y,z)+w^{(n-1)h}=0\) by computing quiver Hilbert series [49] from using the \(R\) charge compatible with superpotential and naive anomaly free condition on \(R\) charge. Footnote 4: When \(n=3\), the deformation is marginal irrelevant. **Remark 2**: The deformed geometry \(f_{A_{n}}(x,y,z)+w^{p}=0,\;\;\frac{n+1}{2}<p<2(n+1)\) is also stable [8], but one could not find a quiver gauge theory description for the field theory as the singularity admits no crepant resolution. These situations would be discussed elsewhere. **Remark 3**: Since the adjoints are integrated out, one can perform the Seiberg duality on these theories, and it enjoys similar self-dual property as studied in conifold example [50]. The following is an example for affine \(A_{2}\) Seiberg duality, see Figure 8. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Group \(G\) & Name & Deformed Equation & \(R\) charge (\(d;w_{1},w_{2},w_{3},w_{4},w_{5}\)) & Hilbert series \(H_{00}\) & Leading order of \(a,c\) \\ \hline Cyclic & \(A_{n}\) & \(x^{2}+y^{2}+z^{n+1}+w^{n+1}=0\) & \(\left\{(n+1);\frac{n+1}{2},\frac{n+1}{2},1,1\right\}\) & \(H_{00}=\frac{1}{(1-\sigma^{n+1})(1-\sigma^{n+1})}\frac{1-\sigma^{n+1}}{(1- \sigma^{n+1})(1-\sigma^{n+1})(1-\sigma^{n+1})}\) & \(\frac{2n+1}{2}N^{2}\) \\ \hline \(\mathbb{D}\) & \(D_{n}\) & \(x^{2}+y^{n-1}+y^{2}+w^{n-2}=0\) & \(\left\{2(n-1);n-1,2,(n-2),1\right\}\) & \(H_{00}=\frac{1}{(1-\sigma^{n+1})(1-\sigma^{n+1})(1-\sigma^{n+1})(1-\sigma^{n+ 1})}\) & \(\frac{2n}{2}(n-2)N^{2}\) \\ \hline \(\mathbb{T}\) & \(E_{k}\) & \(x^{2}+y^{2}+x^{4}+w^{2}=0\) & \(\left\{12;6,4,3,1\right\}\) & \(H_{00}=\frac{1}{(1-\sigma^{n+1})(1-\sigma^{n+1})(1-\sigma^{n+1})}\) & \(\frac{2}{2}6N^{2}\) \\ \hline \(\mathbb{O}\) & \(E_{1}\) & \(x^{2}+y^{3}+w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{3}=w^{ 3}=w^ \begin{tabular}{|c|c|c|c|c|} \hline Name & Equation & \((u_{1},u_{2},u_{3},u_{4};v_{1},v_{2},v_{3},v_{4})\) & \(b_{2}:\) rank of \(H_{2}(L)\) & \(H_{2}(L)^{\mathit{for}}\) \\ \hline \(A_{n}\) & \(x^{2}+y^{2}+z^{n+1}+w^{n+1}=0\) & \((n+1,n+1,2,2;1,1,1,1)\) & \(n\) & No \\ \hline \(D_{n}\) & \(x^{2}+y^{n-1}+yz^{2}+w^{2n-2}=0\) & \(\begin{array}{c}(2n-2,n-1,n-1,2;1,1,\frac{n-2}{2},1)\) for n even \\ (2n-2,n-1,2n-2,2;1,1,n-2,1)\) for n odd & \(n\) & No \\ \hline \(E_{6}\) & \(x^{2}+y^{3}+z^{4}+w^{12}=0\) & \((12,4,3,2;1,1,1)\) & 6 & No \\ \hline \(E_{7}\) & \(x^{2}+y^{3}+yz^{3}+w^{18}=0\) & \((18,9,3,2;1,2,1,1)\) & 7 & No \\ \hline \(E_{8}\) & \(x^{2}+y^{3}+z^{5}+w^{30}=0\) & \((30,5,3,2;1,1,1,1)\) & 8 & No \\ \hline \end{tabular} **Table 7**: The geometrical data for the singularity \(f_{ADE}(x,y,z)+w^{h}=0\), and its dual theory is given by integrating out the adjoints of the affine ADE quiver. 
Here \(\frac{u_{i}}{v_{i}}=\frac{d}{w_{i}}\), with \(u_{i},v_{i}\) coprime, see Table 6 for the weights. Figure 8: Seiberg duality of affine \(A_{2}\) quiver. \begin{table} \begin{tabular}{|c|l|l|} \hline Group & Field theory data & \\ \hline \multirow{3}{*}{\(A_{n}\)} & Central charges \((a,c)\) & \(a=\frac{3}{128}(n+1)(9N^{2}-8)\quad c=\frac{1}{128}(n+1)(27N^{2}-16)\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{n+1}}{(1-t)^{2}(1-t^{(n+1)/2})^{2}}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{(n+1)t^{3}}{1-t^{3}}+\frac{2t^{3(n+1)/2}}{1-t^{3(n+1)/2}}\) \\ \hline \multirow{3}{*}{\(D_{n}\)} & Central charges \((a,c)\) & \(a=\frac{3}{32}(-2-18N^{2}+n(9N^{2}-2))\quad c=\frac{1}{32}(-4-54N^{2}+n(-4+27N^ {2}))\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{2n-2}}{(1-t)(1-t^{2})(1-t^{(n-1)})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{nt^{3}}{1-t^{3}}+\frac{2t^{6}}{1-t^{6}}+\frac{t^{3n-6}}{ 1-t^{3n-6}}\) \\ \hline \multirow{3}{*}{\(E_{6}\)} & Central charges \((a,c)\) & \(a=\frac{3}{16}(-7+27N^{2})\quad c=-\frac{7}{8}+\frac{81}{16}N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(\frac{1-t^{12}}{(1-t)(1-t^{2})(1-t^{3})(1-t^{5})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{6t^{3}}{1-t^{3}}+\frac{t^{6}}{1-t^{6}}+\frac{2t^{6}}{1- t^{3}}\) \\ \hline \multirow{3}{*}{\(E_{7}\)} & Central charges \((a,c)\) & \(a=\frac{3}{8}(-4+27N^{2})\quad c=-1+\frac{81}{8}N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{18}}{(1-t)(1-t^{3})(1-t^{6})(1-t^{6})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{7t^{3}}{1-t^{3}}+\frac{t^{6}}{1-t^{6}}+\frac{t^{6}}{1- t^{6}}+\frac{t^{12}}{1-t^{12}}\) \\ \hline \multirow{3}{*}{\(E_{8}\)} & Central charges \((a,c)\) & \(a=\frac{27}{16}(-1+15N^{2})\quad c=\frac{9}{16}(-2+45N^{2})\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{30}}{(1-t)(1-t^{6})(1-t^{10})(1-t^{15})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{8t^{3}}{1-t^{3}}+\frac{t^{6}}{1-t^{6}}+\frac{t^{9}}{1-t^ {9}}+\frac{t^{15}}{1-t^{12}}\) \\ \hline \end{tabular} \end{table} Table 8: Field theory data for deformed affine \(ADE\) quiver where the adjoint chiral superfields are integrated out. ## 5 Finite subgroups of \(So(3)\) In this section, we will discuss the quiver gauge theory corresponding to finite subgroups of \(SO(3)\) group and the geometry data is listed in Table 9. For each of these finite subgroups, we have a quiver gauge theory for the corresponding singularity. The corresponding McKay quiver and the superpotential are listed in Table 10. We also list the physical quantities including central charges \(a,c\), quiver Hilbert series and single trace index for each quiver gauge theory in Table 12. The expansions of the single trace index are shown in Table 13, which would be useful for the comparison with the computation on gravity side. For the dihedral group, the quiver is slightly different for the limiting cases. When \(n\) is even, the limiting case is \(D_{4}\). When \(n\) is odd, the limiting case is \(D_{2}\) and \(D_{6}\). The physical data is shown in Table 11. From the table, we see that compared with the general case, the constant term of central charges and the coefficient of the last term in the single trace index for \(D_{2}\) are different. And all the physical quantities for \(D_{4}\) and \(D_{6}\) are the same as the general case. 
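The statement about the limiting cases can be checked directly by substituting \(m\) into the general dihedral formulas of Table 12 and comparing with Table 11: \(D_{4}\) (even case, \(m=1\)) and \(D_{6}\) (odd case, \(m=1\)) reproduce the general expressions, while \(D_{2}\) (odd case, \(m=0\)) has a different constant term. A quick symbolic comparison, with the values copied from the two tables:

```python
from sympy import Rational, symbols, simplify

N, m = symbols("N m")

# general dihedral central charge a (Table 12)
a_even = -Rational(13, 24) + m * (-Rational(5, 24) + N**2)
a_odd  = -Rational(19, 48) - Rational(5, 24) * m + N**2 * (m + Rational(1, 2))

# limiting cases (Table 11)
a_D4 = -Rational(3, 4) + N**2
a_D2 = -Rational(5, 12) + Rational(1, 2) * N**2
a_D6 = -Rational(29, 48) + Rational(3, 2) * N**2

print(simplify(a_even.subs(m, 1) - a_D4))  # 0: D_4 agrees with the general case
print(simplify(a_odd.subs(m, 1) - a_D6))   # 0: D_6 agrees with the general case
print(simplify(a_odd.subs(m, 0) - a_D2))   # 1/48: the constant term of D_2 differs
```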
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Group & \multicolumn{2}{|c|}{Mckay quiver and superpotential} \\ \hline & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ \hline & & & \\ & & & \\ & & & \\ \hline & & & \\ & & & \\ \hline & & & \\ \hline & & & \\ \hline & & & \\ \hline & & & \\ \hline & & & \\ \hline \end{tabular} \end{table} Table 9: Geometric data of the corresponding singularity \(\mathbb{C}^{3}/G\) with \(G\) finite subgroups of \(SO(3)\), and the \(R\) charge is determined by requiring the top form to have \(R\) charge \(2\). Continued from previous page \begin{tabular}{|c|c|} \hline Group & McKay quiver and superpotential \\ \hline Dihedral group & \(W/2=\)\(-\)\(ad_{0}C-cD_{0}A+u_{1}D_{0}d_{0}+u_{1}Cc-\sum_{i=1}^{m-2}u_{i}d_{i}D_{i}\) \\ \(D_{2n}\) (n=2m+1 odd) & \\ & \(W/2=-ad_{0}C-cD_{0}A+u_{1}D_{0}d_{0}+u_{1}Cc-\sum_{i=1}^{m-1}u_{i}d_{i}D_{i}+ \sum_{i=2}^{m}u_{i}D_{i-1}d_{i-1}-u_{m}v^{2}\) \\ Tetrahedral group & \\ & \(W=uAa+uBb+uCc+u^{3}+v^{3}+vAa+vBb+vCc\) \\ \hline Octahedral group & \\ & \(W/6=\)\(uAa-ubB-udD-1/3u^{3}+veE-vCc-vDd+1/3v^{3}\) \\ \(\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(w^{2}- w)dCB+(w^{2}-w)Dbc\) \\ \hline Icosahedral group & \\ & \(W/12=\)\(-\)\(uAa+5ubB-2/3u^{3}+15vbB-5vcC-20vEe+10/3v^{3}\) \\ & \(+5weE-wDd-1/3w^{3}-5dec+10CED\) \\ \hline \end{tabular} \begin{table} \begin{tabular}{|c|c|} \hline Group & McKay quiver and superpotential \\ \hline Dihedral group & \(W/2=\)\(-\)\(ad_{0}C-cD_{0}A+u_{1}D_{0}d_{0}+u_{1}Cc-\sum_{i=1}^{m-2}u_{i}d_{i}D_{i}\) \\ \(D_{2n}\) (n=2m+1 odd) & \\ & \(W/2=-\)\(ad_{0}C-cD_{0}A+u_{1}D_{0}d_{0}+u_{1}Cc-\sum_{i=1}^{m-1}u_{i}d_{i}D_{i}+\sum_{i=2}^{m}u_{i}D_{i-1}d_{i-1}-u_{m}v^{2}\) \\ Tetrahedral group & \\ & \(W=uAa+uBb+uCc+u^{3}+v^{3}+vAa+vBb+vCc\) \\ \hline Octahedral group & \\ & \(W/6=\)\(uAa-ubB-udD-1/3u^{3}+veE-vCc-vDd+1/3v^{3}\) \\ & \(+(w^{2}-w)dCB+(w^{2}-w)Dbc\) \\ \hline Icosahedral group & \\ & \(W/12=\)\(-\)\(uAa+5ubB-2/3u^{3}+15vbB-5vcC-20vEe+10/3v^{3}\) \\ & \(+5weE-wDd-1/3w^{3}-5dec+10CED\) \\ \hline \end{tabular} \end{table} Table 10: McKay quivers with superpotentials corresponding to finite subgroups of \(SO(3)\). \begin{tabular}{|c|c|c|c|} \hline \(D_{4}\) & \(N\)\(N\) & Central charges \((a,c)\) & \(a=-3/4+N^{2},\;\;\;c=-1/2+N^{2}\) \\ \cline{2-4} & \(N\)\(N\) & Quiver Hilbert series & \(H_{00}=\dfrac{1-t^{4}}{(1-t^{4/3})^{3}(1-t^{2})}\) \\ \cline{2-4} & & Single trace index & \({\cal I}_{s.t.}=\dfrac{6t^{4}}{1-t^{4}}\) \\ \hline \(D_{2}\) & \(N\)\(N\) & Central charges \((a,c)\) & \(a=-5/12+1/2N^{2},\;\;\;c=-1/3+1/2N^{2}\) \\ \cline{2-4} & & Quiver Hilbert series & \(H_{00}=\dfrac{1-t^{8/3}}{(1-t^{4/3})^{3}(1-t^{2/3})}\) \\ \cline{2-4} & & Single trace index & \({\cal I}_{s.t.}=\dfrac{2t^{2}}{1-t^{2}}+\dfrac{2t^{4}}{1-t^{4}}-\dfrac{2(t^{ 2}-t^{4})}{(1-yt^{3})(1-y^{-1}t^{3})}\) \\ \hline \(D_{6}\) & \(N\)\(N\) & Central charges \((a,c)\) & \(a=-29/48+3/2N^{2},\;\;\;c=-11/24+3/2N^{2}\) \\ \cline{2-4} & \(N\)\(N\) & Quiver Hilbert series & \(H_{00}=\dfrac{1-t^{16/3}}{(1-t^{4/3})^{2}(1-t^{2})(1-t^{8/3})}\) \\ \cline{2-4} & & Single trace index & \({\cal I}_{s.t.}=\dfrac{2t^{2}}{1-t^{2}}+\dfrac{2t^{4}}{1-t^{4}}+\dfrac{t^{6}}{ 1-t^{6}}-\dfrac{2(t^{2}-t^{4})}{(1-yt^{3})(1-y^{-1}t^{3})}\) \\ \hline \end{tabular} **Table 11**: Field theory data for quiver theories whose dual singularity is \(\mathbb{C}^{3}/G\) with G the Limiting cases of the dihedral groups. 
\begin{table} \begin{tabular}{|c|l|l|} \hline Group & Field theory data & \\ \hline \multirow{4}{*}{Cyclic group of order \(n+1\)} & Central charges \((a,c)\) & \(a=\frac{1}{4}(n+1)N^{2}-\frac{5}{24}(n+1),\quad c=\frac{1}{4}(n+1)N^{2}-\frac{1} {6}(n+1)\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{(4n+4)/3}}{(1-t^{4/3})(1-t^{(6n+2)/3})^{2}}\) \\ \cline{2-3} & & Single trace index & \({\cal I}_{s.t.}=(n+1)\frac{t^{2}}{1-t^{2}}+2\frac{t^{2+2n}}{1-t^{2+2n}}-(n+1) \frac{t^{2}-t^{4}}{(1-y^{-1}t^{2})(1-y^{6})}\) \\ \hline \multirow{4}{*}{Dihedral group of order 2n \(D_{2n}\) (n=2m even)} & Central charges \((a,c)\) & \(a=-\frac{13}{24}+m(-\frac{5}{24}+N^{2}),\quad c=-\frac{1}{3}+m(-\frac{1}{6}+N^ {2})\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{(8m+4)/3}}{(1-t^{4/3})^{2}(1-t^{4m/3})(1-t^{- 4(4m+2)/3})}\) \\ \cline{2-3} & & Single trace index & \({\cal I}_{s.t.}=(m-1)\frac{t^{2}}{1-t^{2}}+5\frac{t^{4}}{1-t^{4}}+\frac{t^{4m} }{1-t^{4m}}-(m-1)\frac{t^{2}-t^{4}}{(1-y^{-1}t^{3})(1-yt^{3})}\) \\ \hline \multirow{4}{*}{Dihedral group of order 2n \(D_{2n}\) (n=2m+1 odd)} & Central charges \((a,c)\) & \(a=-\frac{19}{48}-\frac{5}{24}m+N^{2}(m+\frac{1}{2}),\quad c=-\frac{7}{24}- \frac{1}{6}m+N^{2}(m+\frac{1}{2})\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{(8m+8)/3}}{(1-t^{4/3})^{2}(1-t^{(4m+2)/3})(1-t^{(4m+4)/3})}\) \\ \cline{2-3} & \multirow{2}{*}{Single trace index} & \({\cal I}_{s.t.}=(m+1)\frac{t^{2}}{1-t^{2}}+2\frac{t^{4}}{1-t^{4}}+\frac{t^{4m +2}}{1-t^{4m+2}}-(m+1)\frac{t^{2}-t^{4}}{(1-y^{-1}t^{3})(1-yt^{3})}\) \\ \hline \multirow{4}{*}{ Tetrahedral group} & Central charges \((a,c)\) & \(a=-\frac{19}{24}+3N^{2},\quad c=-\frac{7}{12}+3N^{2}\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{8}}{(1-t^{4})(1-t^{2})(1-t^{\frac{8}{3}})(1-t^{4})}\) \\ \cline{2-3} & \multirow{2}{*}{Single trace index} & \({\cal I}_{s.t.}=2\frac{t^{2}}{1-t^{2}}+2\frac{t^{4}}{1-t^{4}}+2\frac{t^{6}}{1 -t^{6}}-2\frac{t^{2}-t^{4}}{(1-t^{3}y^{-1})(1-t^{3}y)}\) \\ \hline \multirow{4}{*}{ Octahedral group} & Central charges \((a,c)\) & \(a=-\frac{47}{48}+6N^{2},\quad c=-\frac{17}{24}+6N^{2}\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{2}}{(1-t^{4})(1-t^{\frac{8}{3}})(1-t^{4})(1-t^{6})}\) \\ \cline{2-3} & \multirow{2}{*}{Single trace index} & \({\cal I}_{s.t.}=2\frac{t^{2}}{1-t^{2}}+3\frac{t^{4}}{1-t^{4}}+\frac{t^{6}}{1 -t^{6}}+\frac{t^{8}}{1-t^{8}}-2\frac{t^{2}-t^{4}}{(1-y^{-1}t^{3})(1-yt^{3})}\) \\ \hline \multirow{4}{*}{ Icosahedral group} & Central charges \((a,c)\) & \(a=-1+15N^{2},\quad c=-\frac{3}{4}+15N^{2}\) \\ \cline{2-3} & \multirow{2}{*}{Quiver Hilbert series} & \(H_{00}=\frac{1-t^{20}}{(1-t^{4/3})(1-t^{4})(1-t^{20/3})(1-t^{10})}\) \\ \cline{2-3} & \multirow{2}{*}{Single trace index} & \({\cal I}_{s.t.}=3\frac{t^{2}}{1-t^{2}}+2\frac{t^{4}}{1-t^{4}}+\frac{t^{6}}{1 -t^{6}}+\frac{t^{10}}{1-t^{10}}-3\frac{t^{2}-t^{4}}{(1-y^{-1}t^{3})(1-yt^{3})}\) \\ \hline \end{tabular} \end{table} Table 12: Field theory data for quiver gauge theories whose dual singularity is \(\mathbb{C}^{3}/G\) with \(G\) finite subgroups of \(SO(3)\). ### Relevant deformations Let's now study the relevant deformations of those quiver gauge theories. 
For the quiver gauge theory corresponding to the Dihedral group of order \(2n\) (\(n=2m\) even) \(D_{2n}\), one can turn on a mass deformation for all the adjoints, and this forces the \(R\) charges to be \(R(a)=R(A^{\prime})=R(a^{\prime})=1\), so they can all be integrated out. The other chiral fields all have \(R\) charge \(\frac{1}{2}\). The IR theory is then the same as the one defined by the affine D-type quiver gauge theory \(D_{n+2}\) with the adjoint chirals integrated out.

Table 13: Expansions of the single trace indices of Table 12 for the finite subgroups of \(SO(3)\).

For the Dihedral group of order \(2n\) (\(n=2m+1\) odd), one can also add mass terms to the adjoints (except \(v\)); then the \(R\) charges of the fields \(A,a\) must be \(R(a)=R(A)=1\), which means that they become massive. The new quiver is shown in Figure 9, and the \(R\) charges of the chiral fields are all equal to \(\frac{1}{2}\). The quiver Hilbert series and the single trace index (here we just subtract the \(U(1)\) vector multiplets of the \(U(N)\) gauge groups) are \[\begin{split}& H_{00}=\frac{1-t^{2m+3}}{(1-t)(1-t^{2})(1-t^{(2m+1)/2})(1-t^{(2m+3)/2})},\\ &\mathcal{I}_{s.t.}=\frac{(m+1)t^{3}}{1-t^{3}}+\frac{t^{6}}{1-t^{6}}+\frac{t^{3/2}}{1-t^{3/2}}+\frac{t^{3/2+3m}}{1-t^{3/2+3m}}\,.\end{split} \tag{5.1}\] The central charges are \[a=\frac{27}{64}(2m+1)N^{2}-\frac{99+48m}{256},\ c=\frac{27}{64}(2m+1)N^{2}-\frac{75+32m}{256}\,.\] From the index, we see that there is a chiral field with \(R\) charge \(\frac{1}{2}\) that violates the unitarity bound (this field is easily identified with \(\operatorname{Tr}v\)). This field becomes free in the IR and is decoupled [51]. The correct \(U(1)_{R}\) charge should be the original one mixed with the symmetry acting only on the decoupled field, so that the \(R\) charge of the free field is \(\frac{2}{3}\). The index of the interacting theory might then be modified by subtracting the contribution of the free chiral with the prescribed \(R\) charge \(\frac{1}{2}\): \[\mathcal{I}_{s.t.}=\frac{(m+1)t^{3}}{1-t^{3}}+\frac{t^{6}}{1-t^{6}}+\frac{t^{3/2}}{1-t^{3/2}}+\frac{t^{3/2+3m}}{1-t^{3/2+3m}}-\frac{t^{3/2}-t^{9/2}}{(1-t^{3}y)(1-t^{3}y^{-1})}\,. \tag{5.2}\] It would be interesting to verify our proposal using other methods. Based on the quiver Hilbert series, we'd like to conjecture that the dual geometry is: \[\boxed{w^{2}+x^{2m+3}+xy^{m+1}+yz^{2}=0}\,.\]

Figure 9: Quiver for the Dihedral group \(D_{2n}\) of order \(2n\) (\(n=2m+1\) odd) with all adjoints except \(v\) integrated out.

### A detailed study of Seiberg duality

The exciting feature of the quivers in Figure 10 (compared with the affine ADE quivers) is that there are quiver nodes without adjoint chirals, so one can perform Seiberg duality to obtain many dual quiver descriptions. Let's discuss the Seiberg duality of the theories corresponding to the Tetrahedral group in more detail. Let's first do Seiberg duality on node \(*\) in Figure 10.
The initial superpotential is (the coefficient is chosen so that superpotential for the following duality frame is generic): \[W_{Q}=uAa+\omega uBb+\omega^{2}uCc-\frac{1}{3}u^{3}-vAa-\omega^{2}vBb-\omega vCc +\frac{1}{3}v^{3}, \tag{103}\] where \(\omega=e^{2\pi i/3}\). The new superpotential is \[W_{Q_{1}}=[W_{Q}]+\Delta=uAa+\omega u[Bb]+\omega^{2}uCc-\frac{1}{3}u^{3}-vAa- \omega^{2}v[Bb]-\omega vCc+\frac{1}{3}v^{3}+[Bb]B^{*}b^{*}\] The reduction is done by integrating out massive superfields \([Bb],v\), namely computing the partial derivative: \[\begin{split}\frac{\partial W_{Q_{1}}}{\partial[Bb]}& =\omega u-\omega^{2}v+B^{*}b^{*}=0,\\ \frac{\partial W_{Q_{1}}}{\partial v}&=-Aa-\omega^{ 2}[Bb]-\omega Cc+v^{2}=0\,.\end{split} \tag{104}\] Thus we get \[\boxed{W_{Q_{1}}=(1-\omega^{2})Aau-wAaBb+(\omega^{2}-1)Ccu-\omega^{2}CcbB+ \omega^{2}Bbu^{2}+\omega(Bb)^{2}u+\frac{1}{3}(Bb)^{3}}\] The quiver \(Q_{1}\) is quite different from the original quiver: a) The \(U(1)_{R}\) charges of chiral fields in the IR limit are no longer free: \(R(B)=R(b)=\frac{1}{3}\), \(R(C)=R(c)=R(A)=R(a)=\frac{2}{3}\), \(R(u)=\frac{2}{3}\). We compute the Hilbert series \(H_{00}\) and the index by using these new \(R\) charges, and they give the same result as that of the quiver \(Q\). The matrix \(\chi(t)\) used in computing the index is \[\chi_{Q_{1}}(t)=\left[\begin{array}{cccc}0&0&t&0\\ 0&0&t^{2}&0\\ t&t^{2}&t^{2}&t^{2}\\ 0&0&t^{2}&0\end{array}\right]\] Figure 10: Seiberg duality of the McKay quiver \(Q\) of the Tetrahedral group. Let's point out a quite interesting point: in the original quiver \(Q\), one can see two free chirals as the traceless part of two adjoint chirals on the middle quiver node; however, the dual theory has just one adjoint chiral whose traceless part would decouple, and the other free chiral is the composite operator \(Tr(Bb)\) whose \(R\) charge is \(\frac{2}{3}\) and become decoupled in the IR. This is the typical thing happening in Seiberg duality: the elementary field in one description becomes the composite field in another dual description. Let's now do Seiberg duality on node \(\bigstar\) in the quiver \(Q_{1}\), see Figure 11. The final superpotential after the reduction is \[\boxed{W_{Q_{12}}=AaBb+AaCc-(Bb)^{2}Cc-Bb(Cc)^{2}}\] The new \(R\) charges would be \(R(B)=R(b)=R(C)=R(c)=\frac{1}{3}\), and \(R(A)=R(a)=\frac{2}{3}\). Again, one can compute the Hilbert series and index of quiver \(Q_{12}\), which are the same as those of \(Q\). The matrix \(\chi(t)\) used in computing the index is \[\chi_{Q_{12}}(t)=\left[\begin{array}{cccc}0&0&t&0\\ 0&0&t&0\\ t&t&0&t^{2}\\ 0&0&t^{2}&0\end{array}\right]\] Finally, Let's do Seiberg duality on the central quiver node (\(\clubsuit\) node) of \(Q_{12}\). We show the process in Figure 12. Figure 11: Seiberg duality of the quiver \(Q_{1}\). The final superpotential is \[\boxed{W_{Q_{123}}=aAu_{0}-AaBb-AaCc+bBu_{2}-dDu_{2}+cCu_{3}-Ddu_{3}+Bdc+CDb}\] The \(R\) charges are \(R(B)=R(b)=R(C)=R(c)=R(D)=R(d)=\frac{2}{3},R(u_{2})=R(u_{3})=\frac{2}{3}\), \(R(A)=R(a)=\frac{1}{3},\ R(u_{0})=\frac{4}{3}\), which would give the same Hilbert series and index as those of the original quiver. The matrix \(\chi(t)\) used in computing the index is \[\chi_{Q_{123}}(t)=\left[\begin{array}{cccc}t^{2}&t^{2}&t^{2}&0\\ t^{2}&t^{2}&t^{2}&0\\ t^{2}&t^{2}&0&t\\ 0&0&t&t^{4}\end{array}\right]\] Next we discuss the Seiberg duality of \(D_{6}\) quiver. We can do the Seiberg duality on node with rank \(N\), see figure. 13. 
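Before moving on to the \(D_{6}\) example, a quick sanity check on the dual frames above: with the quoted IR \(R\) charges, every term of \(W_{Q_{1}}\) and \(W_{Q_{12}}\) must carry \(R\) charge 2. The following schematic check uses exactly the charge assignments listed above, with the superpotential terms written as lists of constituent fields (the data structure is ours, not the paper's):

```python
from fractions import Fraction as F

# quiver Q_1: R(B)=R(b)=1/3, R(A)=R(a)=R(C)=R(c)=R(u)=2/3
R_Q1 = {"A": F(2, 3), "a": F(2, 3), "B": F(1, 3), "b": F(1, 3),
        "C": F(2, 3), "c": F(2, 3), "u": F(2, 3)}
W_Q1 = [["A", "a", "u"], ["A", "a", "B", "b"], ["C", "c", "u"],
        ["C", "c", "b", "B"], ["B", "b", "u", "u"],
        ["B", "b", "B", "b", "u"], ["B", "b"] * 3]

# quiver Q_12: R(B)=R(b)=R(C)=R(c)=1/3, R(A)=R(a)=2/3
R_Q12 = {"A": F(2, 3), "a": F(2, 3), "B": F(1, 3), "b": F(1, 3),
         "C": F(1, 3), "c": F(1, 3)}
W_Q12 = [["A", "a", "B", "b"], ["A", "a", "C", "c"],
         ["B", "b", "B", "b", "C", "c"], ["B", "b", "C", "c", "C", "c"]]

for name, W, R in [("Q_1", W_Q1, R_Q1), ("Q_12", W_Q12, R_Q12)]:
    print(name, all(sum(R[f] for f in term) == 2 for term in W))  # True, True
```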
The superpotential of \(D_{6}\) is \[W_{D_{6}}=ACD+acd+uCc+uDd+uv^{2}\,. \tag{5.5}\]

Figure 12: Seiberg duality of the quiver \(Q_{12}\).

Figure 13: Seiberg duality of \(D_{6}\) on the upper node with rank \(N\).

The new superpotential is \[W^{\prime}_{D_{6}}=[W_{D_{6}}]+\Delta=[AC]D+[ac]d+u[Cc]+uDd+uv^{2}+[AC]C^{*}A^{*}+[ac]c^{*}a^{*}+[Cc]c^{*}C^{*}+[Aa]a^{*}A^{*}\,. \tag{5.6}\] We can do the reduction by integrating out the massive superfields \([AC],D,[ac],d,u,[Cc]\), using the equations of motion induced by the superpotential: \[\frac{\partial W^{\prime}_{D_{6}}}{\partial[AC]}=D+C^{*}A^{*}=0,\qquad\frac{\partial W^{\prime}_{D_{6}}}{\partial D}=[AC]+ud=0, \tag{5.7}\] \[\frac{\partial W^{\prime}_{D_{6}}}{\partial[ac]}=d+c^{*}a^{*}=0,\qquad\frac{\partial W^{\prime}_{D_{6}}}{\partial d}=[ac]+uD=0,\] \[\frac{\partial W^{\prime}_{D_{6}}}{\partial u}=[Cc]+Dd+v^{2}=0,\qquad\frac{\partial W^{\prime}_{D_{6}}}{\partial[Cc]}=u+c^{*}C^{*}=0\,.\] The final superpotential is \[\boxed{W^{1}_{D_{6}}=-v^{2}c^{*}C^{*}-C^{*}A^{*}c^{*}a^{*}c^{*}C^{*}+[Aa]a^{*}A^{*}}\,. \tag{5.8}\] The \(R\) charges of the dual theory are \(R(C)=R(c)=R(A)=R(a)=\frac{1}{3}\), \(R([Aa])=\frac{4}{3}\), \(R(v)=\frac{2}{3}\). The duality is verified by computing the Hilbert series and index of the dual theory.

## 6 Conclusion

In this paper, we performed a systematic study of AdS/CFT pairs in which the geometry side is given by the quotient singularity \(\mathbb{C}^{3}/G\), with \(G\) a finite subgroup of \(SU(3)\). The field theory is given by the McKay correspondence, where the quiver is derived easily from the representation theory of \(G\). Various physical properties, such as the quiver Hilbert series, the index, and Seiberg duality, are computed from both the field theory and the gravity side. Our computation confirms that the McKay quiver indeed gives one description of the SCFT on D3 branes probing the quotient singularity. The methods will be used to study AdS/CFT pairs induced from other 3d Gorenstein canonical singularities. Many other aspects of the AdS/CFT correspondence could be studied for those pairs, e.g. integrability [52] and non-conformal arrangements of the gauge group ranks [50]. Moreover, detailed KK studies on the gravity side are needed. Among the field theories studied in this paper, those corresponding to finite subgroups of the \(SU(2)\) and \(SO(3)\) groups are particularly interesting. In fact, the dual geometry belongs to the class of so-called compound du Val (cDV) singularities, whose defining equation is given by \(f_{ADE}(x,y,z)+wg(x,y,z,w)=0\). We will systematically study the AdS/CFT pairs corresponding to those singularities in a separate publication.

## Acknowledgement

We would like to thank Wen-Bin Yan for helpful discussions.

## Appendix A Other cases

Some other quiver gauge theories defined by finite subgroups of \(SU(3)\) will be discussed in Appendix A, in parallel with what we have done for the finite subgroups of \(SO(3)\) in section 5. Quiver gauge theories defined by the series \(E\)–\(L\) will be discussed in Appendix A.1, and the branching rules for the irreducible representations of the A series, the \(\Delta(3n^{2})\) series, and the \(\Delta(6n^{2})\) series will be reviewed in Appendices A.2, A.3, and A.4.

### A.1 \(E,F,G,H,I,J,K,L\) series

The coefficients of the tensor products of irreducible representations and the equations for the groups \(E,F,G,H,I,J,K\) are summarized in [13; 32]. We draw the McKay quivers and list the equations of \(\mathbb{C}^{3}/G\) for the \(E,F,G,H,I,J,K,L\) series in Table 14. The physical quantities computed from the quiver gauge theory are shown in Table 15.
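A quick consistency check on Table 15 below: at large \(N\), the leading central charge of an orbifold theory should scale like \(\frac{|G|}{4}N^{2}\), the \(\mathcal{N}=4\) value enhanced by the volume reduction of \(S^{5}/G\). Taking the orders of these \(SU(3)\) subgroups from the standard classification literature (an assumption on our part: \(|E|=108\), \(|F|=216\), \(|G|=648\), \(|H|=60\), \(|I|=168\), \(|J|=180\), \(|K|=504\), \(|L|=1080\)), the coefficients \(27,54,162,15,42,45,126,270\) appearing in Table 15 are reproduced:

```python
# leading N^2 coefficients of a, c read off from Table 15, and the group orders
# (the orders are quoted from the classification literature, not from this paper)
leading = {"E": 27, "F": 54, "G": 162, "H": 15, "I": 42, "J": 45, "K": 126, "L": 270}
order = {"E": 108, "F": 216, "G": 648, "H": 60, "I": 168, "J": 180, "K": 504, "L": 1080}

print(all(4 * leading[g] == order[g] for g in leading))  # True
```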
Table 14: McKay quivers and the corresponding equations of \(\mathbb{C}^{3}/G\) for the \(E,F,G,H,I,J,K,L\) series.
\(4_{3}\)\(3_{4}\)\(5_{3}\)\(3_{1}\)\(1_{2}\)\(4_{1}\)\(3_{6}\)\(4_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{3}\)\(3_{3}\)\(1_{0}\)\(3_{2}\)\(1_{1}\)\(3_{5}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{4}\)\(4_{3}\)\(8_{1}\)\(1_{0}\)\(3_{3}\)\(1_{0}\)\(3_{5}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{3}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{1}\)\(3_{4}\)\(4_{2}\)\(3_{5}\)\(5_{2}\)\(5_{1}\)\(5_{3}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{3}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{5}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{3}\)\(8_{1}\)\(1_{0}\)\(3_{3}\)\(1_{1}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{2}\)\(3_{5}\)\(5_{2}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{2}\)\(3_{5}\)\(5_{1}\)\(5_{2}\)\(5_{1}\)\(5_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{4}\)\(4_{1}\)\(3_{6}\)\(5_{1}\)\(5_{1}\)\(5_{2}\)\(3_{3}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{2}\)\(1_{1}\)\(3_{3}\)\(1_{2}\)\ \begin{tabular}{|c|c|c|} \hline Group & Field theory data & \\ \hline \multirow{3}{*}{\(E\)} & Central charges \((a,c)\) & \(a=-\frac{21}{8}+27N^{2}\) \\ & & \(c=\frac{7}{4}+27N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{(1-t^{22})(1-t^{6})}{(1-t^{2})(1-t^{6})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{6t^{6}}{1-t^{6}}+\frac{2t^{24}}{(1-t^{24})}\) \\ \hline \multirow{3}{*}{\(F\)} & Central charges \((a,c)\) & \(a=-3+54N^{2}\) \\ & & \(c=-2+54N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{(1-t^{24})}{(1-t^{4})(1-t^{6})(1-t^{6})(1-t^{6})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{6t^{6}}{1-t^{6}}+\frac{3t^{24}}{(1-t^{24})}-\frac{t^{2 }}{1-t^{12}}\) \\ \hline \multirow{3}{*}{\(G\)} & Central charges \((a,c)\) & \(a=-9/2+162N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(c=-3+162N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{(1-t^{6})(1-t^{6})}{(1-t^{6})(1-t^{6})^{2}}\) \\ \cline{2-3} & Single 
trace index & \({\cal I}_{s.t.}=\frac{4t^{6}}{1-t^{6}}+\frac{2t^{18}}{(1-t^{18})}+\frac{t^{2 4}}{(1-t^{24})}+\frac{2t^{30}}{(1-t^{36})}-\frac{t^{12}}{(1-t^{12})}\) \\ \hline \multirow{3}{*}{\(H\)} & Central charges \((a,c)\) & \(a=-1+15N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(c=-\frac{3}{4}+15N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{4/3}(1-t^{4/3})(1-t^{20/3})(1-t^{10})}{(1-t^{4})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{4t^{2}}{1-t^{2}}+\frac{2t^{4}}{(1-t^{4})}+\frac{t^{14}} {(1-t^{14})}-\frac{3(t^{2}-t^{4})}{(1-t^{3})(1-t^{3}y)(1-t^{5}y^{-1})}\) \\ \hline \multirow{3}{*}{\(I\)} & Central charges \((a,c)\) & \(a=-\frac{7}{6}+42N^{2}\) \\ \cline{2-3} & & \(c=-\frac{5}{6}+42N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{(1-t^{28})}{(1-t^{28})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{2t^{2}}{1-t^{2}}+\frac{t^{4}}{(1-t^{4})}+\frac{t^{6}}{(1 -t^{6})}+\frac{t^{3}}{(1-t^{8})}+\frac{t^{14}}{(1-t^{14})}-\frac{2(t^{2}-t^{3} )}{(1-t^{3}y)(1-t^{5}y^{-1})}\) \\ \hline \multirow{3}{*}{\(J\)} & Central charges \((a,c)\) & \(a=-\frac{45}{16}+45N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{24}}{(1-t^{4})^{2}(1-t^{8})(1-t^{10})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{6t^{6}}{(1-t^{6})}+\frac{2t^{24}}{(1-t^{12})}+\frac{t^{3 0}}{(1-t^{30})}\) \\ \hline \multirow{3}{*}{\(K\)} & Central charges \((a,c)\) & \(a=-\frac{27}{8}+126N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{1-t^{50}}{(1-t^{4})(1-t^{8})(1-t^{12})(1-t^{14})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{5t^{6}}{(1-t^{6})}+\frac{t^{2}}{(1-t^{12})}+\frac{t^{4 8}}{(1-t^{24})}+\frac{t^{42}}{(1-t^{42})}\) \\ \hline \multirow{3}{*}{\(L\)} & Central charges \((a,c)\) & \(a=-\frac{51}{16}+270N^{2}\) \\ \cline{2-3} & & \(c=-\frac{17}{8}+270N^{2}\) \\ \cline{2-3} & Quiver Hilbert series & \(H_{00}=\frac{(1-t^{60})}{(1-t^{4})(1-t^{8})(1-t^{20})(1-t^{30})}\) \\ \cline{2-3} & Single trace index & \({\cal I}_{s.t.}=\frac{6t^{6}}{(1-t^{6})}+\frac{t^{12}}{(1-t^{12})}+\frac{t^{2 4}}{(1-t^{24})}+\frac{t^{30}}{(1-t^{30})}\) \\ \hline \end{tabular} **Table 15**: central charges, Hilbert series and single trace index of series \(E,F,G,H,I,J,K,L\). ### \(A\) series: abelian subgroups The abelian groups are isomorphic to \(Z_{m}\times Z_{n}\). Set \(g_{k,l}=\operatorname{diag}(\zeta_{m}^{l},\zeta_{n}^{k},\zeta_{m}^{-l}\zeta_{n}^ {-k})\), with \(\zeta_{m},\zeta_{n}\) roots of unity with order \(m\) and \(n\). Define \[\rho_{ij}(g_{k,l})=\zeta_{m}^{ik}\zeta_{n}^{jl},\ \ i=1,\dots,m,\ \ j=1,\dots,n\] which gives all the one dimensional representations of the group. Let \(\pi=\rho_{1,0}\oplus\rho_{0,1}\oplus\rho_{-1,-1}\) be the natural representation s.t. \(\pi(g_{k,l})=g_{k,l}\). We have \[\pi\otimes\rho_{ij}=\rho_{i+1,j}\oplus\rho_{i,j+1}\oplus\rho_{i-1,j-1}\,.\] The quiver can be represented on a two-dimensional lattice drawn on the torus, which is the same as giving a dimer configuration. ### \(\Delta(3n^{2})\) series Those quiver gauge theories were discussed in [53]. \(\Delta(3n^{2})\) is generated by \(H_{n,n}\simeq\mathbb{Z}_{n}\otimes\mathbb{Z}_{n}\) and \(T=\begin{pmatrix}0&1&0\\ 0&0&1\\ 1&0&0\end{pmatrix}\). For \(n\in 3\mathbb{Z}\), there are 9 one-dimensional representations, and \(\frac{n^{2}-3}{3}\) three-dimensional irreducible representations. 
Denote the one-dimensional representations as \(\theta_{0,0,m},\theta_{\frac{n}{3},\frac{2n}{3},m},\theta_{\frac{2n}{3},\frac{ n}{3},m}\,(m=1,2,3)\), and three-dimensional representations as \(\theta_{i,j}\) where \((i,j)\in(1,2\dots,n)\times(1,2,\dots,n)\backslash\)\(\{(0,0),(\frac{n}{3},\frac{2n}{3}),(\frac{2n}{3},\frac{n}{3})\}\). The integers \(i,j\) are defined modulo \(n\), and there are equivalence relations among \(\theta_{i,j}\): \[\theta_{i,j}=\theta_{-i+j,-i}=\theta_{-j,i-j}\,. \tag{108}\] More explicitly, these representations are given by \[\begin{split}&\theta_{0,0,m}(T)=\xi_{3}^{m}\,,\quad\theta_{0,0,m }(g_{k,l})=1\,,\\ &\theta_{\frac{n}{3},\frac{2n}{3},m}(T)=\xi_{3}^{m}\,,\quad\theta _{\frac{2n}{3},\frac{n}{3},m}(T)=\xi_{3}^{m}\,,\\ &\theta_{\frac{n}{3},\frac{2n}{3},m}(g_{k,l})=\xi_{3}^{k+2l}\,, \quad\theta_{\frac{2n}{3},\frac{n}{3},m}(g_{k,l})=\xi_{3}^{2k+l}\,,\\ &\theta_{i,j}(T)=T\,,\quad\theta_{i,j}(g_{k,l})=\operatorname{ diag}(\xi_{n}^{ik+jl},\xi_{n}^{(j-i)k-il},\xi_{n}^{-jk+(i-j)l}),\end{split} \tag{109}\] where \(\xi_{k}=e^{2\pi i/k}\) and \(g_{k,l}=\operatorname{diag}(\xi_{n}^{k},\xi_{n}^{l},\xi_{n}^{-k}\xi_{n}^{-l}) \in H_{n,n}\). Then the natural representation \(\pi\) can be given by \[\pi(g_{k,l})=\operatorname{diag}(\xi_{m}^{k},\xi_{m}^{l},\xi_{m}^{-k-l}),\pi( T)=T, \tag{110}\] and the decompositions of the tensor product are \[\pi\otimes\theta_{0,0,m}=\theta_{0,1,m},\ \ \pi\otimes\theta_{i,j}= \theta_{i-1,j-1}+\theta_{i,j+1}+\theta_{i+1,j},\] \[\pi\otimes\theta_{\frac{n}{3},\frac{2n}{3},m}=\theta_{\frac{n}{3} +1,\frac{2n}{3},m},\ \ \pi\otimes\theta_{\frac{2n}{3},\frac{n}{3},m}=\theta_{\frac{2n}{3}+1,\frac{n}{3 },m}\,.\] For \(n\not\in 3\mathbb{Z}\), there are 3 one-dimensional representations, and \(\frac{n^{2}-1}{3}\) three-dimensional irreducible representations. The explicit representations are the same as (109) with the second line excluded. And the tensor product is also the same as before with the \(\pi\otimes\theta_{\frac{n}{3},\frac{2n}{3},m}=\theta_{\frac{n}{3}+1,\frac{2n}{3},m}\) terms excluded. The quiver diagrams for n=6 and 7 are shown in the Figure 14: ### \(\Delta(6n^{2})\) series \(\Delta(6n^{2})\) is generated by \(H_{n,n},T\) and \(R=\begin{pmatrix}a&0&0\\ 0&0&b\\ 0&c&0\end{pmatrix}\) where \(abc=-1\). If \(3\mid n\), there are 2 one-dimensional representations \(\theta_{0,0,1},\theta_{0,0,2}\), 4 two-dimensional representations \(\theta_{0,0,3},\theta_{\frac{m}{3},\frac{2m}{3},n_{1}}(n_{1}=1,2,3)\), \(2(m-1)\) three-dimensional representations \(\theta_{i,0,n_{2}}(n_{2}=1,2)\) and \(\frac{m^{2}-2m}{6}\) six-dimensional irreducible representations \(\theta_{i,j}\). If \(3\nmid n\), there are 2 one-dimensional representations \(\theta_{0,0,1},\theta_{0,0,2}\), 1 two-dimensional \(\theta_{0,0,3}\), \(2(m-1)\) three-dimensional representation \(\theta_{i,0,n_{2}}(n_{2}=1,2)\) and \(\frac{m^{2}-3m+2}{6}\) six-dimensional irreducible representations \(\theta_{i,j}\). 
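The representation counts quoted above can be checked against the group orders, since the squared dimensions of the irreducible representations must sum to \(|G|\). For \(\Delta(3n^{2})\) this works for both residues of \(n\) mod 3, and for \(\Delta(6n^{2})\) with \(3\nmid n\) (reading the \(m\) in the counts above as \(n\), which is an assumption of ours). A small check with the cases drawn in Figures 14 and 15:

```python
def dims_delta_3n2(n):
    # representation content of Delta(3n^2) as quoted in the text
    if n % 3 == 0:
        return [1] * 9 + [3] * ((n * n - 3) // 3)
    return [1] * 3 + [3] * ((n * n - 1) // 3)

def dims_delta_6n2_n_not_div_3(n):
    # representation content of Delta(6n^2) for 3 not dividing n,
    # reading the m in the quoted counts as n
    return [1] * 2 + [2] + [3] * (2 * (n - 1)) + [6] * ((n * n - 3 * n + 2) // 6)

for n in (6, 7):  # Delta(108) and Delta(147) of Figure 14
    print(3 * n * n, sum(d * d for d in dims_delta_3n2(n)))
print(150, sum(d * d for d in dims_delta_6n2_n_not_div_3(5)))  # Delta(150) of Figure 15
```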
The natural representation is given by \(\pi=\theta_{1,0,2}\), and the tensor products are \[\pi\otimes\theta_{0,0,1}=\theta_{1,0,1},\quad\pi\otimes\theta_{0,0,2}=\theta_{1,0,2},\quad\pi\otimes\theta_{0,0,3}=\theta_{1,0,1}\oplus\theta_{1,0,2},\] \[\pi\otimes\theta_{i,0,n_{2}}=\theta_{i+1,0,n_{2}}\oplus\theta_{-i,1-i},\ \text{for}\ i\neq 0,\,n_{2}=1,2,\] \[\pi\otimes\theta_{\frac{n}{3},\frac{2n}{3},2}=\theta_{\frac{n}{3}+1,\frac{2n}{3},2},\quad(\pi\otimes\theta_{\frac{n}{3},\frac{2n}{3},1}=\theta_{\frac{n}{3}+1,\frac{2n}{3},1},\ \pi\otimes\theta_{\frac{n}{3},\frac{2n}{3},3}=\theta_{\frac{n}{3}+1,\frac{2n}{3},3}),\] \[\pi\otimes\theta_{i,j}=\theta_{i-1,j-1}+\theta_{i,j+1}+\theta_{i+1,j},\ \text{for}\ i,j\notin\{0,\tfrac{n}{3},\tfrac{2n}{3}\}.\] The relations in parentheses only appear in the case \(3\mid n\).

Figure 14: Quiver diagrams for \(\Delta(108)\) (left) and \(\Delta(147)\) (right).

Figure 15: McKay quivers of \(\Delta(150)\) (left) and \(\Delta(216)\) (right).
2307.09774
Floquet Nonequilibrium Green's functions with Fluctuation-Exchange Approximation: Application to Periodically Driven Capacitively Coupled Quantum Dots
We study the dynamics of two capacitively coupled quantum dots, each coupled to a lead. A Floquet Green's function approach described the system's dynamics, with the electron-electron interactions handled with the fluctuation-exchange approximation. While electrons cannot move between the separate sections of the device, energy transfer occurs with the periodic driving of one of the leads. This process was found to be explained with four stages. The energy transfer was also found to be sensitive to the driving frequency of the leads, with an optimal frequency corresponding to the optimal completion of the four stages of the identified process.
Thomas D. Honeychurch, Daniel S. Kosov
2023-07-19T06:27:34Z
http://arxiv.org/abs/2307.09774v2
Floquet Nonequilibrium Green's functions with Fluctuation-Exchange Approximation: Application to Periodically Driven Capacitively Coupled Quantum Dots ###### Abstract We study the dynamics of two capacitively coupled quantum dots, each coupled to a lead. A Floquet Green's function approach described the system's dynamics, with the electron-electron interactions handled with the fluctuation-exchange approximation. While electrons cannot move between the separate sections of the device, energy transfer occurs with the periodic driving of one of the leads. This process was found to be explained with four stages. The energy transfer was also found to be sensitive to the driving frequency of the leads, with an optimal frequency corresponding to the optimal completion of the four stages of the identified process. ## I Introduction Capacitive coupling offers a unique tool for investigating and designing open quantum systems. Of particular interest are systems where energy transport occurs between regions without the addition of charge transport. Interesting phenomena that utilize capacitive coupling include coulomb drag [1; 2] and heat rectification [3]. Capacitively coupled quantum dots offer a simple testbed for such phenomena, with heat current across capacitively coupled quantum dots [4; 5] and their use in energy harvesting devices [6; 7; 8; 9; 10] having been investigated. The addition of time-dependent drivings of lead and gate voltages offers a further avenue for exploring particle and energy transport within quantum devices, most usually in the case of periodic drivings [11]: energy transport and entropy production of a noninteracting single electronic level with a periodically modulated gate voltage has been discussed [12; 13]; the periodic modulation of parameters has also been utilized to investigate nanoscale thermal machines [14; 15; 16]; and the AC linear response of both particle and heat current has been investigated for a mesoscopic capacitor [17]. In the context of capacitively coupled devices, the electrothermal admittance has been calculated for a nanoscale parallel plate capacitor in the linear response regime [18], and, most recently, the energy transfer in a system of capacitively coupled dots was investigated when the gate voltage of one dot is modulated periodically [19]. This paper investigates the energy transfer between two capacitively coupled quantum dots, each connected to a respective lead [see Fig. 1]. We study the energy and particle transport within the system due to the periodic driving of one lead's energies. While particles cannot move between systems, the capacitive coupling between the dots allows energy transfer through the system. We make use of a Floquet nonequilibrium Green's functions approach [20; 21; 22; 23; 24], allowing for the exploration of nonadiabatic drivings. The Coulomb interaction is handled with self-consistent perturbation theory, using the fluctuation-exchange (FLEX) approximation [25]. A self-consistent approximation, FLEX includes both particle-particle and particle-hole T-matrix and GW terms [see Fig. 2]. FLEX subsumes the advantages of its constituent terms, making it applicable to a wide variety of interaction strengths and occupations [25]. It was found that the average energy current through the system is sensitive to the driving frequency, with a frequency corresponding to the maximum energy transference observed. This energy transfer was found to be described by a four-stage process. 
The effects of the other parameters were also investigated. The paper is organized as follows: Section II lays out theory and implementation; Section III investigates the energy transfer while one lead is driven periodically; and within section IV the paper's results are summarized and extensions suggested. Natural units for quantum transport are used throughout the paper, with \(\hbar\), \(e\), and \(k_{B}\) set to unity. ## II Theory ### Hamiltonian and NEGF For simplicity, we focus on two spinless dots, \(A\) and \(B\), coupled to an associated lead, labeled \(\alpha\) and \(\beta\), and coupled capacitively: \[H(t)=H_{A}+H_{B}+H_{int} +H_{A\alpha}+H_{B\beta} \tag{1}\] \[+H_{\alpha}(t)+H_{\beta},\] \[H_{S}=\epsilon_{S}\vec{d}_{S}^{\dagger}\hat{d}_{S}, H_{int} =U\vec{d}_{A}^{\dagger}\hat{d}_{A}\hat{d}_{B}^{\dagger}\hat{d}_{B}, \tag{2}\] Figure 1: Schematic representation of the model investigated. The two quantum dots are coupled to noninteracting electron reservoirs and coupled to each other by Coulomb interaction. Within the investigation, energies of reservoir \(A\) are driven harmonically, resulting in a nonzero current between the dots and reservoirs and energy transfer between the reservoirs. \[H_{S\sigma}=\sum_{k}t_{k\sigma S}\;\hat{c}_{k\sigma}^{\dagger}\hat{d}_{S}+t_{k \sigma S}^{*}\;\hat{d}_{S}^{\dagger}\hat{c}_{k\sigma}, \tag{3}\] and \[H_{\sigma}=\sum_{k}\left(\epsilon_{k\sigma}+\psi_{\sigma}\left(t\right)\right) \hat{c}_{k\sigma}^{\dagger}\hat{c}_{k\sigma}. \tag{4}\] Here, \(S\) refers to a dot, \(\sigma\) refers to its corresponding lead, and \(\bar{S}\) and \(\bar{\sigma}\) refer to the opposing dot and lead, respectively. The two interacting dots' energies are given by \(\epsilon_{S}\), and the electron-electron repulsion between the sites, given by \(H_{int}\), has a strength \(U\). The coupling of the quantum dots to their respective leads is governed by \(H_{k\sigma S}\), with \(t_{k\sigma S}\) denoting a hopping between the lead site \(k\sigma\) and the dot \(S\). The leads are taken as noninteracting, with the explicit time-dependence entering the Hamiltonian via \(\psi_{\sigma}(t)\), which varies the energies \(\epsilon_{k\sigma}\). To model the system out of equilibrium, we make use of nonequilibrium Green's functions: \[G_{S}(\tau,\tau^{\prime})=-i\left\langle T_{c}\left(d_{S}\left(\tau\right)d_{ S}^{\dagger}\left(\tau^{\prime}\right)\right)\right\rangle, \tag{5}\] with the equation of motion, \[\begin{gathered}\left(i\frac{\partial}{\partial\tau}-\epsilon_{ S}\right)G_{S}\left(\tau,\tau^{\prime}\right)\\ -\int_{c}d\tau_{1}\;\Sigma_{S}\left(\tau,\tau_{1}\right)G_{S} \left(\tau_{1},\tau^{\prime}\right)=\delta_{c}\left(\tau-\tau^{\prime}\right), \end{gathered} \tag{6}\] where the self-energy term consists of contributions from the associated lead and the interaction between the quantum dots: \[\Sigma_{S}\left(\tau,\tau^{\prime}\right)=\Sigma_{\sigma}\left(\tau,\tau^{ \prime}\right)+\Sigma_{S}^{int}\left(\tau,\tau^{\prime}\right). 
\tag{7}\] To account for the capacitive coupling between the dots, we make use of self-consistent perturbation theory: \[\Sigma_{S}^{int}\left(\tau,\tau^{\prime}\right)=-iUG_{\bar{S}}\left(\tau,\tau ^{+}\right)\delta(\tau,\tau^{\prime})+\Sigma_{S}^{corr}(\tau,\tau^{\prime}), \tag{8}\] where the correlations were investigated with the FLEX approximation[25]: \[\begin{gathered}\Sigma_{S}^{FLEX}(\tau,\tau^{\prime})=\Sigma_{S} ^{TPP}(\tau,\tau^{\prime})+\Sigma_{S}^{TPH}(\tau,\tau^{\prime})\\ +\Sigma_{S}^{GW}(\tau,\tau^{\prime})-2\Sigma_{S}^{SB}(\tau,\tau^{ \prime}).\end{gathered} \tag{9}\] The single-bubble, GW, particle-particle, and particle-hole T-matrix approximations follow standard definitions here [see Fig. 2]. The single-bubble approximation is given by \[\Sigma_{S}^{SB}\left(\tau,\tau^{\prime}\right)=UG_{S}\left(\tau,\tau^{\prime} \right)G_{S}\left(\tau^{\prime},\tau\right)UG_{S}\left(\tau,\tau^{\prime} \right). \tag{10}\] The \(GW\) self-energy is given by \[\Sigma_{S}^{GW}(\tau,\tau^{\prime})=iW_{S}^{ns}\left(\tau,\tau^{\prime} \right)G_{S}\left(\tau,\tau^{\prime}\right), \tag{11}\] \[\begin{gathered} W_{S}^{ns}\left(\tau,\tau^{\prime}\right)=\Phi_{S }\left(\tau,\tau^{\prime}\right)\\ +\int_{c}\tau_{1}\int_{c}\tau_{2}\;\Phi_{S}\left(\tau,\tau_{1} \right)P_{S}\left(\tau_{1},\tau_{2}\right)W_{S}^{ns}\left(\tau_{2},\tau^{\prime }\right),\end{gathered} \tag{12}\] \[\Phi_{S}\left(\tau,\tau^{\prime}\right)=UP_{\overline{S}}\left(\tau,\tau^{ \prime}\right)U \tag{13}\] and \[P_{S}\left(\tau,\tau^{\prime}\right)=-iG_{S}\left(\tau,\tau^{\prime}\right)G_ {S}\left(\tau^{\prime},\tau\right). \tag{14}\] The particle-particle T-matrix self-energy is given by \[\Sigma_{S}^{PP}\left(\tau,\tau^{\prime}\right)=i\,T^{PP}(\tau,\tau^{\prime})G _{S}\left(\tau^{\prime},\tau\right), \tag{15}\] \[\begin{gathered} T^{PP}\left(\tau,\tau^{\prime}\right)=-UG^{H} \left(\tau,\tau^{\prime}\right)U\\ +\int d\tau_{1}UG^{H}\left(\tau,\tau_{1}\right)T^{PP}\left(\tau_{1}, \tau^{\prime}\right)\end{gathered} \tag{16}\] and \[G^{H}(\tau,\tau^{\prime})=iG_{A}(\tau,\tau^{\prime})G_{B}(\tau,\tau^{\prime}). \tag{17}\] The particle-hole T-matrix self-energy is given by \[\Sigma_{S}^{PH}\left(\tau,\tau^{\prime}\right)=i\,T_{S}^{PH}(\tau,\tau^{ \prime})G_{\overline{S}}\left(\tau,\tau^{\prime}\right), \tag{18}\] \[\begin{gathered} T_{S}^{PH}\left(\tau,\tau^{\prime}\right)=UG_{ \overline{S}}^{F}\left(\tau,\tau^{\prime}\right)U\\ -\int d\tau_{1}UG_{S\overline{S}}^{F}\left(\tau,\tau_{1}\right)T_{S}^{ PH}\left(\tau_{1},\tau^{\prime}\right)\end{gathered} \tag{19}\] and \[G_{S\overline{S}}^{F}(\tau,\tau^{\prime})=-iG_{S}(\tau,\tau^{\prime})G_{\bar{S}} (\tau^{\prime},\tau). \tag{20}\] Figure 2: The Feynman diagrams considered within the investigation. Here, \(S\) refers to the red fermionic line and corresponds to \(G_{S}(\tau,\tau^{\prime})\). The blue fermionic line corresponds to the opposing dot’s Green’s function, \(G_{S}(\tau,\tau^{\prime})\). 
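As a minimal illustration of the building blocks just defined, the sketch below evaluates the polarization bubble of Eq. (14) and the FLEX combination of Eq. (9) on a discretized two-time grid. The matrix-valued sampling and the helper names are our assumptions for illustration only; the actual calculation is carried out on the Keldysh contour and, later, in the Floquet representation.

```python
import numpy as np

def polarization_bubble(G):
    """P_S(t, t') = -i G_S(t, t') G_S(t', t) of Eq. (14), with G sampled on a
    uniform two-time grid as a complex matrix G[i, j] = G_S(t_i, t_j)."""
    return -1j * G * G.T

def flex_self_energy(sigma_pp, sigma_ph, sigma_gw, sigma_sb):
    """Eq. (9): particle-particle and particle-hole T-matrix plus GW terms,
    minus twice the single-bubble term that would otherwise be counted in
    all three channels."""
    return sigma_pp + sigma_ph + sigma_gw - 2.0 * sigma_sb
```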
For the leads, time dependence within the energies, \(\epsilon_{k\sigma}+\psi_{\sigma}(t)\), results in an additional phase to the otherwise static lead self-energies: \[\begin{split}\Sigma_{\sigma}(t,t^{\prime})&=\Sigma_{ \sigma}^{\prime}(t-t^{\prime})e^{-i\int_{t^{\prime}}^{t}dt_{1}\psi_{\sigma}(t_{ 1})}\\ &=e^{-i\Psi_{\sigma}(t)}\Sigma_{\sigma}^{\prime}(t-t^{\prime})e^{ i\Psi_{\sigma}(t^{\prime})},\end{split} \tag{21}\] where \(\Psi_{\sigma}(t)\) is the anti-derivative of \(\psi_{\sigma}(t)\) and \(\Sigma_{\sigma}^{\prime}(t-t^{\prime})\) is the self-energies of the leads without driving, given in Fourier space with the wide-band approximation as \[\Sigma_{\sigma}^{R/A}\left(\omega\right)=\mp\frac{i}{2}\Gamma_{\sigma}, \tag{22}\] \[\Sigma_{\sigma}^{<}\left(\omega\right)=if_{\sigma}\left(\omega\right)\Gamma_{ \sigma}, \tag{23}\] \[\Sigma_{\sigma}^{>}\left(\omega\right)=-i\left(1-f_{\sigma}\left(\omega \right)\right)\Gamma_{\sigma}, \tag{24}\] where the Fermi distribution follows the standard definition of \(f_{\sigma}(\omega)=1/\left[\exp\left(\left(\omega-\mu_{\sigma}\right)/T \right)+1\right]\). The particle current is given by, \[\begin{split} I_{\sigma}^{P}(t)=2Re\left\{\int_{-\infty}^{ \infty}dt_{1}& Tr\left[G_{S}^{<}(t,t_{1})\Sigma_{\sigma}^{A}(t_{ 1},t)\right.\right.\\ &\left.\left.+G_{S}^{R}(t,t_{1})\Sigma_{\sigma}^{<}(t_{1},t) \right]\right\},\end{split} \tag{25}\] and the occupation of the dots is given by \[n_{S}(t)=-iG_{S}^{<}\left(t,t\right), \tag{26}\] where continuity dictates that \(I_{\sigma}^{P}(t)=-\frac{dn_{S}(t)}{dt}\). To calculate the energy that passes from the leads into the system, we use \[\begin{split} I_{\sigma}^{E}(t)=-i\langle\left[H(t),H_{\sigma}(t )\right]_{-}\rangle\\ =-2\operatorname{Re}\left\{\int dt_{1}\left[i\frac{d}{dt}\Sigma _{\sigma}^{<}(t,t_{1})\right]G_{S}^{A}(t_{1},t)\right.\\ \left.+\left[i\frac{d}{dt}\Sigma_{\sigma}^{R}(t,t_{1})\right]G_{S }^{<}(t_{1},t)\right\}.\end{split} \tag{27}\] Here, energy moves from the leads into the system and system-lead coupling, resulting in the continuity equation \[\begin{split} I_{A}^{E}(t)+I_{B}^{E}(t)=\frac{d}{dt}\left( \langle H_{A\alpha}\rangle+\langle H_{B\beta}\rangle\right.\\ \left.+\langle H_{A}\rangle+\langle H_{B}\rangle+E_{int}(t) \right)\end{split} \tag{28}\] where \[\begin{split}\langle H_{S\sigma}\rangle=2\operatorname{Im}\left\{ \int dt_{1}\Sigma_{\sigma}^{<}(t,t_{1})G_{S}^{A}(t_{1},t)\right.\\ +\left.\Sigma_{\sigma}^{R}(t,t_{1})G_{S}^{<}(t_{1},t)\right\} \end{split} \tag{29}\] and \[\begin{split} E_{int}(t)=\sum_{S=A,B}-\frac{i}{2}\left[\int dt_{ 1}\;\Sigma_{S}^{int,R}\left(t,t_{1}\right)G_{S}^{<}\left(t_{1},t\right)\right. \\ \left.+\Sigma_{S}^{int,<}\left(t,t_{1}\right)G_{S}^{A}\left(t_{1}, t\right)\right].\end{split} \tag{30}\] Taking the time-average of equation (28), gives \[\bar{I}_{A}^{E}=-\bar{I}_{B}^{E}, \tag{31}\] where \(\bar{O}=\lim_{\tau\rightarrow\infty}\left(\int_{0}^{\tau}O(t)\;dt\right)/\tau\). ### Floquet approach We use a Floquet nonequilibrium Green's function approach to solve the equations of motion, assuming the periodicity of the system's dynamics [20; 21; 22; 23; 24]. 
Within this context, Green's functions are periodic in the central time, \(T=\frac{t+t^{\prime}}{2}\): \[\begin{split} A(t,t^{\prime})=A\left(T=\frac{t+t^{\prime}}{2}, \tau=t-t^{\prime}\right)\\ =\sum_{n=-\infty}^{\infty}A(\tau,n)e^{i\Omega nT}\end{split} \tag{32}\] and \[A\left(\omega,n\right)=\frac{1}{P}\int_{0}^{P}dT\;e^{-i\Omega nT}\int_{-\infty }^{\infty}d\tau e^{i\omega\tau}A(T,\tau), \tag{33}\] which allows us to cast the terms \(C_{+}(t,t^{\prime})=A(t,t^{\prime})B(t,t^{\prime})\) and \(C_{-}(t,t^{\prime})=A(t,t^{\prime})B(t^{\prime},t)\) as \[\begin{split} C_{\pm}\left(\omega,n\right)=\\ \sum_{m=-\infty}^{\infty}\int\frac{d\omega^{\prime}}{2\pi}A\left( \omega^{\prime},m\right)& B\left(\pm\omega\mp\omega^{\prime},n-m \right)\\ &=\left[A\;\square_{\pm}\;B\right]\left(\omega,n\right).\end{split} \tag{34}\] Making a further transformation of the two-time objects into the Floquet matrix form \[\widetilde{A}\left(\omega,m,n\right)=A\left(\omega+\frac{\Omega}{2}\left(m+n \right),n-m\right) \tag{35}\] allows for convolutions of the type \(C(t,t^{\prime})=\int dt_{1}A(t,t_{1})B(t_{1},t^{\prime})\) to be recast as matrix equations \[\begin{split}\widetilde{C}\left(\omega,m,n\right)=\sum_{r=- \infty}^{\infty}&\widetilde{A}\left(\omega,m,r\right)\widetilde{B} \left(\omega,r,n\right)\\ =&\left[\widetilde{A}\circ\widetilde{B}\right]\left( \omega,m,n\right).\end{split} \tag{36}\] Taking real-time projections, we transform the equations of motion into matrix equations with the above transformations: \[\begin{split}\left(\omega+\Omega m-\epsilon_{S}\right)\widetilde{G }_{S}^{R/A}\left(\omega,m,n\right)=\\ \delta_{mn}+\left[\widetilde{\Sigma}_{S}^{R/A}\circ\widetilde{G }_{S}^{R/A}\right]\left(\omega,m,n\right),\end{split} \tag{37}\] \[\begin{split}\widetilde{G}_{S}^{<}(\omega,m,n)=\\ \left[\widetilde{G}_{S}^{R}\circ\widetilde{\Sigma}_{S}^{<}\circ \widetilde{G}_{S}^{A}\right]\left(\omega,m,n\right).\end{split} \tag{38}\] The interaction self-energies can be cast in terms of Fourier coefficients: for GW, \[\begin{split}\Sigma_{GW,S}^{R/A}\left(\omega,n\right)\\ =i\left[W_{S}^{ns,<}\square_{+}\;G_{S}^{R/A}+W_{S}^{ns,R/A}\; \square_{+}\;G_{S}^{>}\right]\left(\omega,n\right),\end{split} \tag{39}\] \[\Sigma_{S}^{PP,</>}\left(\omega,n\right)=i\left[T^{PP,</>}\square_{-}G_{S}^{ \nicefrac{{2}}{{<}}}\right]\left(\omega,n\right), \tag{46}\] \[\widetilde{T}^{PP,R/A}\left(\omega,m,n\right)=-U\widetilde{G}^{H,R/A}\left( \omega,m,n\right)U \tag{47}\] \[\widetilde{T}^{PP,</>}\left(\omega,m,n\right)=-U\widetilde{G}^{H,</>}\left( \omega,m,n\right)U \tag{48}\] \[+U\left[\widetilde{G}^{H,R}\circ\widetilde{T}^{PP,</>}\right]\left(\omega,m,n \right)\] \[+U\left[\widetilde{G}^{H,</>}\circ\widetilde{T}^{PP,A}\right] \left(\omega,m,n\right),\] \[G^{H,R/A}(\omega,n) \tag{49}\] \[=i\left[G_{A}^{<}\square_{+}G_{B}^{R/A}+G_{A}^{R/A}\square_{+}G_{B}^{>} \right],(\omega,n),\] and \[G^{H,</>}(\omega,n) \tag{50}\] and for the particle-hole T-matrix, \[\Sigma_{S}^{PH,R/A}\left(\omega,n\right) \tag{51}\] \[\Sigma_{S}^{PH,<}\square_{+}G_{S}^{R/A}+T_{S}^{PH,R/A}\square_{+}G_{S}^{>} \right]\left(\omega,n\right),\] \[\Sigma_{S}^{PH,</>}\left(\omega,n\right)=i\left[T_{S}^{PH,</>}\square_{+}G_{S} ^{</>}\right]\left(\omega,n\right), \tag{52}\] \[\widetilde{T}_{S}^{PH,R/A}\left(\omega,m,n\right)=U\widetilde{G}_{SS}^{F,R/A} \left(\omega,m,n\right)U \tag{53}\] Figure 3: Observables measured over a period with driving of the left lead. 
Here, red lines correspond to the observables relating to the driven section, while blue lines refer to the undriven section of the model. Here, the energy current refers to the energy transfer from the lead and coupling region into the central region. The black dashed line is given by \(\cos(\Omega t)\), while the colored dashed lines of Fig. 2(c) correspond to the averages of similarly colored energy currents. The parameters are \(\Gamma_{\alpha}=\Gamma_{\beta}=0.5\), \(U=0.6\), \(T=0.001\), \(\mu_{\alpha}=\mu_{\beta}=0.3\), \(\epsilon_{A}=\epsilon_{B}=0\), \(\Delta_{\alpha}=0.2\) and \(\Omega=0.32\). The discretization was taken at \(0.01\), the bounds of integration between \(-40\) and \(40\), and \(49\) Fourier coefficients were used. The convergence for both dots was taken as \(10^{-4}\). \[\widetilde{T}_{S}^{PH,</>} \left(\omega,m,n\right)=U\widetilde{G}_{S\widetilde{S}}^{F,</>}\left( \omega,m,n\right)U \tag{54}\] \[-U\left[\widetilde{G}_{S\widetilde{S}}^{F,</>}\circ\widetilde{T}_{S }^{PH,A}\right]\left(\omega,m,n\right)\] \[-U\left[\widetilde{G}_{S\widetilde{S}}^{F,R}\circ\widetilde{T}_{S }^{PH,</>}\right]\left(\omega,m,n\right),\] \[G_{S\widetilde{S}}^{F,R/A}(\omega,n) \tag{55}\] \[=-i\left[G_{S}^{<}\ \square_{-}\ G_{S}^{A/R}+G_{S}^{A/A}\ \square_{-}\ G_{S}^{<}\right](\omega,n),\] and \[G_{S\widetilde{S}}^{F,</>}(\omega,n) \tag{56}\] \[=-i\left[G_{S}^{</>}\ \square_{-}\ G_{S}^{>/<}\right](\omega,n).\] The time-dependent driving of the leads' energies was taken as sinusoidal, \[\psi_{\alpha}(t)=\Delta_{\alpha}\cos(\Omega t), \tag{57}\] giving \(\Psi_{\alpha}(t)=\left(\Delta_{\alpha}/\Omega\right)\sin\left(\Omega t\right)\), which can be expanded with the Jacobi-Anger expansion \[e^{i\frac{\Delta_{\alpha}}{\Omega}\sin(\Omega t)}=\sum_{n=-\infty}^{n=\infty} J_{n}\left(\frac{\Delta_{\alpha}}{\Omega}\right)e^{in\Omega t}, \tag{58}\] where \(J_{n}(x)\) are Bessel functions of the first kind. We can recast equation (21), as a Floquet matrices, \[\bar{\Sigma}_{\sigma}\left(\omega,m,n\right)=\left[\widetilde{S}_{\sigma} \circ\widetilde{\Sigma}_{\sigma}^{\prime}\circ\widetilde{S}_{\sigma}^{ \dagger}\right]\left(\omega,m,n\right), \tag{59}\] where \(\widetilde{S}_{\sigma}\left(m,n\right)=J_{m-n}\left(\Delta_{\sigma}/\Omega\right)\). In a similar manner to the equations of motion, the observables can be cast in terms of Fourier coefficients: \[I_{\sigma}^{P}(n-m)=\widetilde{I}_{\sigma}^{P}\left(m,n\right) \tag{60}\] \[=2\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\ \left[\widetilde{G}_{S}^{R} \circ\widetilde{\Sigma}_{\sigma}^{<}+\widetilde{G}_{S}^{<}\circ\widetilde{ \Sigma}_{\sigma}^{A}\right](\omega,m,n),\] \[n_{S}\left(n-m\right)=\widetilde{n}_{S}\left(m,n\right)=-i\int_{-\infty}^{ \infty}\frac{d\omega}{2\pi}\widetilde{G}_{S}^{<}(\omega,m,n), \tag{61}\] and \[I_{\sigma}^{E}(n-m)=\widetilde{I}_{\sigma}^{E}\left(m,n\right) \tag{62}\] \[=-2\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\ \left(\omega+m\Omega \right)\left[\widetilde{\Sigma}_{\sigma}^{<}\circ\widetilde{G}_{S}^{A}+ \widetilde{\Sigma}_{\sigma}^{R}\circ\widetilde{G}_{S}^{<}\right](\omega,m,n).\] ### Implementation To solve the equations of motion, we invert equations (37),(43),(44),(47),(48),(53) and (54) by first truncating the Floquet matrices, as defined in Eq. (35). Equations (39), (40), (41), (42),(45), (46), (49), (50), (51), (52), (55) and (56) are calculated by using the Fourier coefficient taken from the appropriate Floquet matrices. 
These Fourier coefficients were taken from the terms of the Floquet matrices given by \(n+m=0,-1\) of equation (35). The self-energy terms were then transformed back to Floquet matrix form, using equation (35), for use in the equations of motion. The self-consistent process begins with calculating the noninteracting case followed by the interaction self-energies, as specified above. The interaction self-energies are then used to calculate successive iterations of the Green's functions before the following convergence is satisfied: \[\frac{\sum_{m}\left|n_{m}^{k}-n_{m}^{k-1}\right|}{\sum_{m}\left|n_{m}^{k} \right|}\leq\delta, \tag{63}\] where \(n_{m}^{k}\) is the \(k\)th iteration of the \(m\)th Fourier coefficient of the occupation in question, with \(\delta\) as the convergence. This convergence was satisfied for each dot's occupation. ## III Results and discussion Within the system, energy moves from the driven to the undriven section via the capacitive coupling of the two dots. In particular, energy transfer occurs when the driven dot is occupied at a higher energy and unoccupied at a lower energy, and the undriven dot is unoccupied at a higher energy and occupied at a lower energy. This process is complicated because the energies at which a dot is occupied are informed by the occupation of the opposing dot. This relationship is transparent in the Hartree approximation, where the average energy current into the central region due to current into the dot \(S\) is given by \[\widetilde{I}_{\sigma}^{E}=U\int_{0}^{P}\frac{dt}{P}\ n_{S}(t)I_{\sigma}^{P}(t). \tag{64}\] These observations, coupled with the sinusoidal nature of the driving, suggest the following approximate cyclic stages in the energy transfer process: 1. Following stage 4, charge moves onto the driven dot while the undriven dot is largely occupied. 2. The driven dot is largely occupied as charge moves off the undriven dot. 3. Charge moves off the driven dot as the undriven dot is largely unoccupied. 4. The driven dot is largely unoccupied as charge moves onto the undriven dot. Stages one and two capture the movement of higher energy electrons moving onto the driven dot and off the undriven dot, resulting in the energy transfer from the driven to the undriven region. Stages three and four capture the lower energy electrons moving off the driven dot and onto the undriven dot, resulting in a lower energy transfer than the first two steps in the cycle in the opposite direction. An example of this can be seen in Fig. 3, where the regions in which the stages are most prominent have been highlighted. The amount of energy transferred through the system is sensitive to the driving frequency, with the maximum transference a result of the balancing of stages of energy transfer [see in Fig. 4]. As the driving frequency decreases, electrons move between the dots and their respective leads quicker than the energy transfer stages can complete. In particular, the dots that remain largely occupied in stages one and two and largely unoccupied in processes three and four begin to change in occupation, resulting in less pronounced changes in the opposing dot's occupation energy and outgoing energy current. Conversely, as the driving frequency increases, the charge has less time to move between the dots and their respective leads, resulting in smaller maxima and minima for the occupations over the period, which reduces the opposing dot's occupation energy and its outgoing energy current. 
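Returning to the Hartree expression of Eq. (64), the period-averaged energy current follows directly from the occupation and particle current sampled over one period. The sketch below (function name and toy signals are illustrative, not taken from the paper) evaluates this average and shows that only the in-phase component of \(n_{S}(t)\) and \(I_{\sigma}^{P}(t)\) contributes:

```python
import numpy as np

def hartree_energy_current(n_S, I_P, U):
    """Period average of Eq. (64): U * (1/P) * integral_0^P n_S(t) I_P(t) dt,
    approximated from uniform samples of n_S(t) and I_P(t) over one period."""
    n_S, I_P = np.asarray(n_S), np.asarray(I_P)
    return U * np.mean(n_S * I_P)

# Toy illustration: signals 90 degrees out of phase average to zero, while the
# in-phase components transfer net energy over a period.
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
print(hartree_energy_current(0.5 + 0.1 * np.cos(t), 0.2 * np.sin(t), U=0.4))  # ~0
print(hartree_energy_current(0.5 + 0.1 * np.cos(t), 0.2 * np.cos(t), U=0.4))  # ~0.004
```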
The driving profile and the system parameters beyond the driving frequency also inform the effectiveness of the energy transfer process. As expected, given Eq. (64), increases in the interaction strength \(U\) were found to increase the average energy current through the system [see Fig. 4b]. It was also found that significant asymmetry of the coupling strengths diminished energy flow through the system [see Fig. 4c]. This is due to uneven transfer rates, \(\tau_{\sigma}\sim 1/\Gamma_{\sigma}\), for the movement of electrons between the dots and their respective leads, resulting in the inefficient completion of the stages of the energy transfer process. For the energies of the dots, the largest transference of energy through the system was achieved with dots of equal energies situated below the chemical potential of the two leads, such that, on average, the dots are both around half filled [see Fig. 5].

Figure 4: Time-averaged energy current through the system due to the periodic driving of the left lead. The parameters, unless specified, are \(\Gamma_{\alpha}=\Gamma_{\beta}=0.5\), \(\epsilon_{A}=\epsilon_{B}=-0.2\), \(U=0.4\), \(T=0.001\), \(\mu_{\alpha}=\mu_{\beta}=0\) and \(\Delta_{\alpha}=0.2\). The discretization was taken at \(0.01\), the bounds of integration between \(-40\) and \(40\), and \(49\) Fourier coefficients were used. The convergence for both dots was taken as \(10^{-4}\).

Figure 5: Time-averaged observables through the system, with driving of the left lead. The parameters are \(\Gamma_{\alpha}=\Gamma_{\beta}=0.5\), \(U=0.4\), \(T=0.001\), \(\mu_{\alpha}=\mu_{\beta}=0\), \(\Delta_{\alpha}=0.2\) and \(\Omega=0.4\). The discretization was taken at \(0.01\), the bounds of integration between \(-40\) and \(40\), and \(49\) Fourier coefficients were used. The convergence for both dots was taken as \(10^{-4}\). The red dashed line follows \(\epsilon_{A}=\epsilon_{B}\).

## IV Conclusion

We have investigated two capacitively coupled quantum dots coupled to respective leads, where one lead's energies are driven sinusoidally. While particles cannot move between the dots, Coulomb repulsion between the dots allows for the transfer of energy. The stages of the energy transfer were identified, and the effects of the system parameters were investigated. In particular, it was found that energy transfer was maximized at a particular driving frequency, corresponding to the efficient completion of the identified energy transfer process. This work has focused on a regime of relatively weak coupling with \(\Gamma>U\). Further work in a regime of \(\Gamma<U\) may result in different energy transfer stages, as outlined in Sec. III, for various drivings, suggesting interesting possible avenues for further research. Moreover, more complicated driving profiles and statistics relating to energy transfer may prove valuable in understanding and manipulating energy transfer in systems like the one investigated. This result furthers the understanding of particle and energy transfer in capacitively coupled quantum dots, particularly within the context of nonadiabatic driving. This is particularly important as the miniaturization of nanoelectronics brings active elements closer together, resulting in the potential for unwarranted capacitive coupling.
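As a compact illustration of the driving-phase machinery of Eqs. (57)–(59), the sketch below builds the Floquet matrix \(\widetilde{S}_{\sigma}(m,n)=J_{m-n}(\Delta_{\sigma}/\Omega)\) truncated to 49 harmonics (the truncation quoted in the figure captions) and checks that its central block is close to unitary, as expected for a pure phase factor. The function name is ours; the parameters mirror those of Fig. 3.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind, J_nu(x)

def driving_floquet_matrix(delta, omega, n_harm):
    """S(m, n) = J_{m-n}(delta / omega) of Eq. (59), truncated to the
    harmonics m, n = -n_harm, ..., n_harm."""
    idx = np.arange(-n_harm, n_harm + 1)
    return jv(idx[:, None] - idx[None, :], delta / omega)

S = driving_floquet_matrix(delta=0.2, omega=0.32, n_harm=24)  # 49 coefficients
# Because S encodes a pure phase, the untruncated matrix is unitary; with this
# truncation the central block already satisfies S S^T ~ identity.
center = slice(10, 39)
assert np.allclose((S @ S.T)[center, center], np.eye(29), atol=1e-10)
```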
2305.07120
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing
Accurate simulation of the printing process is essential for improving print quality, reducing waste, and optimizing the printing parameters of extrusion-based additive manufacturing. Traditional additive manufacturing simulations are very compute-intensive and are not scalable to simulate even moderately-sized geometries. In this paper, we propose a general framework for creating a digital twin of the dynamic printing process by performing physics simulations with the intermediate print geometries. Our framework takes a general extrusion-based additive manufacturing G-code, generates an analysis-suitable voxelized geometry representation from the print schedule, and performs physics-based (transient thermal and phase change) simulations of the printing process. Our approach leverages parallel adaptive octree meshes for both voxelated geometry representation as well as for fast simulations to address real-time predictions. We demonstrate the effectiveness of our method by simulating the printing of complex geometries at high voxel resolutions with both sparse and dense infills. Our results show that this approach scales to high voxel resolutions and can predict the transient heat distribution as the print progresses. This work lays the computational and algorithmic foundations for building real-time digital twins and performing rapid virtual print sequence exploration to improve print quality and further reduce material waste.
Dhruv Gamdha, Kumar Saurabh, Baskar Ganapathysubramanian, Adarsh Krishnamurthy
2023-05-10T01:03:13Z
http://arxiv.org/abs/2305.07120v1
Geometric Modeling and Physics Simulation Framework for Building a Digital Twin of Extrusion-based Additive Manufacturing ###### Abstract Accurate simulation of the printing process is essential for improving print quality, reducing waste, and optimizing the printing parameters of extrusion-based additive manufacturing. Traditional additive manufacturing simulations are very compute-intensive and are not scalable to simulate even moderately-sized geometries. In this paper, we propose a general framework for creating a digital twin of the dynamic printing process by performing physics simulations with the intermediate print geometries. Our framework takes a general extrusion-based additive manufacturing G-code, generates an analysis-suitable voxelized geometry representation from the print schedule, and performs physics-based (transient thermal and phase change) simulations of the printing process. Our approach leverages parallel adaptive octree meshes for both voxelated geometry representation as well as for fast simulations to address real-time predictions. We demonstrate the effectiveness of our method by simulating the printing of complex geometries at high voxel resolutions with both sparse and dense infills. Our results show that this approach scales to high voxel resolutions and can predict the transient heat distribution as the print progresses. This work lays the computational and algorithmic foundations for building real-time digital twins and performing rapid virtual print sequence exploration to improve print quality and further reduce material waste. **Keywords:** Extrusion-based Additive Manufacturing; Physics-based Simulations; Digital Twin; High-Resolution Modeling; Adaptive Octree ## 1 Introduction Fused deposition modeling (FDM) is an additive manufacturing process that builds 3D objects layer-by-layer by melting and extruding thermoplastic filaments through a heated nozzle. The nozzle moves along a predetermined path, depositing the molten material in precise locations to create the desired shape. Once deposited, the material then solidifies to create the final part. FDM is a popular and widely used technology due to its low cost, versatility, and ease of use. It can produce parts with good mechanical properties and accuracy. Such 3D printing approaches have revolutionized the manufacturing industry by enabling the production of complex geometries with high precision and customization. However, FDM faces a few challenges in achieving the desired print quality, and reducing material waste, especially for complex geometries with sparse infills. One challenge is the need for support structures for overhanging features or complex geometries, which can increase material waste and post-processing time. Another is the possibility of warping and distortion due to thermal stresses during printing, which can affect the final dimensional accuracy of the part. Accurate simulation of the _3D printing process_ can help address these challenges by predicting the thermal and mechanical stress distribution as the build progresses, which can inform print quality and material usage. Fast and accurate simulations can help rapidly explore the print process and design non-trivial approaches that minimize material usage while ensuring that thermal stresses are kept to acceptable levels. The ability of real-time simulations also opens up the possibility of creating digital twins of the 3D print process. 
These digital twins can assimilate measurements from the physical build and accurately predict the intermediate and final shapes. Additionally, these digital twins open up the possibility of feedback control of the printing process, e.g., using model predictive control (MPC). Simulating the 3D printing process is challenging for several reasons. These include computational challenges associated with (a) efficiently representing complex print geometries, (b) the multiscale nature of the process (small time scales of the nozzle-print interaction vs. long simulation time horizon of the full print), (c) the coupled multi-physics phenomena (thermal, phase-change, mechanical), and (d) the time- and location dependant material properties of the print, as well as the complexities of the print schedule (intricate infill patterns, variable resolution, variable material feeds). A final challenge is to simulate the time-dependent printing process in real-time, i.e., faster than the physical print process, to enable control and fast design exploration. This remains a very active research area, with several approaches that resolve some (but not all) of these challenges. Recently, voxel-based methods have emerged as a promising approach for simulating 3D printing, with the possibility of resolving all the challenges listed above. In voxel-based methods, the geometry is represented as a collection of voxels (i.e., volumetric pixels) that capture the printing order (and infill patterns) and material properties. Voxel-based simulations have the advantage of being computationally efficient and flexible in representing complex geometries with arbitrary infill patterns, and the structured representation offers the possibility of fast simulations. In this paper, we propose a novel framework for voxel-by-voxel 3D printing simulation that leverages adaptive octree meshes to efficiently represent the geometry and ensure highly optimized physics simulations. Our framework consists of three distinct stages, which make up our key contributions: * Converting G-code into an intermediate voxel representation: Our first step converts the G-code into a voxel-by-voxel (abbreviated as \(VbV\) in the sequel) print schedule. This intermediate representation is flexible enough to account for sparse and dense infill patterns. * Our second step converts the intermediate voxel representation into an analysis-suitable geometry representation. Here, we use an octree-based representation, with graceful adaptive coarsening and refinement as the print schedule progresses. * A scalable FEM simulator based on adaptive octree meshes that produce real-time predictions of thermal fields during the print process. * at high voxel resolution and under different infill conditions. To the best of our knowledge, our approach is the first to produce high-fidelity, real-time simulations of the 3D printing process for complex geometries by leveraging parallel adaptive octree meshes for FEM-based time dependant 3D simulations. The remainder of this paper is structured as follows: Section 2 reviews the existing literature on 3D printing simulations, voxel-based representations, and physics-based modeling. Section 3 describes the methodology of our approach, including the details of each of the three stages. Section 4 presents the results of our simulations and compares them to existing methods or benchmarks, if available. 
Finally, we discuss the strengths and weaknesses of our approach and the implications of our results for the field of 3D printing simulations, and conclude the paper in Section 5. ## 2 Background and Related Work Fused Deposition Modeling (FDM) is a widely used additive manufacturing technology that has gained immense popularity in the last decade due to its low cost, ease of use, and ability to produce complex geometries [1; 2]. The process involves extruding a thermoplastic material layer by layer until the final object is formed. FDM technology has found its applications in various fields, including the production of metal [3], ceramics [4], polymers [5; 6], and composites [7; 8]. One of the notable advantages of FDM technology is the ability to create parts with varying densities, which can have significant implications for various applications [9]. Another area where FDM has found its use is in the medical industry, where it is used for manufacturing patient-specific implants and drugs [10; 11]. FDM technology has also gained popularity in the aerospace industry [8]. Despite its advantages, FDM printing faces several challenges that can affect the quality and mechanical properties of the printed parts. One of the most common issues is the surface quality, which can be affected by the stair-step effect due to the layer-by-layer deposition process [12; 13]. Another challenge is achieving the desired surface roughness, which can be influenced by factors such as the nozzle diameter, layer height, and print speed [14]. Furthermore, the mechanical properties of FDM printed parts can be compromised due to weak interlayer bonding and the high melt viscosity of some materials, such as thermoplastic elastomers [15]. Achieving high print speeds can also be challenging, resulting in reduced mechanical strength and lower part accuracy [16]. These challenges have motivated research efforts to improve FDM printing technology and optimize the process parameters for better part quality and performance. Experimental analysis of FDM parts is essential to understand their quality, behavior, and performance in different applications. Mechanical testing, dimensional analysis, surface roughness analysis, and microstructural analysis are commonly used experimental methods [17; 18; 19]. However, experimentally analyzing FDM parts poses several challenges. One of the major challenges is the interplay of various process parameters, such as temperature, layer thickness, infill percentage, and print speed. These parameters affect the quality and performance of the printed parts, making it difficult to obtain consistent and accurate results [20]. The inherent anisotropy of FDM parts is another challenge, as their properties vary in different directions, making it challenging to obtain representative test results [21]. Additionally, FDM parts may contain voids, inclusions, or other defects due to the nature of the process, which can impact their performance and hinder the accuracy of test results [22]. The size and shape of the sample can also affect the test results, making it challenging to compare different samples or generalize findings to other parts or applications. Figure 1: Flow chart of the proposed simulation method. Computational tools can play a crucial role in addressing the challenges faced by experimentalists in analyzing FDM parts. For example, simulation software can model the FDM process and optimize the process parameters for specific applications [23]. 
This can help to reduce the number of experimental trials needed and provide insights into the effects of different process parameters on the quality and performance of the printed parts. In addition, computational methods such as finite element analysis (FEA) and computational fluid dynamics (CFD) can be used to predict the mechanical behavior of FDM parts, including their strength, stiffness, and deformation. There have been a few physics-based computational studies of the additive manufacturing process. Bentrzy et al. [24] showed that residual stresses in additive manufacturing parts result from thermal expansion during heating, contraction during cooling, and volume change due to the phase transformations. The presence of these stresses can be catastrophic. Bayat et al. [25] shows the difficulty in performing full-body FEM simulations of the additive manufacturing process, which incorporates relevant physical phenomenons such as fluid flow, temperature, and stress evolution due to very high computational requirements and the inability of the models to scale up. Chen and Yan [26] and Bailey et al. [27] have shown a coupled simulation of CFD and solid mechanical model but could only simulate two tracks due to high computational requirements. The lack of computational tools makes the analysis of the effect of these parameters on the final part very difficult. Our work helps overcome many computational challenges because of the easy conversion of G-code into the voxel representation, efficient mesh creations approach, and scalable FEM framework. Vorisek and Patzak [28] presents a G-code processor for advanced additive manufacturing simulations. The paper focuses on the creation of voxel geometry from G-code, which can be used for Finite Element Method (FEM) simulations. However, their simulation is limited to the voxel geometry creation, and they do not perform any FEM simulation which in our understanding is a lot more difficult and computationally expensive. Additionally, the resolution of their high-resolution geometry is comparable to ours. Baiges et al. [29] presents an adaptive Finite Element strategy for the numerical simulation of additive manufacturing processes. The paper uses an adaptive octree mesh that is fine only in regions of high change and coarse elsewhere. The paper shows FEM based steady-state mechanical analysis performed only on simple geometries like cuboids. The number of time steps they simulate is limited to only 100, and the total time taken is around 500 minutes. In comparison, our work performs FEM simulations over the entire printing process, using a more efficient octree representation. Our FEM framework can handle more complex geometries, as we have demonstrated with examples like the Stanford Bunny and Moai. Being a fast, accurate and scalable FEM framework we demonstrate potential of our framework by simulating the full body of the Stanford Bunny and Moai as well as future potential of performing coupled thermo-mechanical simulations. Our total time steps solved for is much higher than the other two papers, with total mesh nodes ranging up to 350K to 500K. Additionally, our print geometry update resolution is at the voxel level, and we are able to simulate the full body of complicated geometries, not just the final mesh. ## 3 Additive Manufacturing Simulation Framework In this section, we present a voxel-based 3D printing simulation framework that addresses the challenges associated with fast simulations of the FDM process. 
This simulation framework consists of a hierarchical and adaptive representation of the printing geometry and a physics-based simulation of the printing process. Our method leverages the power of hierarchical octree data structures and adaptive mesh refinement to generate a dynamic analysis-suitable geometry representation that captures the intricate infill patterns and changes with each new voxel print. This approach allows us to model the printing process accurately and efficiently while providing high voxel resolution for complex geometries. We utilize FEM-based simulation techniques and illustrate an implementation that predicts thermal evolution during the printing process. The following sections provide more details about each of the steps of our proposed framework. ### Voxel-by-Voxel (VbV) Printing Order The first step in our simulation method involves creating a voxel-by-voxel (VbV) printing order from the G-code. G-code is a standard format used to control 3D printers and typically consists of a sequence of commands that specify the toolpath, extruder speed, and other parameters. To create the VbV printing order, we first parse the G-code file to extract the toolpath information. We then convert the toolpath information into a series of connected line segments or curves corresponding to the extruder's movement during printing. Next, we discretize each line segment or curve by sampling equidistant points corresponding to the axis-aligned voxel grid space of the 3D printer. Each point is then assigned a voxel index based on its location within the voxel grid, and these voxel indices are stored in a list. To ensure no repetitions of the voxels, we check the list of voxel indices and remove any duplicates or overlapping indices. This is done by iterating over the list and checking each index against the indices that precede it. If the current index is found to be the same as or overlapping with a preceding index, it is removed from the list. The VbV printing order is generated by iterating over this list of unique voxel indices and printing the voxels in the order in which they were selected. Since each point on the line segment is sampled at equidistant intervals, the order in which the voxels are printed is automatically determined by the order in which they were selected. We can control the sparsity of the infill by printing the infill rows at regular intervals and skipping over the intermediate voxels. This allows us to control the density of the infill, which can have a significant impact on the mechanical properties of the printed object. Once we have generated the VbV printing order, we can use it as input to our analysis-suitable geometry generation and physics-based simulation modules to simulate the printing process and predict the final geometry of the printed object. ### Analysis Suitable Adaptive Octree Mesh Adaptive octree mesh has been widely used in computational sciences because of its simplicity and ability to scale to a large number of processors [30, 31, 32, 33, 34]. Specifically, we used _2:1 balanced axis aligned linearized_ octree. An octree is said to be _2:1 balanced_ if two neighboring octants do not differ by more than 1 level (i.e., neighboring octants can only differ in size by a factor of \(1/2\)). An _axis-aligned_ octree means that each element of the octree has its axis parallel to the Cartesian coordinate axis; in other words, the elements are not deformed. 
A linearized octree means that each element of the mesh can be represented by the leaf of an octree, which can be linearized into an array. The linear array is obtained by traversing the octree traversal in Morton or Hilbert order. Such a traversal allows for a good locality for the computations[35]. These features make octree suitable for performing the analysis relevant to the current work. In this section, we briefly describe the algorithmic development specific to our particular case. #### 3.2.1 Grids We consider two grids: the voxel grid (\(V_{G}\)) and the octree grid (\(O_{G}\)). \(V_{G}\) defines the uniform Cartesian grid related to the VbV printing. We consider a non-dimensional setting, where \(\Delta x_{i}\) (in the \(V_{G}\) grid) corresponds to 1 non-dimensional unit. A voxel \(v\in V_{G}\) is said to become active when it is printed by the nozzle, i.e., it participates in VbV printing. Once a voxel is active, it participates in all subsequent simulation steps. The FEM simulations are performed on the octree grid, \(O_{G}\). The octree grid follows the VbV steps of the voxel grid but can have elements (also called octants) at different sizes (or levels). This approach provides significant computational flexibility and efficiency, creating locally refined octants on/near the voxels that just turned active and creating coarsened voxels away from the interesting regions. In all our simulations, the element corresponding to a just active voxel is refined, i.e., \(\Delta x_{O_{G}}\leq\Delta x_{V_{G}}\). #### 3.2.2 Octree Construction and Refinement The octree is constructed in a top-down manner, with the root representing a cube. Whenever a given voxel \(v\) is activated (i.e., added into the simulation domain), an octant is refined to the required level. The identification of an element that needs to be refined is done by computing the equivalent integer coordinate in \(V_{G}\). Each element is given a flag of Refine or No Change depending on the required level and the current level. Refine flags correspond to increasing the level of the octant by a single level, which splits that element into eight smaller octants, whereas No Change refers to retaining the level of the octant. Once the flags are determined for each element, we construct the refined octree. This process is repeated recursively until all the flags correspond to No Change, meaning all the elements are at the desired level (Please see Algorithm 1). ``` 1:\(V_{G},v\) (activated voxel), \(O_{G}\), \(O_{L}\)(Level of octree for required voxel) 2:Refined octree 3:id_3d:int\(\leftarrow\) compute_3d_id(\(v,V_{G}\))\(\triangleright\) Compute the voxel id in 3D coordinate for voxel \(v\). 4:\(R_{flags}\leftarrow\)[No Change]\(\triangleright\) Vector for refine flags filled with No Change 5:for\(e\in O_{G}\)do 6:id_min:int\(\leftarrow\) compute_3d_id(\(e_{min},V_{G}\))\(\triangleright\) Compute the voxel id in 3D coordinate for voxel \(e_{min}\). 7:id_max:int\(\leftarrow\) compute_3d_id(\(e_{max},V_{G}\))\(\triangleright\) Compute the voxel id in 3D coordinate for voxel \(e_{max}\). 8:ifid_min\(\leq\)id_3d\(\leq\)id_max then\(\triangleright\) Checking if octree element is enclosed within the voxel grid 9:iflevel\((e)\leq O_{L}\)then\(\triangleright\) Checking the octree RefineFlags 10:\(R_{flags}[e]\leftarrow\)Refine\(\triangleright\) ``` **Algorithm 1**OctreeRefine: Octree Refinement procedure **Remark**.: _We can also apply the coarsening strategy guided by rigorous aposteriori error estimates [36]. 
While our framework has the capability to do so, we defer illustrating this capability to a subsequent publications._ In the current implementation, we allow the octree to be refined by a single level per iteration. One could alternately use multi-level refinement [37], where each octant is refined directly to a required level. We are currently exploring the computational trade-off (solve time vs. refinement cost) of this multi-level refinement strategy. #### 3.2.3 Element Classification Once we construct the octree grid \(O_{G}\), we classify each element in the octree mesh into two categories: Active and InActive. Active element refers to the set of elements that overlaps with the region of printed voxels at that given time \(t\), whereas the other regions (representing the volume that is not printed, i.e., air) are classified as InActive. We define the domain formed from the Active region as \(\Omega_{A}\). Therefore, \(\Omega_{I}\) can be defined as \(\Omega_{O}-\Omega_{A}\), where \(\Omega_{O}\) is the cube bounded by the octree at the root level. The classification of Active and InActive elements changes as a function of time. As a new voxel \(v\) is added into the system, the element corresponding to the new voxel is added into \(\Omega_{A}\) and removed from \(\Omega_{I}\). We note that only the Active regions participate in the solution during the solve time. The InActive regions are eliminated from the domain, usually by enforcing a Dirichlet boundary condition that effectively removes these regions from the solve step. This is similar to what is done in immersed-boundary methods [38]. One could alternatively use matrix contraction/expansion to remove/add degrees of freedom dynamically. However, such an approach will be very expensive due to the memory overhead of dynamically creating and allocating contiguous memory. ### _Physics-Based Simulation_ In this current work, we demonstrate the applicability of the algorithm to study the temperature (\(T\)) distribution during the additive manufacturing process. The governing set of equation is given by: \[\rho C_{p}\frac{\partial T}{\partial t} =\nabla\cdot\kappa\nabla u\ +S(t)\ \text{in}\ \Omega_{A} \tag{1}\] \[T(z=0) =T_{0}\ \text{for}\ \Omega_{A}\times[0,t]\] \[T =T_{v}\ \text{for}\ v\in\Omega_{A}\ \text{at}\ t=t_{A}\] \[\frac{\partial T}{\partial n} =0\ \text{in}\ \Gamma_{A}\] where \(\kappa\) denotes the (constant) thermal conductivity, \(\rho\) is the density and \(C_{p}\) is the specific heat capacity of the of the deposited material. \(S(t)\) is the volumetric heat (i.e. latent heat) released when a voxel is printed. \(\Gamma_{A}\) denotes the moving boundary front that encompasses the volume \(\Omega_{A}\). The bottom plate (\(z=0\)) is kept at a constant temperature, \(T_{0}\), producing a Dirichlet boundary condition. The second boundary condition indicates that the voxel \(v\) is added at a constant temperature of \(T_{v}\) at time point \(t=t_{A}\). Finally, we assume that most of the heat transfer is through the printed object down to the constant temperature bed. This produces a zero Neumann boundary condition on \(\Gamma_{A}\). 
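To make the roles of the boundary conditions in Eq. (1) concrete, the following sketch performs one explicit finite-difference step of the heat equation directly on the uniform voxel grid, exchanging heat only between pairs of Active voxels (which enforces the zero-flux condition on \(\Gamma_{A}\)) and holding the bed plane at \(T_{0}\). This is a simplified stand-in for the adaptive-octree FEM solve described here, not the actual implementation; all names are illustrative. A voxel activated at \(t=t_{A}\) would be handled outside this routine by setting its entry of `active` to True and its temperature to \(T_{v}\), mirroring the second condition of Eq. (1).

```python
import numpy as np

def explicit_heat_step(T, active, alpha, dt, T0):
    """One explicit step of Eq. (1) on a uniform voxel grid (dx = 1, last axis = z).
    Heat is exchanged only between pairs of Active voxels, so the free surface
    Gamma_A is a zero-flux boundary; the bed plane z = 0 is held at T0 (Dirichlet)."""
    lap = np.zeros_like(T)
    for axis in range(3):
        for shift in (+1, -1):
            Tn = np.roll(T, shift, axis=axis)        # neighbour temperature
            an = np.roll(active, shift, axis=axis)   # neighbour activity
            edge = [slice(None)] * 3                 # suppress wrap-around faces
            edge[axis] = 0 if shift == +1 else -1
            an[tuple(edge)] = False
            lap += np.where(active & an, Tn - T, 0.0)
    # explicit stability requires alpha * dt <= 1/6 for dx = 1 in 3D
    T_new = T + dt * alpha * lap
    T_new[..., 0] = T0                               # constant-temperature bed at z = 0
    return np.where(active, T_new, T)
```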
This is representative of a build into a quiescent surrounding (i.e., no fan), and simply serves as an illustrative example for our simulations. This condition can be relaxed by adding a constant heat flux based on a specified convection coefficient.

```
0: \(V[]\) (list of voxels in a given order of printing), \(V_{G}\) (voxel grid), \(O_{L}\) (octree level for a voxel)
0: \(T(\vec{x})\)
1: \(O_{G}\leftarrow\) Construct(\(l=l_{b}\)) \(\triangleright\) Construct octree at base level \(l_{b}\leq O_{L}\)
2: for \(i=1,\dots,len(V)\) do \(\triangleright\) Loop through the voxels
3:   \(v\leftarrow V[i]\)
4:   \(O_{G}\leftarrow\) OctreeRefine(\(O_{G},v,V_{G},O_{L}\)) \(\triangleright\) Refine octree for \(v\) (Algorithm 1)
5:   Reclassify \(\Omega_{A}\) \(\triangleright\) Add new element to the Active element set
6:   \(t\leftarrow 0\)
7:   for \(t<T_{n}\) do \(\triangleright\) Solve Eq. 1 for a fixed number of timesteps
8:     for \(e\in\Omega_{A}\) do \(\triangleright\) Loop through Active elements
9:       perform matrix and vector assembly
10:     Apply Dirichlet boundary condition on the plate and \(\Omega_{I}\)
11:     Solve the system of equations for \(T\)
12:     \(t\gets t+\Delta t\) \(\triangleright\) Increment time
return \(T\) \(\triangleright\) return the final temperature distribution
```
**Algorithm 2** Simulation Algorithm

Figure 2: Thermal simulations for the Stanford bunny and the Moai at \(32^{3}\) and \(64^{3}\) voxel resolutions.

#### 3.3.1 Algorithm details

Algorithm 2 shows the major steps for simulating the temperature distribution during the additive manufacturing process. The algorithm takes a set of voxels as the input in the given printing order. First, an octree is created at a base level, usually much coarser than the required voxel level. Then, in the subsequent step, a voxel \(v\) is popped from the start of the queue. A new octree \(O_{G}\) is constructed based on the position of the voxel \(v\) using Algorithm 1. This is followed by adding the elements corresponding to voxel \(v\) into \(\Omega_{A}\). Once the octree is constructed and the elements are classified appropriately, we solve the heat equation until \(t=T_{n}\), where \(T_{n}\) is defined as the time taken before the next voxel is added to the system. We assume \(T_{n}\) to be a constant, _i.e._, each voxel is added at a fixed time interval. We repeat this process until every voxel has been popped from the list \(V\). The final temperature distribution is then output.

#### 3.3.2 Non-dimensionalization

The non-dimensionalized form of the transient heat equation can be obtained by introducing dimensionless variables and scaling factors into the original equation. Let us assume that the characteristic length of the system is \(L\), the characteristic time is \(t_{c}\), and the characteristic temperature difference is \(\Delta T_{c}\). Then, the dimensionless variables are defined as follows. The dimensionless temperature \(T^{\prime}\): \(T^{\prime}=\frac{T-T_{\infty}}{\Delta T_{c}}\), where \(T_{\infty}\) is the ambient temperature. The dimensionless spatial coordinate \(x^{\prime}\): \(x^{\prime}=\frac{x}{L}\). The dimensionless time \(t^{\prime}\): \(t^{\prime}=\frac{t}{t_{c}}\). Using these dimensionless variables, we can rewrite the transient heat equation as: \[\frac{\partial T^{\prime}}{\partial t^{\prime}}=\nabla^{\prime}\cdot\alpha\nabla^{\prime}T^{\prime} \tag{2}\] where \(\alpha=\frac{\kappa t_{c}}{\rho C_{p}L^{2}}\) is the dimensionless thermal diffusivity.
The non-dimensionalized equation is now independent of the specific values of the characteristic length, time, and temperature difference, and can be used to analyze heat transfer in any system with the same geometry and material properties. ## 4 Simulation Results In this section, we present the results of our finite element method (FEM) simulations for the 3D printing process. The aim of our study is to demonstrate the feasibility of the method, its ability to perform simulations of complex geometries, and the potential to predict print quality and identify potential defects. We describe the simulation parameters used in our study, including infill sparsity, boundary conditions, time step, and material properties. We also discuss the limitations of our simulations and the future direction of our work. ### Complex Geometry Simulations We demonstrate the ability of our method to perform FEM simulations on complex geometries using the Stanford bunny and Moai head models. These models are benchmark models used in the field of computer graphics. The bunny model has a lot of sharp edges, while the Moai head has many concave features. To perform the simulations, we used various simulation parameters. The infill sparsity was set to have sparse infilled layers in the middle of the model and dense infill layers at the top and bottom of the model. In sparse infill layers, we skipped \(3\) rows for each row printed, while the rest of the layers had dense infill, i.e., all the rows were printed. The boundary condition was set so that the print bed was assumed to be at a constant normalized temperature of \(1.0\), and the nozzle was assumed to be at a constant normalized temperature of \(2.0\). No convection with the ambient air was considered. The voxel print time was set to \(3\) time steps, and the thermal diffusivity of the material was set to \(0.0008\) for \(32\) voxel resolution and \(0.0002\) for \(64\) voxel resolution. These simulation parameters were chosen based on their suitability for the models and the 3D printing process. We performed thermal simulations of these models at different voxel resolutions, which are \(32^{3}\) and \(64^{3}\). The results of these simulations are shown in Fig. 2. The temperature distribution at three stages during the 3D printing process is shown. These stages correspond to when the first \(30\%\), \(60\%\), and \(100\%\) of the voxels have been printed. The color of the voxel shows its normalized temperature value; the temperature scale is shown on the right of the figure. In addition to varying voxel resolution, we also investigated the effect of infill sparsity on the temperature distribution during the 3D printing process. Fig. 3 shows the surface and volumetric view of the thermal simulations for the 64 voxel resolution Stanford bunny model with different infill sparsities. We show \(3\) bunny models with different sparsity levels, no, medium, and high sparsity. The sparsity difference between the model can be seen in the volumetric view. The no-sparsity case has all the layers densely infilled. The medium sparsity case has \(50\%\) of the layers in the middle with sparse infill, and the high sparsity case has \(75\%\) of the layers in the middle with sparse infill. It can be seen that the temperature distribution is different for different infill sparsities. The rate of heat diffusion decreases with the increase in sparsity. The dense model is cooler than the medium sparsity, and the medium sparsity is cooler than the high sparsity model. 
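The sparse- and dense-infill schedules used above are generated at the voxel-order level (Sec. 3.1). A minimal sketch of one such layer order is given below; the serpentine raster and the function name are illustrative assumptions rather than the exact G-code-derived schedule, and the skip of three rows per printed row matches the sparse layers described in this section.

```python
def layer_vbv_order(nx, ny, z, rows_to_skip=0):
    """Voxel-by-voxel order for one layer on an nx-by-ny voxel grid at height z.
    Rows are traversed in a serpentine raster; with rows_to_skip > 0 only every
    (rows_to_skip + 1)-th row is printed, giving a sparse infill layer."""
    order = []
    printed_row = 0
    for y in range(0, ny, rows_to_skip + 1):
        xs = range(nx) if printed_row % 2 == 0 else range(nx - 1, -1, -1)
        order.extend((x, y, z) for x in xs)
        printed_row += 1
    return order

# Dense layer: every row printed; sparse layer: skip 3 rows per printed row.
dense = layer_vbv_order(64, 64, z=10)
sparse = layer_vbv_order(64, 64, z=30, rows_to_skip=3)
assert len(dense) == 64 * 64 and len(sparse) == 64 * 16
```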
### Computational Time

Table 1 summarizes the computational time required for the finite element method simulations of the 3D printing process using the HPC cluster. The cluster nodes are equipped with Intel Skylake 6140 CPUs, with 36 cores per node running at a maximum clock speed of 2.30 GHz. The table includes the geometries of the models used in the simulation, the resolution, the number of voxels printed, and the simulation time in seconds. As the simulation advances, more voxels are added to the Active region, resulting in an increased problem size. Our setup allows the number of processors to be increased accordingly. This allows for better use of HPC resources, as the processor count can be adjusted according to the problem size. The increase in the processor count is accompanied by re-distributing the elements across the larger number of processors. We try to maintain a minimum grain size of around 2000 elements per processor; we have observed that this is the minimum amount of local compute needed before the scaling starts to flatten. The Stanford bunny and Moai models were simulated at two resolutions, \(32^{3}\) and \(64^{3}\). The results show that the computational time increases significantly with the increase in resolution, as expected. The number of processors used in the simulation is also an important factor affecting the simulation time.

Figure 3: Surface and volumetric views of the Stanford bunny model depicting its infill sparsity. The infill density varies across different sections of the model with the middle section having a sparse infill pattern and the top and bottom sections having denser infill patterns. A skip interval of four infill rows for each printed row was implemented in the middle section to achieve this variation in infill density. The effect of infill sparsity on the overall temperature distribution in the model can be observed.

### Scaling

This work utilizes a validated and scalable framework that is integrated with PETSc for the linear algebra solver. We have demonstrated the scalability of the proposed framework up to \(\mathcal{O}(100K)\) processors in our previous work [37]. In this work, we build upon this previous success and deploy the framework to understand the temperature distribution during the additive manufacturing step. Specifically, we used the stabilized bi-conjugate gradient (-ksp_type bcgs) solver with the additive Schwarz preconditioner (-pc_type asm). The absolute and relative tolerances of the solver are set to \(10^{-15}\) for accurate results.

### Supplementary Video

The full thermal simulation video for the \(64^{3}\) voxel resolution Stanford bunny and Moai during the 3D printing process is attached as supplementary material. The video shows the \(VbV\) 3D printing process along with the temperature distribution; the variation in the infill sparsity with model height can also be seen. We can also notice a change in the heat diffusion based on the infill density; the heat diffuses more in the dense infill regions than in the sparse infill regions. These visualizations show that the physics simulations can capture the complex interaction between the 3D printing parameters and the final printed object.
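The resource-allocation rule described under Computational Time (a minimum grain size of roughly 2000 active elements per processor) can be written as a small helper. The sketch below is illustrative only: the function name and the cap of four nodes are assumptions, while the 36 cores per node follows the Skylake 6140 nodes mentioned above.

```
import math

def choose_processor_count(n_active_elements, grain_size=2000,
                           cores_per_node=36, max_nodes=4):
    """Grow the processor count with the active-element count so that each
    rank keeps at least roughly `grain_size` elements of local work."""
    max_procs = cores_per_node * max_nodes
    return max(1, min(max_procs, n_active_elements // grain_size))
```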
## 5 Discussion and Conclusions

The FEM simulations presented in this study provide a valuable tool for predicting temperature distributions and identifying potential defects during the 3D printing process. The method was scalable to larger models and higher voxel resolutions, making it a useful tool for simulating a wide range of printing scenarios. Our Finite Element Method (FEM) framework is highly parallelizable, making it a promising tool for simulating 3D printing.

\begin{table} \begin{tabular}{|l|r|r|r|} \hline **Geometry** & **Resolution** & **Print** & **Time (s)** \\ & & **Voxels** & \\ \hline Bunny & \(32^{3}\) & 5673 & 929 \\ \hline Bunny & \(64^{3}\) & 40082 & 5042 \\ \hline Moai & \(32^{3}\) & 3943 & 524 \\ \hline Moai & \(64^{3}\) & 23631 & 6111 \\ \hline \end{tabular} \end{table} Table 1: Computational time for the simulation.

Figure 4: Top view of the bunny model printed up to an intermediate height showing the voxel pattern.

Despite its strengths, the current simulations assume a zero flux boundary condition, neglecting convective heat transfer, which may not be accurate in all cases. This assumption can easily be relaxed by incorporating a convective heat transfer coefficient into the boundary conditions of the simulation. Looking forward, there is significant potential to extend this simulation framework to incorporate convective heat transfer, phase change, and other physical phenomena such as linear and nonlinear elasticity, fluid flow, and material deposition. This would allow for a more comprehensive understanding of the printing process and could lead to improvements in print quality and the optimization of support structures. Fig. 4 shows the actual voxel print of the bunny geometry with sparse infill inside. As a part of our future work, we would like to experimentally validate our thermal simulations using thermal imaging cameras.

In conclusion, the FEM simulation framework presented in this study provides a powerful tool for predicting temperature distributions and identifying potential defects during the 3D printing process. Future extensions of the framework could significantly enhance its capabilities and make it an essential tool in the 3D printing industry.
2304.07209
Ricci Inverse Anisotropic Stellar Structures
This paper offers novel quintessence compact relativistic spherically symmetric anisotropic solutions under the recently developed Ricci inverse gravity (Amendola et al., 2020), by employing Krori and Barua gravitational potentials, $Ar^2=\nu(r), ~\&~Br^2+C=\mu(r)$ (with A, B, and C being real constants). For this objective, a specific explicit equation of state, connecting energy density and radial pressure, i.e., $p_r=\omega\rho$, such that $0<\omega<1$, has been utilized with an anisotropic fluid source. Ricci inverse field equations are used to find the exclusive expressions of the energy density, radial and tangential stresses, and the quintessence energy density, the critical physical attributes reflecting the exceptional behavior of extremely dense matter configurations. For the observed source stars $Her X-1$, $SAX J 1808.4-3658$ and $4U 1820-30$, all the important physical quantities like energy densities, tangential and radial pressures, energy conditions, gradients, anisotropy, redshift and mass-radius functions, and stellar compactness have been worked out and analyzed graphically. It has been concluded that all of the stellar formations under consideration remain free from any undesirable central singularity and are stable.
M. Farasat Shamir, Mushtaq Ahmad, G. Mustafa, Aisha Rashid
2023-04-14T15:48:32Z
http://arxiv.org/abs/2304.07209v1
# Ricci Inverse Anisotropic Stellar Structures

###### Abstract

This paper offers novel quintessence compact relativistic spherically symmetric anisotropic solutions under the recently developed Ricci inverse gravity [1], by employing Krori and Barua gravitational potentials, \(Ar^{2}=\nu(r),\ \&\ Br^{2}+C=\mu(r)\) (with A, B, and C being real constants). For this objective, a specific explicit equation of state, connecting energy density and radial pressure, i.e., \(p_{r}=\omega\rho\), such that \(0<\omega<1\), has been utilized with an anisotropic fluid source. Ricci inverse field equations are used to find the exclusive expressions of the energy density, radial and tangential stresses, and the quintessence energy density, the critical physical attributes reflecting the exceptional behavior of extremely dense matter configurations. For the observed source stars \(HerX-1\), \(SAXJ1808.4-3658\) & \(4U1820-30\), all the important physical quantities like energy densities, tangential and radial pressures, energy conditions, gradients, anisotropy, redshift and mass-radius functions, and stellar compactness have been worked out and analyzed graphically. It has been concluded that all of the stellar formations under consideration remain free from any undesirable central singularity and are stable.

**Keywords**: Anisotropic compact stars; Ricci Inverse Gravity; Quintessence.

## I Introduction

The self-gravitation mechanism and its effects have a significant impact on modern astrophysics studies and their background. Gravitational collapse results in the formation of new stellar objects known as compact stars. These compact objects are thought to be exceedingly dense because they are massive but have comparatively small radii. Compact stars have attracted the interest of researchers in relativistic astronomy due to their relativistic structures, which are well explained by general relativity and by different modified theories [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] as well. Modified and extended theories have been used by countless researchers to overcome the limitations and restrictions of general relativity, which arise because of dark matter and dark energy [15; 16; 17; 18; 19; 20] that affect the accelerating universe. None of these approaches is without flaws. Some researchers have used a simple cosmological constant [21; 22; 23; 24; 25; 26; 27], which has only recently gained dominance in the present era. A quintessence approach depending upon a scalar field has also been used in [28; 29; 30]. The emergence of new dimensional constants is an unappealing feature of most alternative formulations. Another issue is the addition of emerging fields, either scalar or vector, with no evident relationship to the curvature tensor, which represents a significant departure from Einstein's geometrical approach. These difficulties are not necessarily deal breakers, but they are also not grounds in favor of modifying Einstein's theory of gravity. Moreover, the Starobinsky models [31; 32; 33; 34; 35; 36; 37], which include the \(R^{2}\) correction term, are among the most promising models. These models are also the fundamental variants of the \(f(R)\) theory [38; 39]. As a result, having a reliable alternative model may lead to a more revealing picture of our universe's early and late time eras [40]. Furthermore, surprising findings may occur in alternative gravitational theories due to the availability of extra fourth-order correction terms like \(R^{2}\) or \(R_{\alpha\beta}R^{\alpha\beta}\).
Deser and Woodard [41] investigated two models that allow for a slow reaction to cosmic occurrences while avoiding the normal range. They applied the analysis to FRW space-time by taking the inverse of the d'Alembert operator along with a cosmological constant. Soussa and Woodard [42] analyzed the altered Ricci scalar, which allowed them to calculate the gravitational effect imparted by the metric perturbation trace. They restricted the matter dispersion to have the property that its gravitational collapse rate is similar to the rate of space-time extension. Moreover, Amendola et al. [43] expanded the idea of Deser and Woodard by employing the non-local function of altered gravity and changing the Einstein-Hilbert equation with a term \(Rf(\Box^{-1}R)\), where \(f\) is an arbitrary function. Furthermore, a non-local altered gravity [44; 45] was obtained by adding the expression \(m^{2}R\Box^{-1}R\), which causes a self-accelerating expansion by establishing the equation of state (EoS) of dark energy, \(w_{DE}\simeq-1.14\)[46]. Amendola [1] has developed a new theory of gravity known as Ricci inverse (RI) gravity, which depends on the anti-curvature scalar \(A\). In the RI model, the term \(A\) is the trace of the anti-curvature tensor \(A^{\alpha\beta}\), which holds the property \(A^{\alpha\beta}=R_{\alpha\beta}^{-1}\). Motivated by this recently developed modified theory of gravity, we aim to investigate anisotropic compact structures in RI gravity. For this purpose, we investigate the quintessence structures for compact stars in the RI theory of gravity using \(HerX-1\), \(SAXJ1808.4-3658\), and \(4U1820-30\) as star sources. Our work is organized as follows: Section II presents the fundamental concepts, structure, and equations of RI gravity. In Section III, the matching constraints for RI gravity are developed using Schwarzschild's geometry. Section IV is devoted to calculating and analysing physical properties such as energy density, pressure profiles and quintessence density. The final section contains some concluding remarks.

## II Inverse Ricci gravity

The anticurvature tensor, constructed as the inverse of the Ricci tensor, enables the development of an alternative gravity theory. A theory like this needs to be examined in order to determine the prevalence of ghost-like instabilities and to infer the consequences for perturbations, such as gravitational wave propagation, perturbation growth, or the Newtonian bound. Hence, such a theory may prove a viable cosmological model once the general equations of motion are derived for a generic Lagrangian built from scalars of the curvature and anticurvature. It has been demonstrated that, for every Lagrangian including a term constituted with some positive or negative power of the anticurvature scalar, a generalized no-go theorem prohibits cosmological trajectories from joining a decelerated stage to an accelerated one, ruling out a large group of models. In some especially simple examples of models with no additional dimensional scales, the no-go theorem is demonstrated analytically. The various escape routes from the theorem have also been examined, highlighting several (quite complicated) Lagrangians that evade it. A thorough examination of such concepts, in quest of fresh phenomenology, would be quite fascinating if further investigation is made.
A tensor \(A^{ab}\), the inverse of \(R_{ab}\), is defined as

\[A^{ab}R_{bc}=\delta_{c}^{a}, \tag{1}\]

where \(A^{ab}\) is the anticurvature tensor. The action for Ricci-inverse gravity is given by [1]

\[S_{Action}=\int\sqrt{-g}d^{4}x(\alpha A+R), \tag{2}\]

where \(A\) is the trace of \(A^{ab}\), which is defined as

\[A^{ab}=R_{ab}^{-1}. \tag{3}\]

The modified field equations are given as [1]

\[R^{ab}-\frac{1}{2}Rg^{ab}-\alpha A^{ab}-\frac{1}{2}\alpha Ag^{ab}+\frac{\alpha}{2}\left(2g^{\varrho a}\nabla_{\alpha}\nabla_{\varrho}A^{\alpha}_{\sigma}A^{b\sigma}-\nabla^{2}A^{a}_{\sigma}A^{b\sigma}-g^{ab}\nabla_{\alpha}\nabla_{\varrho}A^{\alpha}_{\sigma}A^{\varrho\sigma}\right)=\mathcal{T}^{ab} \tag{4}\]

where \(A^{\alpha}_{\sigma}A^{b\sigma}=A^{\alpha\tau}g_{\tau\sigma}A^{\sigma b}=A^{\alpha\tau}A^{b}_{\tau}=A^{\alpha\sigma}A^{b}_{\sigma}=A^{b}_{\sigma}A^{\alpha\sigma}\) with \(8\pi G=1\). The stress tensor for the anisotropic matter content reads

\[\mathcal{T}_{ab}=(\varrho+p_{t})u_{a}u_{b}-p_{t}g_{ab}+(p_{r}-p_{t})v_{a}v_{b}. \tag{5}\]

Here, \(u_{a}=e^{\frac{\mu}{2}}\delta_{a}^{0}\) and \(v_{a}=e^{\frac{\nu}{2}}\delta_{a}^{1}\). The total stress tensor is \(\mathcal{T}_{a}^{b}+\mathcal{D}_{a}^{b}\), where \(\mathcal{D}_{a}^{b}\) gives the stress tensor for the quintessence field, with energy density \(\rho_{q}\), and the parameter \(w_{q}\) represents the quintessence field such that \(-1<w_{q}<-\frac{1}{3}\). Also, \(-\mathcal{D}_{t}^{t}=\mathcal{D}_{r}^{r}=-\rho_{q}\), and \(\frac{(3w_{q}+1)\rho_{q}}{2}=\mathcal{D}_{\theta}^{\theta}=\mathcal{D}_{\phi}^{\phi}\). The spherically symmetric metric is given by

\[ds^{2}=e^{\mu(r)}dt^{2}-e^{\nu(r)}dr^{2}-r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}. \tag{6}\]

Now we can choose the Krori and Barua gravitational potentials, which are defined as

\[Ar^{2}=\nu(r)\qquad\qquad Br^{2}+C=\mu(r), \tag{7}\]

with \(A\), \(B\), and \(C\) being arbitrary constants. By using Eqs. (5-7) in Eq. (4) with the quintessence field, we get the following system of modified field equations

\[\rho+\rho_{q} = e^{-Ar^{2}}\left(\frac{\alpha\left(\chi_{2}+\chi_{3}-\chi_{4}\right)e^{2Ar^{2}}}{B^{2}\left(r^{2}(B-A)+3\right)^{4}}+\frac{1}{2}\alpha\chi_{1}e^{2Ar^{2}}-\frac{2Ar^{2}+e^{Ar^{2}}-1}{r^{2}}\right), \tag{8}\]

\[p_{r}-\rho_{q} = \frac{1}{2}\alpha\left(\frac{2\chi_{6}}{r\left(B(Br+1)-A\left(Br^{2}+2\right)\right)^{4}}+\chi_{5}\right), \tag{9}\]

\[p_{t}+\frac{1}{2}(3w_{q}+1)\rho_{q} = -r^{2}\left(-\alpha e^{Ar^{2}}\left(\frac{\chi_{9}}{\left(r^{2}(A-B)+e^{Ar^{2}}-1\right)^{4}}+\chi_{8}\right)+\alpha\chi_{7}e^{Ar^{2}}+\chi_{10}e^{-Ar^{2}}\right), \tag{10}\]

where \(\chi_{i}\), \(\{i=1,...,10\}\) are given in the Appendix (**I**). To find the explicit expressions for the above field equations, we assume a relation between radial pressure and energy density via the \(EoS\) parameter, which is defined as

\[p_{r}=\omega\rho,\hskip 28.452756pt0<\omega<1. \tag{11}\]

The above relation, Eq. (11), is a particular form of the \(EoS\); the general form of this equation was presented by Herrera and Barreto [47]. This linear equation of state, though the simplest choice, has been extensively considered to discuss the physical properties of astrophysical compact objects [48; 49; 50; 51; 52; 53; 54; 55]. By using Eq. (11) with Eqs.
(8-10), we get the final expressions for energy density \(\rho\), quintessence energy density \(\rho_{q}\), and pressure components \(p_{r},\ p_{t}\) as \[\rho = \frac{-\frac{1}{2}\alpha\left(2\Psi_{1}\Psi_{2}\right)-e^{-Ar^{2 }}\left(\frac{\alpha\Psi_{4}e^{2Ar^{2}}}{B^{2}\left(r^{2}(B-A)+3\right)^{4}}+ \frac{1}{2}\alpha\Psi_{3}e^{2Ar^{2}}-\frac{2Ar^{2}+e^{Ar^{2}}-1}{r^{2}}\right) }{\omega+1}, \tag{12}\] \[\rho_{q} = \frac{e^{-Ar^{2}}\left(-\frac{\alpha\Psi_{4}e^{2Ar^{2}}}{B^{2} \left(r^{2}(B-A)+3\right)^{4}}-\frac{1}{2}\alpha\Psi_{3}e^{2Ar^{2}}+\frac{2Ar ^{2}+e^{Ar^{2}}-1}{r^{2}}\right)-\frac{1}{2}\alpha\left(2\Psi_{1}\Psi_{2} \right)}{\omega+1}\] (13) \[+ e^{-Ar^{2}}\left(\frac{\alpha\Psi_{4}e^{2Ar^{2}}}{B^{2}\left(r^ {2}(B-A)+3\right)^{4}}+\frac{1}{2}\alpha\Psi_{3}e^{2Ar^{2}}-\frac{2Ar^{2}+e^{ Ar^{2}}-1}{r^{2}}\right),\] \[p_{r} = \frac{\omega\left(-\frac{1}{2}\alpha\left(2\Psi_{1}\Psi_{2} \right)-e^{-Ar^{2}}\left(\frac{\alpha\Psi_{4}e^{2Ar^{2}}}{B^{2}\left(r^{2}(B-A )+3\right)^{4}}+\frac{1}{2}\alpha\Psi_{3}e^{2Ar^{2}}-\frac{2Ar^{2}+e^{Ar^{2}} -1}{r^{2}}\right)\right)}{\omega+1},\] (14) \[p_{t} = -r^{2}\left(e^{-Ar^{2}}\left(\frac{\alpha e^{2Ar^{2}}}{r^{2}(A-B )+e^{Ar^{2}}-1}+\frac{A-2B}{r^{2}}+B(A-B)\right)-\alpha e^{Ar^{2}}\right.\] (15) \[\times \left.\left(\Psi_{6}\left(\Psi_{8}e^{Ar^{2}}+\Psi_{7}\right) \right)+\alpha\Psi_{5}e^{Ar^{2}}\right)-\frac{1}{2}\rho_{q}(3w_{q}+1),.\] where where \(\Psi_{i}\), \(\{i=1,...,10\}\) are given in the Appendix (**I**). ## III Matching conditions The interior border metric does not vary irrespective of the star's geometrical structure, whether from the outside or the inside. Regardless of the reference frame, this emergent circumstance mandates that the metric components must be continuous to the boundary. While evaluating Schwarzschild's solution connected with stellar remnants in general relativity, it is well-thought-out to be the main priority out of all the accessible different alternatives of matching circumstances. Furthermore, while engaging with modified gravity theories, it is a good idea to account for quasi pressure and energy density. Many scholars have done outstanding work on boundary conditions [56; 57]. By integrating some unique constraints to stellar compact structures as well as the thermodynamically related characteristics, Goswami et al. [58] figured out the matching bounds when exploring the modified gravity. However, the well-known Schwarzschild's geometry can be used to solve the various stellar solutions. A few constraints are imposed at the boundary \(r=R\), that is, \(p_{r}(r=R)=0\), to get the expressions for the field equations. To get the quantities of the associated unknowns, we must first figure out how to align the internal geometry with the external spacetime. We may employ the Schwarzschild spacetime for this, which is described as \[ds^{2}=\left(1-\frac{2M}{r}\right)dt^{2}-\left(1-\frac{2M}{r}\right)^{-1}dr^{2 }-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \tag{16}\] The total mass trapped within the star is denoted by \(M\), while the radius of the compact celestial leftover is denoted by \(R\). It is important to mention here that whatever the geometry of the star is, either derived internally or externally, the boundary metric must remains the same with the condition that the components of the metric tensor irrespective of the coordinate system across the surface of the boundary will remain continuous. 
No doubt, in the theory of general relativity, the Schwarzschild solution has been really important in motivating us to choose from the diverse possibilities of the matching conditions. However, while investigating compact objects in some alternative gravity theories via modified Tolman-Oppenheimer-Volkoff equations with zero pressure and energy density, the solution outside the star can differ from Schwarzschild's solution. Thus, it is expected that the solutions of the modified Tolman-Oppenheimer-Volkoff equations with non-zero energy density and pressure may accommodate Schwarzschild's geometry with some specific choice of modified gravity model [59; 60; 61; 62; 63]. Perhaps this might be the reason or justification that Birkhoff's theorem may not hold in alternative theories of gravity. A detailed investigation of this topic in the context of RI gravity can be an interesting future task. By examining the metric potentials, the accompanying expressions are obtained at \(r=R\)

\[g_{tt}^{+}=g_{tt}^{-},\qquad g_{rr}^{+}=g_{rr}^{-},\quad\ \frac{\partial g_{tt}^{+}}{\partial r}=\frac{\partial g_{tt}^{-}}{\partial r}. \tag{17}\]

The inner and outer signatures with respect to the boundary \(r=R\) are denoted as (-) and (+), respectively. By comparing the inner and outer metrics, the values of the desired unknown quantities can be determined as follows:

\[A=-\frac{log(1-\frac{2M}{R_{e}})}{R_{e}^{2}},\ \ \ \ B=-\frac{Mlog(1-\frac{2M}{R_{e}})^{-1}}{R_{e}^{2}},\ \ \ C=log(1-\frac{2M}{R_{e}})-\frac{M}{R_{e}(1-\frac{2M}{R_{e}})}. \tag{18}\]

The quantities \(M\), \(R\), \(M/R\), and the calculated values of the involved parameters like \(A\) and \(B\) are provided in Tab. 1. The values of the parameters \(A\) and \(B\) come from Eq. (18). As is clear from Eq. (18), there is no effect of the quintessence field \(w_{q}\); in fact, Eq. (18) only depends upon the observational values of stars like \(HerX-1\), \(SAXJ1808.4-3658\) & \(4U1820-30\). Further, it is necessary to mention that the quintessence field \(w_{q}\) has an important role in the anisotropic matter, especially in the tangential pressure component.

## IV Physical analysis

We devote this section to investigating a few key characteristics associated with the considered compact stars. Among them are the energy density \(\rho\), the pressure profiles \(p_{r}\) and \(p_{t}\), and discussions of the quintessence field, as well as their physical interpretation under RI gravity. The energy bounds, anisotropic stress, compactness factor, and the star's sound speed with respect to the radial and tangential components are also discussed.

### Density and Pressure Profiles

The related evolutions of the energy density, as well as the radial and tangential pressures, make up the most significant stellar environment required for the existence of compact structures. The energy density profiles, the evolutions of the quintessence density, and the pressure terms have been studied for \(HerX-1\) (Left), \(SAXJ1808.4-3658\) (Middle), and \(4U1820-30\) (Right) for various values of \(\omega=0.01,\,0.03,\,0.05,\,0.07,\,0.09\). The plots of the related graphs as shown in Figs. (1-4) clearly demonstrate the related physical analysis. The energy density reaches its maximum value near the star's core, indicating the star's super-dense nature. Furthermore, the tangential and radial pressure profiles exhibit positive behavior and attain their maximal values at the core of the star.
For our model under RI gravity, the evolutions of the stars under study also imply the existence of an anisotropic structure free from any possible singularities.

### Energy Bounds

The importance of energy bounds among existing physical characteristics in explaining the presence of anisotropic compact structures is widely recognized throughout the literature. Furthermore, they serve an important role in determining the distribution of the matter content of normal as well as exotic sources inside stellar structures. Because of these energy limitations, several important conclusions can be reached. Physical attributes for the energy conditions seem to be fairly advantageous in realistically examining the matter distribution. The energy constraints have remained fundamentally vital in addressing cosmological and astrophysical challenges. Below are the expressions for the \(NEC\) (null energy bounds), \(SEC\) (strong energy bounds), \(DEC\) (dominant energy bounds), and \(WEC\) (weak energy bounds).

\[\begin{split} NEC:\rho+p_{r}\geq 0,\ 0\leq\rho+p_{t};\ \ WEC:\rho\geq 0,\ \rho+p_{r}\geq 0,\ \rho+p_{t}\geq 0,\\ SEC:0\leq\rho+p_{r},\ \rho+p_{t}\geq 0,\ 0\leq 2p_{t}+p_{r}+\rho;\ \ DEC:|p_{r}|<\rho,\ |p_{t}|<\rho.\end{split} \tag{19}\]

In Figs. (5-9), the graphical development of the energy restrictions is illustrated. The positive profiles of the energy bounds for the stars \(HerX-1\), \(SAXJ1808.4-3658\), and \(4U1820-30\), for a variety of choices of the \(EoS\) parameter, show that our solutions are realistically acceptable under RI gravity.

### Gradients and Anisotropy Analysis

The total derivatives of the physical quantities \(\rho\), \(p_{r}\), and \(p_{t}\) for a compact star are described by the expressions \(\frac{d\rho}{dr}\), \(\frac{dp_{r}}{dr}\), and \(\frac{dp_{t}}{dr}\), respectively. The graphical representations (10-12) of all these radial derivatives for the compact stars under discussion suggest a negative evolution, i.e., \(\frac{d\rho}{dr}<0\), \(\frac{dp_{r}}{dr}<0\), and \(\frac{dp_{t}}{dr}<0\). It is worth noticing here that the gradients \(\frac{d\rho}{dr}\) and \(\frac{dp_{r}}{dr}\to 0\) at the core of the star \(r=0\). This verifies that the radial pressure \(p_{r}\), as well as the density \(\rho\), attains its maximum bound at the core of the star. As a result, \(\rho\) and \(p_{r}\) achieve their maximum values at \(r=0\). The role of anisotropy is critical in compact sphere modelling for describing the inner geometry of the relativistic stellar configuration under given conditions; it is represented by \(\Delta\) and is defined as \(\Delta=p_{t}-p_{r}\). It shows the deviation between the tangential and radial stress quantities for the star configuration's anisotropic nature. If \(p_{t}>p_{r}\), the anisotropy is deemed non-negative, is drawn outwards, and is seen as \(\Delta>0\). If \(p_{r}>p_{t}\), then the anisotropy remains negative, resulting in \(\Delta<0\), indicating that anisotropy is pulled inwards. It is evident from Fig. (13) that the anisotropy stays positive for our current investigation, i.e. \(\Delta>0\), and is therefore directed outwards.
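As an illustration of how the bounds in Eq. (19) and the anisotropy \(\Delta=p_{t}-p_{r}\) can be verified numerically on sampled radial profiles, a minimal sketch is given below; the array inputs and the function name are assumptions made for the example and are not part of the original analysis.

```
import numpy as np

def check_profiles(rho, p_r, p_t):
    """Pointwise checks of Eq. (19) and of Delta = p_t - p_r > 0 on a radial grid."""
    nec = (rho + p_r >= 0) & (rho + p_t >= 0)
    wec = nec & (rho >= 0)
    sec = nec & (rho + p_r + 2.0 * p_t >= 0)
    dec = (np.abs(p_r) < rho) & (np.abs(p_t) < rho)
    delta_positive = (p_t - p_r) > 0
    return {"NEC": nec.all(), "WEC": wec.all(), "SEC": sec.all(),
            "DEC": dec.all(), "Delta>0": delta_positive.all()}
```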
### Stability analysis and Abreu Condition

The sound speeds associated with the radial and transversal components, denoted \(v_{r}^{2}\) and \(v_{t}^{2}\), establish the stability of the compact star solutions obtained in our investigations. For this purpose, the conditions \(0\leq v_{t}^{2}=\frac{dp_{t}}{d\rho}\leq 1\) and \(0\leq v_{r}^{2}=\frac{dp_{r}}{d\rho}\leq 1\) must be achieved [64]. The related plots of the sound speeds in Fig. (14) for different values of the \(EoS\) parameter demonstrate that the radial and transversal sound speeds for the stellar candidates \(HerX-1\), \(SAXJ1808.4-3658\), and \(4U1820-30\) remain within the necessary stability restrictions. The limitations on both the radial and transversal sound velocities are validated for all compact star candidates. The stable and unstable regions inside the anisotropic matter configuration can be identified from the difference of the sound speeds, \(v_{t}^{2}-v_{r}^{2}\), which must satisfy the constraint \(0<|v_{t}^{2}-v_{r}^{2}|<1\); this condition is referred to as the Abreu condition and is depicted in Fig. (15). It shows that, for the compact stars discussed under RI gravity, total stability may be achieved.

### Equation of State Parameter

The development of compact star emergence can be predicted using the \(EoS\) parameter. Furthermore, the \(EoS\) is a crucial component of any astrophysical model. The pressure terms \(p_{r}\) and \(p_{t}\) are used to calculate the \(EoS\) for the anisotropic matter distribution. The components of the \(EoS\) for studying the stellar configuration are \(\omega_{r}\) and \(\omega_{t}\), which are mathematically written as

\[\omega_{r}=\frac{p_{r}}{\rho_{e}},\qquad\omega_{t}=\frac{p_{t}}{\rho_{e}}. \tag{20}\]

The components of the \(EoS\) exhibit monotonically decreasing behavior with increasing radii and remain always below 1, which can be witnessed in Fig. 16. Furthermore, the positive character of both \(EoS\) components, \(\omega_{r}\) and \(\omega_{t}\), is seen, which fulfills the requirement \(0\leq\omega_{r},\ \omega_{t}<1\) for viable stellar solutions.

### Mass-Radius relation, Compactness, and Redshift Analysis

The radius-dependent mass function of a compact star, \(m(r)\), is determined by employing the following integral

\[m(r)=4\int_{0}^{r}\pi\dot{r}^{2}\rho d\dot{r}. \tag{21}\]

The mass-radius graph in Fig. (17) shows that the mass is directly related to the radius \(r\), so that as \(r\to 0\), \(m(r)\to 0\), indicating the correct behavior of the mass at the star's core. Also, as calculated by Böhmer and Harko [65], the mass-radius ratio must remain \(\frac{2M}{r}\leq\frac{8}{9}\), which is also validated in our study. The stellar structure's compactness \(\mathcal{K}(r)\) is given by the integral

\[\mathcal{K}(r)=\frac{4\pi}{r}\int_{0}^{r}\rho\dot{r}^{2}d\dot{r}, \tag{22}\]

and plotted in Fig. (18). The redshift function \(\Re_{S}\) is given as

\[\Re_{S}+1=[1-2\mathcal{K}(r)]^{\frac{-1}{2}}. \tag{23}\]

Fig. (19) shows its graphical depiction, and it is concluded that the numerical approximation of \(\Re_{S}\) remains within the required threshold \(\Re_{S}\leq 2\), indicating that our worked out solutions in RI gravity are indeed viable.
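To illustrate how Eqs. (21)-(23) translate into a computation, the sketch below integrates a sampled density profile numerically; the trapezoidal quadrature, the assumption of geometric units, and the function name are choices made for this example only.

```
import numpy as np

def mass_compactness_redshift(r, rho):
    """m(r) = 4*pi*int rho r'^2 dr' (Eq. 21), K(r) = m(r)/r, which equals the
    integral in Eq. (22), and Z_s = (1 - 2K)^(-1/2) - 1 (Eq. 23), evaluated on an
    increasing radial grid r (with r[0] > 0) and matching density samples rho."""
    integrand = 4.0 * np.pi * r**2 * rho
    m = np.concatenate(([0.0],
                        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    K = m / r                                  # compactness (assumes r[0] > 0)
    Zs = (1.0 - 2.0 * K) ** (-0.5) - 1.0       # surface redshift profile
    return m, K, Zs
```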
## V Results and Discussions

This study is focused on exploring anisotropic compact structures using a new alternative theory termed RI gravity [1]. The related field equations of RI gravity for compact stars are solved using certain assumptions. We establish our calculations based on the statistics related to \(HerX-1\), \(SAXJ1808.4-3658\), and \(4U1820-30\) as strange star candidates, by choosing appropriate \(EoS\) parametric values. Our current investigation is focused on the feasibility of the existence of quintessence anisotropic compact stars. The gravitational breakdown for the emerging universe at diverse epochs has now been examined by integrating spacetime symmetries with the exclusively matched Schwarzschild geometry. Following are some of the primary findings from the current analysis along with some key points.

* The terms \(\rho\), \(p_{t}\), \(p_{r}\) are critical physical factors for the creation of stellar formations. The related plots and the statistics connected to the strange star candidates under our investigation indicate that the energy density near the star core reaches its maximum value, indicating the star's high-density nature. Furthermore, the tangential and radial pressure terms remain positive and show monotonically decreasing behavior towards the boundary of the star. These patterns also provide evidence for the presence of an anisotropic matter distribution free from singularities for the RI gravity under consideration. Additionally, the plots of the quintessence energy density show a negative trend, indicating that our stellar RI gravity model supports quintessence.

* The importance of energy restrictions in the literature on compact stellar structures is fairly clear. All the related plots show that the energy constraints are satisfied, proving that our obtained solutions are physically viable under RI gravity. The gradient profiles \(\frac{d\rho}{dr}\), \(\frac{dp_{r}}{dr}\) and \(\frac{dp_{t}}{dr}\) of the compact stars are shown in the respective plots. The first-order derivatives evolve negatively, which verifies the upper bound of the radial pressure \(p_{r}\) with respect to the density \(\rho\). As a result, at \(r=0\), \(\rho\) and \(p_{r}\) attain their maximum values.

* The associated sound speed graphs are illustrated in the relevant figures, which demonstrate that the radial as well as the transversal sound speeds for all presented stars are confined within the required limits of consistency. The graphical developments demonstrate that the condition \(0<|v_{st}^{2}-v_{sr}^{2}|<1\) is met for the stable behavior of a compact star.

* The \(EoS\) parameters for the stars \(HerX-1\), \(SAXJ1808.4-3658\), and \(4U1820-30\) have also been plotted for various values of \(\omega\). It is concluded that the constraints \(\omega_{r}\) and \(\omega_{t}\) of the \(EoS\) remain positive in the stellar interior and are within the stability bounds \(0\leq\omega_{r},\ \omega_{t}<1\) under RI gravity.

* The anisotropy \(\Delta\) with respect to \(r\) of the stars under observation shows a positive increasing behavior, as illustrated in the associated graphs, indicating repelling anisotropic forces accommodated by a dense matter source.

* Finally, the compactness parameter, mass function, and redshift are also plotted. It can be observed that the compactness \(\mathcal{K}(r)\leq 0.30\) and has an increasing tendency, whereas \(m(r)\) also has an increasing character. Further, \(Z_{s}\) has a decreasing attribute and is always \(Z_{s}\leq 5\), which is associated with the stability of the compact structures.
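As a numerical companion to the stability bullets above, the following sketch estimates the sound speeds \(v_{r}^{2}=dp_{r}/d\rho\), \(v_{t}^{2}=dp_{t}/d\rho\) and evaluates the causality and Abreu conditions from sampled profiles; the finite-difference gradients and the function name are assumptions of the example.

```
import numpy as np

def stability_checks(rho, p_r, p_t):
    """Causality (0 <= v^2 <= 1) and Abreu (0 < |v_t^2 - v_r^2| < 1) checks,
    with the sound speeds estimated by finite differences along the profile."""
    v_r2 = np.gradient(p_r, rho)
    v_t2 = np.gradient(p_t, rho)
    causal = ((v_r2 >= 0) & (v_r2 <= 1) & (v_t2 >= 0) & (v_t2 <= 1)).all()
    abreu = ((np.abs(v_t2 - v_r2) > 0) & (np.abs(v_t2 - v_r2) < 1)).all()
    return {"causal": bool(causal), "abreu": bool(abreu), "v_r2": v_r2, "v_t2": v_t2}
```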
## Appendix (I) \[\chi_{1} =-\frac{2r^{2}}{r^{2}(A-B)+e^{Ar^{2}}-1}+\frac{3}{B\left(r^{2}(B-A)+ 3\right)}+\frac{1}{Br^{2}(B-A)-2A+B},\] \[\chi_{2} =-72A^{3}r^{4}+143A^{2}Br^{4}+114A^{2}r^{2}-88AB^{2}r^{4}-117ABr^{ 2}+72A+30B^{2}r^{2}-45B,\] \[\chi_{3} =10A^{4}r^{6}-29A^{3}Br^{6}+30A^{2}B^{2}r^{6}-13AB^{3}r^{6}+2B^{4} r^{6}+17B^{3}r^{4},\] \[\chi_{4} =-2Br^{2}e^{2Ar^{2}}\left(r^{2}(B-A)+3\right)\left(r^{2}\left(2A^{ 2}-3AB+B^{2}\right)-8A+5B\right),\] \[\chi_{5} =\frac{2r^{2}\left(r^{2}\left(2A^{2}-3AB+B^{2}\right)-8A+5B \right)}{B\left(r^{2}(B-A)+3\right)^{3}},\] \[\chi_{6} =3A^{4}r^{3}\left(Br^{2}+2\right)^{2}-A^{3}r\left(Br^{2}+2 \right)\left(Br^{2}(Br(r+6)+25)-2\right)+A^{2}Br(Br(r(Br(r(B(2r+3)+4)+35)\right.\] \[\left.+61)+14\right)-8)-AB^{2}(r(Br(r(B(r(Br+5)+8)+3)+45)-3)+4)+B^ {3}\left(Br\left(Br^{2}+r+11\right)+2\right),\] \[\chi_{7} =\frac{1}{\frac{Br^{2}(A-B)+2A-B}{2r^{2}}-\frac{1}{Br(r^{2}(B-A) +3)}}+\frac{1}{r^{2}(A-B)+e^{Ar^{2}}-1},\] \[\chi_{8} =\frac{r^{2}\left(2A^{2}-3AB+B^{2}\right)-8A+5B}{B\left(r^{2}(B- A)+3\right)^{3}},\] \[\chi_{9} =r^{4}\left(3A^{2}-10AB-B^{2}\right)+e^{Ar^{2}}\left(r^{6}(A-B) \left(3A^{2}-8AB+B^{2}\right)+2Ar^{8}(A-B)^{2}(A+B)-2r^{4}(A-B)\right.\] \[\left.\left.\left(17A-B\right)+r^{2}(27A-7B)+8\right)-8A^{2}r^{8} (A-B)^{2}+2Ar^{6}(A-B)(9A-B)+2r^{2}(4B-11A)+e^{3Ar^{2}}\right.\] \[\times\left(r^{2}(A+B)+2\right)+e^{2Ar^{2}}\left(2Ar^{6}(A-B)(A+ B)+2Ar^{4}(A-3B)-2r^{2}(3A+B)-7\right)-3,\] \[\chi_{10} =\frac{\alpha e^{2Ar^{2}}}{r^{2}(A-B)+e^{Ar^{2}}-1}+\frac{A-2B}{ r^{2}}+B(A-B).\] \[\Psi_{1} =\frac{2r^{2}\left(r^{2}\left(2A^{2}-3AB+B^{2}\right)-8A+5B\right) }{B\left(r^{2}(B-A)+3\right)^{3}}+\frac{1}{r\left(B(Br+1)-A\left(Br^{2}+2 \right)\right)^{4}},\] \[\Psi_{2} =3A^{4}r^{3}\left(Br^{2}+2\right)^{2}-A^{3}r\left(Br^{2}+2 \right)\left(Br^{2}(Br(r+6)+25)-2\right)+A^{2}Br(Br(r(Br(r(B(2r+3)+4)\right.\] \[\left.+35)+61)+14)-8)-AB^{2}(r(Br(r(B(Br(Br+5)+8)+3)+45)-3)+4)+B^ {3}\left(Br\left(Br^{2}+r+11\right)+2\right),\] \[\Psi_{3} =-\frac{2r^{2}}{r^{2}(A-B)+e^{Ar^{2}}-1}+\frac{3}{B\left(r^{2}(B- A)+3\right)}+\frac{1}{Br^{2}(B-A)-2A+B},\] \[\Psi_{4} =10A^{4}r^{6}-29A^{3}Br^{6}-72A^{3}r^{4}+30A^{2}B^{2}r^{6}-2Br^{2 }e^{2Ar^{2}}\left(r^{2}(B-A)+3\right)\left(r^{2}\left(2A^{2}-3AB+B^{2}\right)-8 A+5\right.\] \[\times B)+143A^{2}Br^{4}+114A^{2}r^{2}-13AB^{3}r^{6}-88AB^{2}r^{4 }-117ABr^{2}+72A+2B^{4}r^{6}+17B^{3}r^{4}+30B^{2}r^{2}-45B,\] \[\Psi_{5} =\frac{\frac{1}{Br^{2}(A-B)+2A-B}-\frac{1}{B(r^{2}(B-A)+3)}}{2r^{ 2}}+\frac{1}{r^{2}(A-B)+e^{Ar^{2}}-1},\] \[\Psi_{6} =\frac{r^{2}\left(2A^{2}-3AB+B^{2}\right)-8A+5B}{B\left(r^{2}(B- A)+3\right)^{3}}+\frac{1}{\left(r^{2}(A-B)+e^{Ar^{2}}-1\right)^{4}},\] \[\Psi_{7} =r^{4}\left(3A^{2}-10AB-B^{2}\right)-8A^{2}r^{8}(A-B)^{2}+2Ar^{6}( A-B)(9A-B)+2r^{2}(4B-11A)+e^{3Ar^{2}}\] \[\times\left(r^{2}(A+B)+2\right)+e^{2Ar^{2}}\left(2Ar^{6}(A-B)(A+ B)+2Ar^{4}(A-3B)-2r^{2}(3A+B)-7\right)-3,\] \[\Psi_{8} =r^{6}(A-B)\left(3A^{2}-8AB+B^{2}\right)+2Ar^{8}(A-B)^{2}(A+B)-2r^ {4}(A-B)(17A-B)+r^{2}(27A-7B)+8\]
2309.01684
CRUISE-Screening: Living Literature Reviews Toolbox
Keeping up with research and finding related work is still a time-consuming task for academics. Researchers sift through thousands of studies to identify a few relevant ones. Automation techniques can help by increasing the efficiency and effectiveness of this task. To this end, we developed CRUISE-Screening, a web-based application for conducting living literature reviews - a type of literature review that is continuously updated to reflect the latest research in a particular field. CRUISE-Screening is connected to several search engines via an API, which allows for updating the search results periodically. Moreover, it can facilitate the process of screening for relevant publications by using text classification and question answering models. CRUISE-Screening can be used both by researchers conducting literature reviews and by those working on automating the citation screening process to validate their algorithms. The application is open-source: https://github.com/ProjectDoSSIER/cruise-screening, and a demo is available under this URL: https://citation-screening.ec.tuwien.ac.at. We discuss the limitations of our tool in Appendix A.
Wojciech Kusa, Petr Knoth, Allan Hanbury
2023-09-04T15:58:43Z
http://arxiv.org/abs/2309.01684v1
# CRUISE-_Screening_: Living Literature Reviews Toolbox

###### Abstract.

Keeping up with research and finding related work is still a time-consuming task for academics. Researchers sift through thousands of studies to identify a few relevant ones. Automation techniques can help by increasing the efficiency and effectiveness of this task. To this end, we developed CRUISE-_Screening_, a web-based application for conducting living literature reviews - a type of literature review that is continuously updated to reflect the latest research in a particular field. CRUISE-_Screening_ is connected to several search engines via an API, which allows for updating the search results periodically. Moreover, it can facilitate the process of screening for relevant publications by using text classification and question answering models. CRUISE-_Screening_ can be used both by researchers conducting literature reviews and by those working on automating the citation screening process to validate their algorithms. The application is open-source,1 and a demo is available under this URL: [https://citation-screening.ec.tuwien.ac.at](https://citation-screening.ec.tuwien.ac.at). We discuss the limitations of our tool in Appendix A.

information retrieval, natural language processing, literature reviews, living reviews, systematic reviews, citation screening

Footnote 1: [https://github.com/ProjectDoSSIER/cruise-screening](https://github.com/ProjectDoSSIER/cruise-screening)
potential to promote collaboration and facilitate the exchange of ideas among researchers.

## 2. Related Work

### Academic Search Engines

Private academic search engines, citation indices and paywalled collections such as ScienceDirect and Web of Science are one source of finding publications.
Public search engines and publication aggregators such as Google Scholar2, Semantic Scholar [(2)], CORE [(20)], OpenAlex [(35)] and PubMed3 are becoming increasingly popular for allowing researchers to freely access the latest publications. Their main goal is creating a citation network, and their support for conducting systematic literature reviews is often minimal. Moreover, only a few of these tools provide an API, and none of them allow for a traditional systematic literature review workflow. Footnote 2: [https://www.scholar.google.com/](https://www.scholar.google.com/) Footnote 3: [https://www.ncbi.nlm.nih.gov/pubmed/](https://www.ncbi.nlm.nih.gov/pubmed/) ### Systematic Review Toolboxes There is already a number of tools helping researchers conduct systematic literature reviews. An online catalogue4 enumerates 45 tools helping users during the screening phase, whereas Harrison et al. [(14)] found 15 of them accessible and available without specific computing infrastructure for a title and abstract screening step. Dedicated commercial tools exist to help medical researchers conduct systematic literature reviews. They are usually customised to medical reviews and require purchasing a subscription which can be a bottleneck to academic researchers from lower- and lower-middle income countries [(28; 30)]. Footnote 4: [http://www.systematicreviewtools.com](http://www.systematicreviewtools.com) In addition to the commercial tools, a plethora of free or open-source tools is available, usually created by academics. These tools, such as Abstrakr [(42)], Rayyan [(11)], or ASRreview [(41)] usually support only one of the systematic review stages. ### Automated Citation Screening All the documents retrieved from the search step constitute the input to the citation screening step. In a manual screening scenario, reviewers read all these documents to select only the fraction relevant to the systematic review. Because the total number of retrieved studies can go into tens of thousands, it is essential to find a way of improving this process [(32)]. Automated citation screening is an umbrella term for using NLP, machine learning and information retrieval (IR) techniques with the goal of decreasing the time spent on manual screening. Classification approaches train a supervised model on an annotated dataset to determine whether a paper should be included or excluded from the review [(23; 24)]. Previous approaches ranged from statistical models like naive Bayes classification [(3; 27)], support vector machine (SVM) [(7; 8; 15; 26)], voting perceptron [(6)] and random forest [(19)] to neural networks [(21; 22)]. A significant limitation of all these approaches is the need for a large number of manual annotations that must be completed before developing a reliable model for every new systematic review [(40)]. Moreover, the majority of the classification models are evaluated only retrospectively which might raise questions of data leakage when considering large amounts of data used for pretraining language models [(25)]. ## 3. Cruise-Screening Figure 1 shows the architecture of CRUISE-_Screening_, which is built with Python 3.9, Django 4, Bulma and AlpineJS frameworks. ### Data Resources Good quality input data covering multiple domains is the crucial ingredient of a successful literature review. Nussbaumer-Streit et al. [(31)] found that combining two separate databases may suffice to reliably determine the conclusions of a systematic review in medicine. 
Therefore, CRUISE-_Screening_ was designed to use multiple data sources and to allow for extending them when needed. Currently, it supports the following four search engines as data sources: Semantic Scholar API5, CORE API6, PubMed via Entrez API7 and internal document storage. Footnote 5: [https://www.semanticscholar.org/product/api](https://www.semanticscholar.org/product/api) Footnote 6: [https://www.ncbi.nlm.nih.gov/search/](https://www.ncbi.nlm.nih.gov/search/) Footnote 7: [https://www.sytemanticreviewtools.com](https://www.sytemanticreviewtools.com) The first three APIs call search engines that are used as primary data sources when searching for documents. Using three different search engines with contrasting scopes and content enables good search results coverage. The tool also allows for indexing documents in the internal database. It is implemented using Elasticsearch and communicates with the main application using the API. It can be used, for example, when one wants to store private documents or content not covered by other search engines. For this demo, we index the DBLP-Citation-network Version 13 collection8 created by Tang et al. [(39)]. Footnote 8: [https://www.semanticscholar.org/product/api](https://www.semanticscholar.org/product/api) The system could be expanded to connect to other search engines offering API access. As the system is a meta-search engine, we use a script to deduplicate the search results based on papers' metadata. ### Screening Workflow The typical screening workflow for systematic literature reviews consists of two stages. In the first stage, the researcher searches for documents potentially related to the research topic. In the second stage, the documents are screened for relevance. We have implemented these two stages inside CRUISE-_Screening_. _Search for relevant items._ First, the user creates a new literature review by defining the research protocol 1. The protocol (Figure 2) consists of the review's title, description, at least one search query and a set of inclusion and exclusion criteria (eligibility criteria). The tool allows for specifying search engines, by default selecting all four available sources described in Section 3.1. The search can be limited to only the first \(N\) results if the reviewer is not interested in a comprehensive literature review. CRUISE-_Screening_ sends API requests to selected search engines and gathers all responses 2. Merged and deduplicated search results are stored in a PostgreSQL database 3. In order to support living reviews, the user can re-run the search function periodically to update the list of references. However, since search engines only allow filtering by publication year and not month or day, the tool removes publications older than the year of the previous search during updates. The tool then relies on deduplication to ensure that new publications are not mistakenly added twice. CRUISE-_Screening_ also allows for the additional direct import of data for screening from two sources * Currently, the tool supports bib and ris file extensions. These publications are imported to the PostgreSQL database. * These files are processed using GROBID (Bordes et al., 2017) and then added to the database. Documents added this way can also be marked as seed studies. This way, these new documents are labelled as _relevant_, which can speed up the process of automated screening. 
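The update-and-deduplicate cycle described above can be summarised in a few lines. The sketch below is only an illustration of the idea; the record fields, the `search_fn` callable, and the function names are assumptions and do not reproduce the actual CRUISE-_Screening_ code.

```
def dedup_key(paper):
    """Prefer the DOI when present, otherwise fall back to a normalised title."""
    doi = (paper.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    return ("title", " ".join((paper.get("title") or "").lower().split()))

def update_living_review(review, search_fn, current_year):
    """Re-run the stored queries, keep results not older than the year of the
    previous search, and merge them into the review without duplicates."""
    seen = {dedup_key(p) for p in review["papers"]}
    for query in review["queries"]:
        for paper in search_fn(query):
            if paper.get("year", 0) < review.get("last_search_year", 0):
                continue                      # older than the previous search year
            key = dedup_key(paper)
            if key not in seen:
                seen.add(key)
                review["papers"].append(paper)
    review["last_search_year"] = current_year
    return review
```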
_Citation screening._ Currently, CRUISE-_Screening_ implements the title and abstract screening while providing external URLs to full-text articles whenever available. Figure 3 presents an example screening interface. From the top, it contains the title, abstract, authors, publication venue and year, and the link to the full text of the screened paper. Below are two sections with eligibility criteria questions and a main inclusion question. There are two screening workflows in CRUISE-_Screening_: strict and relaxed. Strict screening requires the annotators to conduct the process by manually answering every eligibility question. It mimics the citation screening process of systematic reviews. This mode could be used for in-depth systematic reviews or for gathering manual annotations for training machine learning models. The relaxed mode does not impose any requirements on which questions the annotator should answer except for the main _include_/_maybe_/_exclude_ decision. There are optional questions about the reviewer's prior knowledge of the paper and authors, which reviewers can turn on to control for selection bias. The output of the literature review can be exported in a JSON format. It contains the literature review protocol and all identified studies with corresponding automatic and manual decisions.

### Automation Methods

In addition to the fully manual workflow, CRUISE-_Screening_ implements automation methods to increase the speed and coverage of the literature review. Implemented approaches include supervised text classification and zero-shot question-answering models. The tool connects to them using an API, which allows for extending the list of supported models.

_Text classification._ We implemented two examples of supervised classifiers based on previous literature: a logistic regression model using tf-idf text representation and a fastText classifier (Krizhevsky et al., 2014). These models provide a single _yes/no_ decision for each paper (corresponding to the main eligibility question from the manual workflow). Reviewers need to annotate a "training set" of at least three included and three excluded papers before using the models. When the reviewer annotates more publications, the models can be retrained to make an updated prediction on the remaining documents.

Figure 1. Overview of the CRUISE-_Screening_ architecture.

Figure 2. Example literature review protocol containing review title, description, search queries and criteria for inclusion and exclusion.

_Question answering._ In addition to supervised text classification, CRUISE-_Screening_ enables users to conduct automatic screening using prompt-based language models with a question answering approach. The advantage of this method is that it does not require pre-labelled data and can make predictions for all inclusion and exclusion criteria. However, it can be computationally intensive and sensitive to the quality of input questions. The API is designed to support any Text2TextGeneration model implemented in the HuggingFace Transformers (Hugging et al., 2019) library. We used the T0_3B and T0 models (Zhu et al., 2020). The example prompt consists of a single eligibility question and the same paper data as available during manual screening (Figure 3), namely the title, abstract, authors, journal name and publication year.
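A minimal sketch of the tf-idf plus logistic regression screening baseline described above, written with scikit-learn; it mirrors the idea (train on a handful of labelled papers, predict include/exclude for the rest) rather than the exact CRUISE-_Screening_ implementation, and the function name and data layout are assumptions.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def screen_with_tfidf_lr(labelled, unlabelled):
    """labelled: list of (title_and_abstract, included_bool) pairs, with at least
    three included and three excluded papers; unlabelled: list of texts to score.
    Returns predicted include/exclude decisions for the unlabelled papers."""
    texts, labels = zip(*labelled)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(list(texts), list(labels))
    return model.predict(unlabelled)
```

The model can simply be refit on the growing set of labelled papers whenever the reviewer adds more annotations, which corresponds to the retraining step described above.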
## 4. Discussion and Conclusion

The merging of results from multiple sources can present significant challenges in the context of scientific publications. Although using multiple data sources can lead to better coverage of relevant studies, combining the results from different sources is not a trivial task. The data can have different formats, fields, and identifiers, which require significant effort to reconcile. Additionally, data quality can be poor in some cases, which can further complicate the merging process. Therefore, careful consideration must be given to the merging process, and the use of automated tools can help improve the accuracy and efficiency of the process. Nevertheless, human intervention may still be required to resolve any inconsistencies or errors in the data (Kus

## Acknowledgements

This work was supported by the EU Horizon 2020 ITN/ETN on Domain Specific Systems for Information Extraction and Retrieval - DoSSIER (H2020-EU.1.3.1, ID: 860721). We thank all the members of the DoSSIER Project who contributed to the CRUISE workshop.
2307.01328
A maximal inequality for local empirical processes under weak dependence
We introduce a maximal inequality for a local empirical process under strongly mixing data. Local empirical processes are defined as the (local) averages $\frac{1}{nh}\sum_{i=1}^n \mathbf{1}\{x - h \leq X_i \leq x+h\}f(Z_i)$, where $f$ belongs to a class of functions, $x \in \mathbb{R}$ and $h > 0$ is a bandwidth. Our nonasymptotic bounds control estimation error uniformly over the function class, evaluation point $x$ and bandwidth $h$. They are also general enough to accommodate function classes whose complexity increases with $n$. As an application, we apply our bounds to function classes that exhibit polynomial decay in their uniform covering numbers. When specialized to the problem of kernel density estimation, our bounds reveal that, under weak dependence with exponential decay, these estimators achieve the same (up to a logarithmic factor) sharp uniform-in-bandwidth rates derived in the iid setting by \cite{Einmahl2005}.
Luis Alvarez, Cristine Pinto
2023-07-03T20:02:40Z
http://arxiv.org/abs/2307.01328v1
# A maximal inequality for local empirical processes under weak dependence ###### Abstract. We introduce a maximal inequality for a local empirical process under strongly mixing data. Local empirical processes are defined as the (local) averages \(\frac{1}{nh}\sum_{i=1}^{n}\mathbf{1}\{x-h\leq X_{i}\leq x+h\}f(Z_{i})\), where \(f\) belongs to a class of functions, \(x\in\mathbb{R}\) and \(h>0\) is a bandwidth. Our nonasymptotic bounds control estimation error uniformly over the function class, evaluation point \(x\) and bandwidth \(h\). They are also general enough to accommodate function classes whose complexity increases with \(n\). As an application, we apply our bounds to function classes that exhibit polynomial decay in their uniform covering numbers. When specialized to the problem of kernel density estimation, our bounds reveal that, under weak dependence with exponential decay, these estimators achieve the same (up to a logarithmic factor) sharp uniform-in-bandwidth rates derived in the iid setting by Einmahl and Mason (2005). Alvarez: Post-doctoral researcher, Fundacao Getulio Vargas. Pinto: Full professor, Insper. ## 1. Introduction In this paper, we introduce a maximal inequality for a local empirical process under exponential decay of the sample \(\alpha\)-mixing coefficients. We provide a nonasymptotic bound on an Orlicz norm of the uniform estimation error of the local-average process \(\frac{1}{nh}\sum_{i=1}^{n}\mathbf{1}\{x-h\leq X_{i}\leq x+h\}f(Z_{i})\), where uniformity holds simultaneously over the evaluation point \(x\in\mathbb{R}\), bandwidth \(h\in[a_{n},b_{n})\), and function \(f\in\mathcal{F}\), with \(a_{n}\leq b_{n}\) being positive constants and \(\mathcal{F}\) a function class. The nonasymptotic nature of our results allows for function classes whose complexity, as measured by their uniform covering numbers, increases with \(n\). The latter is especially useful for applications in high-dimensional statistics (see Belloni et al. (2017) for an example). We also discuss how to extend our results to multidimensional \(x\) and subgeometric decay of mixing coefficients. We then apply our results to function classes \(\mathcal{F}\) that exhibit polynomial decay in their uniform covering numbers. This class is particularly interesting, as polynomial decay in uniform entropy is ensured by a finite VC dimension (van der Vaart and Wellner, 1996, Theorem 2.6.7). When specialized to the problem of kernel density estimation, our results show that, under exponential decay in the mixing coefficients, kernel estimators achieve the same, up to a logarithmic factor, sharp uniform-in-bandwidth rates obtained by Einmahl and Mason (2005) in the iid setting. To the best of our knowledge, the closest paper to ours is Escanciano (2020), who provides uniform-in-bandwidth asymptotic rates for local empirical processes under stationarity, a \(\beta\)-mixing condition and polynomial decay in bracketing entropy. There are some important differences between his approach and ours, though. First, Escanciano's analysis is asymptotic, thus not applicable to function classes of increasing complexity. Second, while the author uses bracketing entropy to control complexity, our analysis relies on uniform covering numbers. Finally, his main result (Theorem 2.1) leads to a rate of \(a_{n}^{-1/2}\) over the function classes encompassed by his setting.
When compared to our application in Section 3, we see that our resulting rates are logarithmic in \(a_{n}\), thus offering a great improvement over his results in kernel-type problems. The remainder of this paper is organized as follows. Section 2 introduces our main result. Section 3 applies it to function classes with uniform entropy displaying polynomial decay. Section 4 concludes. ## 2. A maximal inequality for local empirical processes We present our main result in the theorem below. In what follows, we define, for some \(p\geq 0\), \(\psi_{p}(x)\coloneqq\exp(-x^{p})-1\); and, for a random variable \(Z\), the Orlicz norm with respect to \(\psi_{p}\) is given by: \[\|Z\|_{\psi_{p}}\coloneqq\inf\left\{C>0:\mathbb{E}\left[\psi_{p}\left(\frac{|Z|}{C}\right)\right]<\infty\right\}\,.\] Moreover, for a sequence \(\{Y_{i}\}_{i=1}^{n}\) of random variables defined on a common probability space \((\Omega,\Sigma,\mathbb{P})\), we define the \(\alpha\)-mixing coefficients as, for each \(k\in\mathbb{N}\): \[\alpha(k)\coloneqq\sup_{t\in\mathbb{N}}\,\sup_{A\in\sigma(Y_{1},\ldots,Y_{t}),B\in\sigma(Y_{t+k},Y_{t+k+1},\ldots)}\left|\mathbb{P}[A\cap B]-\mathbb{P}[A]\mathbb{P}[B]\right|.\] **Theorem 1**.: _Let \(\{(X_{i},Z_{i})\}_{i\in\mathbb{N}}\) be a sequence of random variables defined on a common probability space \((\Omega,\Sigma,\mathbb{P})\), with mixing coefficients satisfying, for some \(c>0\), \(\alpha(i)\leq\exp(-2ci)\ \forall i\in\mathbb{N}\). Suppose that each \(X_{i}\) is real valued, with common distribution function \(G\) that admits a bounded Lebesgue density \(g\). Suppose that each \(Z_{i}\) takes values on a measurable space \((\mathcal{Z},\Sigma)\). Let \(\mathcal{F}\) be a pointwise measurable class of real-valued functions with domain \(\mathcal{Z}\). Define:_ \[S_{n}(x,f;h)\coloneqq\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}\mathbf{1}\{x-h\leq X_{i}\leq x+h\}f(Z_{i})\,,\quad x\in\mathbb{R},f\in\mathcal{F},h\in[a_{n},b_{n})\,,\] _where \(0<a_{n}\leq b_{n}\) are positive constants, with \(\frac{1}{\|g\|_{\infty}n}\leq b_{n}<\frac{1}{\|g\|_{\infty}}\)._ _Suppose that the class of functions \(\mathcal{F}\) admits a measurable and bounded envelope \(\kappa\). Assume that, for some \(q\in(1,\infty]\), \(\omega(\epsilon)\coloneqq\sup_{P\in\mathcal{P}(\mathcal{Z})}\mathcal{N}(2\epsilon\|\kappa\|_{\infty},\mathcal{F},\|\cdot\|_{q,P})\) is finite for every \(\epsilon>0\), where \(\|f\|_{P,q}=(P|f|^{q})^{1/q}\) and the supremum is taken over all probability measures on \((\mathcal{Z},\Sigma)\)._
Then, for every \(n\geq 2\):_ \[\left\|\sup_{x\in\mathbb{R},h\in[a_{n},b_{n}),f\in\mathcal{F}}|S_{n}(x,f;h)-\mathbb{E}[S_{n}(x,f;h)]|\right\|_{\psi_{1}}\leq C\frac{\|\kappa\|_{\infty}}{\sqrt{na_{n}}}+\frac{D\|\kappa\|_{\infty}}{\sqrt{na_{n}}}\sum_{l=\lceil-\log_{2}(\|g\|_{\infty}b_{n})\rceil}^{\lceil\log_{2}(n)\rceil}\inf_{\delta\geq 0}\psi_{l}(\delta)\,, \tag{1}\] _where \(C>0\) is an absolute constant; the constant \(D>0\) depends solely on \(c\); and:_ \[\begin{split}\psi_{l}(\delta)&=\left(\log(n)^{2}[(l+1)\vee\log(\omega(\delta))]+\sqrt{\frac{ln}{2^{l}}\lor 1}\sqrt{[(l+1)\vee\log(\omega(\delta))]}\right)+\\ n\delta&\left[\left(n^{-1}\log(n)^{2}(l+1)+\sqrt{\frac{l}{2^{l}n}\vee\frac{1}{n^{2}}}\sqrt{(l+1)}\right)^{\frac{q-1}{q}}+\frac{1}{2^{\frac{(q-1)}{q}(l+1)}}\right]\,.\end{split} \tag{2}\] The proof of Theorem 1 is deferred to Appendix A. It relies on a chaining argument due to Rio (2017, Proposition 7.2), which we couple with the Bernstein inequality of Merlevede et al. (2009) to help achieve control of the error of the localizing function \(\mathbf{1}\{x\leq X\leq x+h\}\) uniformly over the bandwidth. Note that our bound is nonasymptotic, with the constant \(D\) depending solely on the decay parameter \(c\). Our bound also depends on minimising the functions \(\psi_{l}(\delta)\). This minimisation involves finding a "small" \(\delta\) that is able to ensure appropriate control of the uniform entropy \(\omega(\delta)\). In the next section, we show that, for classes with polynomial decay in their uniform entropy, a useful upper bound to this infimum can be computed. **Remark 1** (Extension to nid \(X\)).: It is possible to extend our results to the setting where the distribution of \(X_{i}\) varies with \(i\). In this case, one has to adapt Lemma 1 in Appendix A to allow for a different localization parameter \(K_{i}\) for every \(i\). Since this brings clutter to the proofs and does not reveal any new insights, for ease of exposition, we focus on the case where the \(X_{i}\) are identically distributed. Observe that, in the statement of Theorem 1, we do not require the \(Z_{i}\) to be identically distributed. **Remark 2** (Extension to multidimensional \(X\)).: It is possible to extend our results to the setting where \(X\) is multidimensional by relying on Lemma 7.2 of Rio (2017). This result decomposes the difference between a \(d\)-dimensional cdf at two points as a sum of \(d\) differences of the \(d\) marginal cdfs at different evaluation points. Using this result, we may extend Lemma 1 in Appendix A to the multivariate setting, and use it to prove a multivariate version of Theorem 1. **Remark 3** (Extension to unbounded envelopes and subgeometric decay).: The statement of Theorem 1 requires the function class to display a bounded envelope and the mixing coefficients to exhibit exponential (geometric) decay. These assumptions can be relaxed by replacing the Bernstein inequality of Merlevede et al. (2009) with a result due to Merlevede et al. (2010). In this case, a tradeoff between the tail behaviour of the envelope function and the decay of mixing coefficients emerges. Consequently, we achieve control over the uniform estimation error under a different Orlicz norm. ## 3. Application: function classes with polynomial decay In this section, we apply our results to classes of functions that exhibit polynomial decay in their covering numbers.
Specifically, in the setting of Theorem 1, we take \(q=2\) and consider a sequence of bounded function classes \(\{\mathcal{F}_{n}\}_{n\in\mathbb{N}}\) with corresponding envelope \(\kappa_{n}\), whose uniform entropy numbers \(\omega_{n}\) satisfy: \[\omega_{n}(\delta)\leq C_{n}\left(\frac{1}{\delta}\right)^{v_{n}}\,,\quad\forall\delta\in(0,1],\] for positive constants \(C_{n}>0\), \(v_{n}>0\), \(n\in\mathbb{N}\). We also assume the envelopes to be uniformly bounded, with \(\sup_{n\in\mathbb{N}}\|\kappa_{n}\|_{\infty}\leq\phi<\infty\). Suppose that \(na_{n}\to\infty\). In this case, we may find a constant \(G>0\) such that, for every \(n\) above a certain threshold, \(\lceil-\log_{2}(\|g\|_{\infty}b_{n})\rceil\leq l\leq\lceil\log_{2}(n)\rceil\); and \(\delta<\exp\left(\frac{1}{v_{n}}(\log(C_{n})-(l+1))\right)\): \[\psi_{l}(\delta)\leq G\log(n)^{2}(\log(C_{n})-v_{n}\log(\delta))+G\sqrt{\frac{ln}{2^{l}}}\sqrt{(\log(C_{n})-v_{n}\log(\delta))}+Gn\delta\frac{\log(n)^{3/2}}{2^{(1/2)(l+1)}}\,, \tag{3}\] Consider the choice: \[\delta_{l}=\frac{C_{n}^{1/v_{n}}}{2^{(1/v_{n})(l+1)}\sqrt{n}(\|g\|b_{n})^{1/v_{n}}\log(n)^{3/2}}\,.\] Assume further that: \[\frac{\log(n)^{3}\log\log n}{\sqrt{na_{n}}}\to 0\,.\] In this case, we obtain the rate: \[\begin{split}&\left\|\sup_{x\in\mathbb{R},h\in[a_{n},b_{n}),f\in\mathcal{F}}|S_{n}(x,f;h)-\mathbb{E}[S_{n}(x,f;h)]|\right\|_{\psi_{1}}=\\ & O\left(\sqrt{\frac{b_{n}}{a_{n}}}(C_{n}^{1/v_{n}}\vee\sqrt{-\log(\|g\|b_{n})v_{n}\log(n)}\vee-\log(\|g\|b_{n}))\right)\,.\end{split} \tag{4}\] Our rate explicitly accommodates function classes of increasing complexity. If the sequence of functions has finite (albeit possibly increasing) VC dimension, then Theorem 2.6.7 of van der Vaart and Wellner (1996) shows that the \(v_{n}\) and \(C_{n}\) may be taken such that \(C_{n}^{1/v_{n}}\) is bounded. In this case, the complexity of the function class only affects the rate through the \(\sqrt{v_{n}}\) term, where \(v_{n}\) scales linearly with the VC dimension. Finally, we consider the problem of kernel density estimation. In this case, the function class is the same for every \(n\in\mathbb{N}\). If the bandwidths are taken such that \(a_{n}\asymp b_{n}\), then our rate simplifies to: \[\sqrt{-\log(a_{n})}\sqrt{\log(n)\vee-\log(a_{n})}\,.\] In contrast, Remark 2 in Einmahl and Mason (2005) shows that, in the iid setting, one can achieve the rate: \[\sqrt{\log\log(n)\vee-\log(a_{n})}\,,\] which coincides with our rate except for logarithmic factors. Note that, if we consider, as is usually done in practice, polynomial bandwidths, i.e. \(a_{n}=Cn^{-\alpha}\) for \(\alpha>0\), then our rate simplifies to \(\log(n)\), whereas Einmahl and Mason's collapses to \(\sqrt{\log(n)}\). ## 4. Concluding remarks In this paper, we introduced a maximal inequality for the uniform estimation error of a local empirical process under strongly mixing data, where uniformity holds simultaneously over the function class, bandwidth and evaluation point. Our nonasymptotic bounds accommodate function classes with increasing complexity, which is a useful feature for "high-dimensional" statistical analyses. As an application, we applied our bounds to function classes that exhibit polynomial decay in their uniform entropy. When specialized to the kernel density estimation problem, these results show that our bound leads to the same optimal rates derived by Einmahl and Mason (2005) in the iid setting.
More generally, we view our results as a first step in the development of rigorous uniform inference tools in local estimation problems under weak dependence and data-driven bandwidths. Specifically, one may combine our results with couplings in the weakly dependent setting (e.g. Cattaneo et al., 2022) to devise test statistics that control size uniformly over the evaluation point \(x\). An example is the construction of uniform-in-\(x\) confidence bands for local polynomial quantile regression estimators with time series data. We intend to study such procedures in future research. ## Appendix A Proof of Theorem 1 We first prove a preliminary result for uniform random variables. **Lemma 1**.: _Let \(\{(U_{i},Z_{i})\}_{i\in\mathbb{N}}\) be a sequence of random variables defined on a common probability space \((\Omega,\Sigma,\mathbb{P})\), with mixing coefficients satisfying, for some \(c>0\), \(\alpha(i)\leq\exp(-2ci)\ \forall i\in\mathbb{N}\). Suppose that each \(U_{i}\) is uniformly distributed on \([0,1]\), and that each \(Z_{i}\) takes values on a measurable space \((\mathcal{Z},\Sigma)\). Let \(\mathcal{F}\) be a nonnegative pointwise measurable class of real-valued functions with domain \(\mathcal{Z}\). Define:_ \[Z_{n}(u,f)\coloneqq\frac{1}{\sqrt{n}}\left(\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq u \}f(Z_{i})-\mathbb{E}[\mathbf{1}\{U_{i}\leq u\}f(Z_{i})]\right)\,,\quad u\in[0,1 ],f\in\mathcal{F}\,.\] _Suppose that the class of functions \(\mathcal{F}\) admits a measurable and bounded envelope \(\kappa\). Assume that, for some \(q\in(1,\infty]\), \(\omega(\epsilon)\coloneqq\sup_{P\in\mathcal{P}(\mathcal{Z})}\mathcal{N}(2 \epsilon\|\kappa\|_{\infty},\mathcal{F},\|\cdot\|_{q,P})\), is finite for every \(\epsilon>0\), where \(\|f\|_{P,q}=(P|f|^{q})^{1/q}\) and the supremum is taken over all probability measures on \((\mathcal{Z},\Sigma)\). Then, for every \(n\geq 2\) and \(K\in\mathbb{N}\) such that \(K\leq\lceil\log_{2}(n)\rceil\):_ \[\left\|\sup_{u\in[0,1],f\in\mathcal{F}}|Z_{n}(u,f)-Z_{n}(\Pi_{K}(u),f)| \right\|_{\psi_{1}}\leq\frac{\|\kappa\|_{\infty}}{\sqrt{n}}+D\|\kappa\|_{ \infty}\sum_{l=K+1}^{\lceil\log_{2}(n)\rceil}\inf_{\delta\geq 0}\psi_{l}( \delta)\,,\] _where the constant \(D>0\) depends solely on \(c\); for \(J\in\mathbb{N}\), \(\Pi_{J}(u)=\frac{\lfloor 2^{J}u\rfloor}{2^{J}}\); and:_ \[\psi_{l}(\delta) =\left(n^{-1/2}\log(n)^{2}[(l+1)\vee\log(\omega(\delta))]+\sqrt{ \frac{l}{2^{l}}\vee\frac{1}{n}}\sqrt{[(l+1)\vee\log(\omega(\delta))]}\right)+\] \[\sqrt{n}\delta\left[\left(n^{-1}\log(n)^{2}(l+1)+\sqrt{\frac{l}{2 ^{l}n}\vee\frac{1}{n^{2}}}\sqrt{(l+1)}\right)^{\frac{q-1}{q}}+\frac{1}{2^{ \frac{(q-1)}{q}(l+1)}}\right]\,.\] Proof.: We begin by adapting the chaining argument in the proof of Proposition 7.2. of Rio (2017). Fix \(f\in\mathcal{F}\). Observe that for \(L\geq K\), we may write: \[\sup_{u\in[0,1]}|Z_{n}(u,f)-Z_{n}(\Pi_{K}(u),f)|\leq\sum_{l=K+1}^{L}\Delta_{l} (f)+\Delta_{L}^{*}(f)\,,\] where \(\Delta_{l}(f)=\max_{k\in\{1,\ldots,2^{l}\}}|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)|\); and \(\Delta_{L}^{*}(f)=\sup_{u\in[0,1]}|Z_{n}(\Pi_{L}(u),f)-Z_{n}(\Pi(u),f)|\). 
Observe that: \[-\|\kappa\|_{\infty}\frac{\sqrt{n}}{2^{L}}\leq Z_{n}(\Pi(u),f)-Z_{n}(\Pi_{L}(u),f)\leq\Delta_{L}(f)+\|\kappa\|_{\infty}\frac{\sqrt{n}}{2^{L}}\,.\] From the above, we obtain that: \[\sup_{u\in[0,1]}|Z_{n}(u,f)-Z_{n}(\Pi_{K}(u),f)|\leq 2\sum_{l=K+1}^{L}\Delta_{l}(f)+\|\kappa\|_{\infty}\frac{\sqrt{n}}{2^{L}}\,,\] and, by taking \(L=\lceil\log_{2}(n)\rceil\), we arrive at: \[\sup_{u\in[0,1]}|Z_{n}(u,f)-Z_{n}(\Pi_{K}(u),f)|\leq 2\sum_{l=K+1}^{\lceil\log_{2}(n)\rceil}\Delta_{l}(f)+\|\kappa\|_{\infty}\frac{1}{\sqrt{n}}\,. \tag{5}\] The previous argument holds for a fixed \(f\). We achieve uniformity as follows. For a given sequence \(\{\delta_{l}\}_{l=K+1}^{\lceil\log_{2}(n)\rceil}\), we may write, for each \(l\): \[\sup_{f\in\mathcal{F}}\Delta_{l}(f)\leq\max_{f\in T_{l}}\max_{k\in\{1,\ldots,2^{l}\}}\left|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)\right|\] \[+\sup_{f\in\mathcal{F}:\|f-f^{\prime}\|_{q,\tilde{\mathbb{P}}_{n}}\vee\|f-f^{\prime}\|_{q,\hat{\mathbb{P}}_{n}}\leq\delta_{l}}\max_{k\in\{1,\ldots,2^{l}\}}\left|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)-(Z_{n}((k-1)/2^{l},f^{\prime})-Z_{n}(k/2^{l},f^{\prime}))\right|, \tag{6}\] with \(T_{l}\) being, simultaneously, a \(2\delta_{l}\|\kappa\|_{\infty}\)-cover of \(\mathcal{F}\) with respect to \(\|\cdot\|_{q,\tilde{\mathbb{P}}_{n}}\) and \(\|\cdot\|_{q,\hat{\mathbb{P}}_{n}}\), where \(\|f\|_{q,\tilde{\mathbb{P}}_{n}}=\left(\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[|f(Z_{i})|^{q}]\right)^{1/q}\) and \(\hat{\mathbb{P}}_{n}\) denotes the empirical measure \(\frac{1}{n}\sum_{i=1}^{n}\delta_{Z_{i}}\). Observe that, by the assumption in the statement of the lemma, \(|T_{l}|\leq\omega(\delta_{l})^{2}\); and that the set \(T_{l}\) is random. By filling the \(T_{l}\) with constant functions equal to zero whenever \(|T_{l}|<\omega(\delta_{l})^{2}\), we may assume, without loss of generality, that \(|T_{l}|=\omega(\delta_{l})^{2}\) and we may thus order the random elements in \(T_{l}=\{f_{1,l},f_{2,l},\ldots,f_{\omega(\delta_{l})^{2},l}\}\). We thus consider \(T_{l}\) to be a set of \(\omega(\delta_{l})^{2}\) random functions. We will deal with each term on the right-hand side of (6) separately. Consider the first term. For a given \(l\in\mathbb{N}\) and some \(k\in\{1,\ldots,2^{l}\}\) and \(f\in T_{l}\), define \(Y_{i,l}(k,f)=\frac{1}{\sqrt{n}}f(Z_{i})\mathbf{1}\{(k-1)/2^{l}\leq U_{i}\leq k/2^{l}\}\). Rio's covariance inequality (Rio, 2017, Theorem 1.1) yields: \[\sum_{s\geq i}|\mathbb{C}(Y_{i,l}(k,f),Y_{s,l}(k,f))|\leq\frac{4\|\kappa\|_{\infty}^{2}}{n}\sum_{s=0}^{\infty}\alpha(s)\wedge\frac{1}{2^{l}}\leq\frac{4\|\kappa\|_{\infty}^{2}}{n}\left(\frac{l\log(2)}{2c}\frac{1}{2^{l}}+\frac{1}{2^{l-1}}\frac{1}{1-\exp(-2c)}\right)=:v_{l}^{2}n^{-1}\|\kappa\|_{\infty}^{2}.\] It then follows by Theorem 2 of Merlevede et al.
(2009) that, for a constant \(C\) depending only on \(c\), we have that, for any \(n\geq 2\) and every \(l\in\mathbb{N}\), \(k\in\{1,2,\ldots,2^{l}\}\) and \(f\in T_{l}\): \[\mathbb{P}[|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)|\geq x]\leq\exp\left(-\frac{Cx^{2}}{v_{l}^{2}\|\kappa\|_{\infty}^{2}+\|\kappa\|_{\infty}^{2}n^{-1}+x\|\kappa\|_{\infty}n^{-1/2}\log(n)^{2}}\right),\quad\forall x\geq 0\,.\] By Lemma 2.2.10 of van der Vaart and Wellner (1996), there is a constant \(K\) depending only on \(c\) such that, for every \(l\in\mathbb{N}\) and \(n\geq 2\): \[\begin{split}\left\|\max_{f\in T_{l}}\max_{k\in\{1,\ldots,2^{l}\}}\left|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)\right|\right\|_{\psi_{1}}\leq\\ K\|\kappa\|_{\infty}(n^{-1/2}\log(n)^{2}\log(1+2^{l}\omega(\delta_{l})^{2})+\sqrt{v_{l}^{2}+n^{-1}}\sqrt{\log(1+2^{l}\omega(\delta_{l})^{2})})\,.\end{split} \tag{7}\] Next, we deal with the second term in (6). A simple argument reveals that: \[\max_{k\in\{1,\ldots,2^{l}\}}\sup_{f\in\mathcal{F}:\|f-f^{\prime}\|_{q,\tilde{\mathbb{P}}_{n}}\vee\|f-f^{\prime}\|_{q,\hat{\mathbb{P}}_{n}}\leq\delta_{l}}\left|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)-(Z_{n}((k-1)/2^{l},f^{\prime})-Z_{n}(k/2^{l},f^{\prime}))\right|\leq\] \[2\|\kappa\|_{\infty}\delta_{l}n^{(2-q)/2q}\max_{k\in\{1,\ldots,2^{l}\}}\left|\sum_{i=1}^{n}\mathbf{1}\{(k-1)/2^{l}\leq U_{i}\leq k/2^{l}\}\right|^{\frac{q-1}{q}}+2\|\kappa\|_{\infty}\delta_{l}\frac{\sqrt{n}}{2^{\left(\frac{q-1}{q}\right)l}}\,,\] from which it follows that: \[\begin{split}\left\|\max_{k\in\{1,\ldots,2^{l}\}}\sup_{f\in\mathcal{F}:\|f-f^{\prime}\|_{q,\tilde{\mathbb{P}}_{n}}\lor\|f-f^{\prime}\|_{q,\hat{\mathbb{P}}_{n}}\leq\delta_{l}}|Z_{n}((k-1)/2^{l},f)-Z_{n}(k/2^{l},f)-(Z_{n}((k-1)/2^{l},f^{\prime})-Z_{n}(k/2^{l},f^{\prime}))|\right\|_{\psi_{1}}\leq\\ 2S\|\kappa\|_{\infty}\delta_{l}n^{1/2q}\left\|\max_{k\in\{1,\ldots,2^{l}\}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathbf{1}\{(k-1)/2^{l}\leq U_{i}\leq k/2^{l}\}-1/2^{l})\right|\right\|_{\psi_{1}}^{\frac{q-1}{q}}+4S\|\kappa\|_{\infty}\delta_{l}\frac{\sqrt{n}}{2^{\frac{(q-1)}{q}l}}\leq\\ 2S\delta_{l}\|\kappa\|_{\infty}\sqrt{n}\left(K(n^{-1}\log(n)^{2}\log(1+2^{l})+\sqrt{n^{-1}(v_{l}^{2}+n^{-1})}\sqrt{\log(1+2^{l})})^{\frac{q-1}{q}}+\frac{2}{2^{\frac{(q-1)}{q}l}}\right)\,,\end{split} \tag{8}\] where \(S>0\) is an absolute constant, and the last inequality follows from replicating the preceding argument replacing \(T_{l}\) with a constant function equal to one. Using that \(\log(1+2^{l}\omega(\delta_{l})^{2})\leq\log(2^{l+1}\omega(\delta_{l})^{2})=(l+1)\log(2)+2\log(\omega(\delta_{l}))\) and \(v_{l}^{2}\leq M\frac{l}{2^{l}}\) for a constant \(M\) depending only on \(c\), we can simplify both (7) and (8). Then, plugging these back into (6), optimising, and then plugging back into (5), we obtain the desired result. We are now ready to prove the main result. We begin by proving the result for a nonnegative class of functions with bounded envelope. Start by noticing that: \[\left|S_{n}(x,f;h)-\mathbb{E}[S_{n}(x,f;h)]\right|\leq\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}(\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})])\right|+ \tag{9}\] \[\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}(\mathbf{1}\{x-h\leq X_{i}\leq x\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{x-h\leq X_{i}\leq x\}f(Z_{i})])\right|+\] \[\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}\mathbf{1}\{X_{i}=x\}\right|\,.\] We deal with each term separately. Consider the first term.
Observe that, since each \(X_{i}\) is continuously distributed, we have that: \[\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}(\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})])\right|= \tag{10}\] \[\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}\mathbf{1}\{U_{i}=G(x)\}f(Z_{i})\right|\,,\] where each \(U_{i}\coloneqq G(X_{i})\) is uniformly distributed. The last term on the right hand side of (10) is, with probability one, bounded above by \(\frac{\|g\|_{\infty}}{\sqrt{a_{n}n}}\) uniformly over \(h\), \(f\) and \(x\). Next, note that, for any \(x\in\mathbb{R}\) and \(h>0\), trivially: \[G(x+h)-G(x)\leq\|g\|_{\infty}h\,.\] Combining this fact with continuity of \(G\) reveals that, for each \(x\in\mathbb{R}\), the set: \[\Xi_{x}=\left\{G(x+h)-G(x):0\leq h<b_{n}\right\},\] is an interval contained in \[\left[0,\|g\|b_{n}\right).\] Therefore, setting \(K=\lfloor-\log_{2}(\|g\|_{\infty}b_{n})\rfloor\) yields that: \[\sup_{x\in\mathbb{R}}\sup_{a_{n}\leq h<b_{n}}\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{nh}}\sum_{i=1}^{n}(\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{x\leq X_{i}\leq x+h\}f(Z_{i})])\right|\leq\] \[\frac{1}{\sqrt{a_{n}}}\sup_{u\in[0,1]}\sup_{s\in[0,2^{-K})}\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathbf{1}\{u<U_{i}\leq u+s\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{u<U_{i}\leq u+s\}f(Z_{i})])\right|+\frac{\|g\|_{\infty}}{\sqrt{a_{n}n}}\,.\] But we then note that: \[\sup_{u\in[0,1]}\sup_{s\in[0,2^{-K})}\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(\mathbf{1}\{u<U_{i}\leq u+s\}f(Z_{i})-\mathbb{E}[\mathbf{1}\{u<U_{i}\leq u+s\}f(Z_{i})])\right|\leq\] \[\sup_{k\in\{1,2,\ldots,2^{K}\}}\sup_{s\in[0,2^{-K})}\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\mathbf{1}\left\{\frac{(k-1)}{2^{K}}<U_{i}\leq\frac{(k-1)}{2^{K}}+s\right\}f(Z_{i})-\mathbb{E}\left[\mathbf{1}\left\{\frac{(k-1)}{2^{K}}<U_{i}\leq\frac{(k-1)}{2^{K}}+s\right\}f(Z_{i})\right]\right)\right|+\] \[\sup_{k\in\{1,2,\ldots,2^{K}\}}\sup_{s\in[0,2^{-K})}\sup_{f\in\mathcal{F}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\mathbf{1}\left\{\frac{(k-1)}{2^{K}}-s<U_{i}\leq\frac{(k-1)}{2^{K}}\right\}f(Z_{i})-\mathbb{E}\left[\mathbf{1}\left\{\frac{(k-1)}{2^{K}}-s<U_{i}\leq\frac{(k-1)}{2^{K}}\right\}f(Z_{i})\right]\right)\right|\,. \tag{11}\] and, since both \(U_{i}\) and \(1-U_{i}\) are uniformly distributed, Lemma 1 provides control over the Orlicz norm of both terms on the right-hand side of (11). An analogous argument enables control of the uniform error of the second term in (9). Finally, as for the third term in (9), we note that its uniform error is bounded above by \(\frac{\|\kappa\|_{\infty}}{\sqrt{a_{n}}}\). Applying Lemma 1 and combining the resulting terms proves Theorem 1 for nonnegative classes. To extend the result to general bounded classes, consider the constant function \(\underline{f}(z)=-\|\kappa\|_{\infty}\) for all \(z\in\mathcal{Z}\). Next, decompose the function class as \(\mathcal{F}=(\mathcal{F}-\underline{f})+\{\underline{f}\}\). The class \((\mathcal{F}-\underline{f})\) is nonnegative, admits bounded envelope \(2\|\kappa\|_{\infty}\) and the same covering numbers as \(\mathcal{F}\). Therefore, the previous result applies to it. The second class of functions is a singleton and \(\underline{f}\) has constant sign; therefore the previous result also applies to it, with \(\omega(\delta)=1\) and hence \(\delta=0\). Combining with the bound on \((\mathcal{F}-\underline{f})\) yields the desired result.
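As a quick illustrative check of the rate comparison in Section 3 (an editorial example using the paper's own expressions, not part of the original argument): for the polynomial bandwidths \(a_{n}=Cn^{-\alpha}\), \(\alpha>0\), that are common in practice, \[-\log(a_{n})=\alpha\log(n)-\log(C)\asymp\log(n)\,,\] so the uniform-in-bandwidth bound becomes \[\sqrt{-\log(a_{n})}\sqrt{\log(n)\vee-\log(a_{n})}\asymp\sqrt{\log(n)}\cdot\sqrt{\log(n)}=\log(n)\,,\] while the iid benchmark of Einmahl and Mason (2005) gives \[\sqrt{\log\log(n)\vee-\log(a_{n})}\asymp\sqrt{\log(n)}\,,\] which is exactly the extra \(\sqrt{\log(n)}\) factor noted at the end of Section 3.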
2301.03869
Effective-Lagrangian study of $ψ'(J/ψ) \to VP$ and the insights into $ρπ$ puzzle
Within the effective Lagrangian approach, we carry out a unified study of the $J/\psi (\psi')\to VP$, $J/\psi \to P\gamma$ and relevant radiative decays of light-flavor hadrons. A large amount of experimental data, including the various decay widths and electromagnetic form factors, is fitted to constrain the numerous hadron couplings. Relative strengths between the strong and electromagnetic interactions are revealed in the $J/\psi \to VP$ and $\psi' \to VP$ processes. The effect from the strong interaction is found to dominate in the $J/\psi \to \rho\pi$ decay, while the electromagnetic interaction turns out to be the dominant effect in the $\psi' \to \rho\pi$ decay, which provides an explanation of the $\rho\pi$ puzzle. For the $J/\psi\to K^{*}\bar{K}+\bar{K}^{*}K$ and $\psi' \to K^{*}\bar{K}+\bar{K}^{*}K$ decays, the former process is dominated by the strong interactions, and the effects from the electromagnetic parts are found to be comparable with those of the strong interactions in the latter process. Different $SU(3)$ breaking effects from the electromagnetic parts appear in the charged and neutral channels of the $\psi' \to K^{*}\bar{K}+\bar{K}^{*}K$ processes, which explains the rather different ratios between $B(\psi'\to K^{*+}K^{-}+K^{*-}K^{+})/B(J/\psi\to K^{*+}K^{-}+K^{*-}K^{+})$ and $B(\psi'\to K^{*0}\bar{K}^{0}+\bar{K}^{*0}K^{0})/B(J/\psi\to K^{*0}\bar{K}^{0}+\bar{K}^{*0}K^{0})$.
Lin-Wan Yan, Yun-Hua Chen, Chun-Gui Duan, Zhi-Hui Guo
2023-01-10T09:41:41Z
http://arxiv.org/abs/2301.03869v1
# Effective-Lagrangian study of \(\psi^{\prime}(J/\psi)\to VP\) and the insights into \(\rho\pi\) puzzle ###### Abstract Within the effective Lagrangian approach, we carry out a unified study of the \(J/\psi(\psi^{\prime})\to VP\), \(J/\psi\to P\gamma\) and relevant radiative decays of light-flavor hadrons. A large amount of experimental data, including the various decay widths and electromagnetic form factors, is fitted to constrain the numerous hadron couplings. Relative strengths between the strong and electromagnetic interactions are revealed in the \(J/\psi\to VP\) and \(\psi^{\prime}\to VP\) processes. The effect from the strong interaction is found to dominate in the \(J/\psi\to\rho\pi\) decay, while the electromagnetic interaction turns out to be the dominant effect in the \(\psi^{\prime}\to\rho\pi\) decay, which provides an explanation of the \(\rho\pi\) puzzle. For the \(J/\psi\to K^{*}\bar{K}+\bar{K}^{*}K\) and \(\psi^{\prime}\to K^{*}\bar{K}+\bar{K}^{*}K\) decays, the former process is dominated by the strong interactions, and the effects from the electromagnetic parts are found to be comparable with those of the strong interactions in the latter process. Different \(SU(3)\) breaking effects from the electromagnetic parts appear in the charged and neutral channels of the \(\psi^{\prime}\to K^{*}\bar{K}+\bar{K}^{*}K\) processes, which explains the rather different ratios between \(B(\psi^{\prime}\to K^{*+}K^{-}+K^{*-}K^{+})/B(J/\psi\to K^{*+}K^{-}+K^{*-}K^{+})\) and \(B(\psi^{\prime}\to K^{*0}\bar{K}^{0}+\bar{K}^{*0}K^{0})/B(J/\psi\to K^{*0}\bar{K}^{0}+\bar{K}^{*0}K^{0})\). ## 1 Introduction The strong suppression of the branching ratios of the \(\psi^{\prime}\to\rho\pi\) and \(\psi^{\prime}\to K^{*}\bar{K}+c.c.\) processes, relative to those of the corresponding decay channels of the \(J/\psi\), has been a long-standing puzzle in charmonium physics. The annihilation of the \(c\bar{c}\) into three gluons is usually assumed to be the dominant mechanism that rules the decays of the \(J/\psi\) and \(\psi^{\prime}\) to the light-flavor hadrons [1, 2, 3]. The annihilation amplitudes of the latter processes and also the decays to the lepton pairs are proportional to the wave functions of the \(S\)-wave charmonium states \(J/\psi\) and \(\psi^{\prime}\) at the origin. As a result, the branching ratios of the light-flavor-hadron (\(h\)) decays of the \(\psi^{\prime}\) and \(J/\psi\) can be predicted by their leptonic decay widths [4], \(i.e.\), \[Q_{h}\equiv\frac{B(\psi^{\prime}\to h)}{B(J/\psi\to h)}=\frac{B(\psi^{\prime}\to e^{+}e^{-})}{B(J/\psi\to e^{+}e^{-})}=(13.3\pm 0.4)\%\,. \tag{1}\] However, according to the recent PDG averages [4], the ratio of \(Q_{\rho\pi}\), \[\frac{B(\psi^{\prime}\to\rho\pi)}{B(J/\psi\to\rho\pi)}=(0.2\pm 0.1)\%\,, \tag{2}\] and the various ratios of \(Q_{K^{*}\bar{K}}\), \[\frac{B(\psi^{\prime}\to K^{*+}K^{-}+c.c.)}{B(J/\psi\to K^{*+}K^{-}+c.c.)} = (0.5^{+0.2}_{-0.1})\%\,,\] \[\frac{B(\psi^{\prime}\to K^{*0}\bar{K}^{0}+c.c.)}{B(J/\psi\to K^{*0}\bar{K}^{0}+c.c.)} = (2.6^{+0.8}_{-0.7})\%\,,\] \[\frac{B(\psi^{\prime}\to K^{*}\bar{K}+c.c.)}{B(J/\psi\to K^{*}\bar{K}+c.c.)} = (1.4^{+0.5}_{-0.4})\%\,, \tag{3}\] are drastically different from the prediction in Eq. (1). These contradictions are generally referred to as the \(\rho\pi\) puzzle, which was first established in Ref. [5] four decades ago.
Tremendous efforts have been made to address this problem, including the proposal of a vector glueball near the \(J/\psi\) mass [6], higher Fock components in the charmonium states [7], the intrinsic charm portions in the light-flavor vector \(\rho\) [8], the nodes in the wave functions [9], the meson mixing mechanisms [10], the final-state interactions [11, 12, 13, 14]. Another important issue in those decays is the sizable \(SU(3)\) breaking effects in the charged and neutral \(K^{*}\bar{K}\) decays of the \(J/\psi\) and \(\psi^{\prime}\). I.e., the strict \(SU(3)\) flavor symmetry would give the prediction \(Q_{K^{*+}K^{-}+c.c.}=Q_{K^{*0}\bar{K}^{0}+c.c.}\), which is however severely violated according to the experimental measurements in Eq. (3). In this work, we aim at a unified description of the processes \(J/\psi(\psi^{\prime})\to VP\) and \(\gamma^{(*)}P\), with \(V\) and \(P\) the light-flavor vector and pseudoscalar mesons in order. In such kinds of decay processes, one needs to simultaneously take into account the single-Okubo-Zweig-Iizuka (OZI) or even the doubly suppressed OZI strong interaction effects, the electromagnetic contributions and the \(SU(3)\) flavor symmetry breaking terms. The effective Lagrangian approach can provide an excellent framework to properly include all the aforementioned effects. Regarding the well-celebrated OZI rule, a quantitative way to understand such a suppression mechanism is the large \(N_{C}\) QCD [15]. In the effective field theory (EFT) approach, the \(N_{C}\) counting order can be directly related to the number of traces in the flavor space [16, 17]. Typically one additional flavor trace will introduce one more \(1/N_{C}\) suppression order to the EFT operator. The single OZI/double OZI effects can be systematically incorporated via the EFT operators with the proper numbers of flavor traces. Furthermore, the chiral EFT is constructed according to the spontaneous and explicit chiral symmetry breaking patterns of QCD. The \(SU(3)\) flavor symmetry breaking effects can be then introduced through the basic building tensors of the EFT involving the small but nonvanishing light-flavor quark masses. Although the chiral power counting scheme based on the momentum expansion is not valid in the massive \(J/\psi\) or \(\psi^{\prime}\) decays, the basic building blocks and methodology of the EFT Lagrangians are useful to conveniently take into account all the relevant ingredients describing the \(J/\psi(\psi^{\prime})\to VP\) and \(J/\psi\to\gamma^{(*)}P\) processes, including the OZI strong interaction parts, the electromagnetic contributions and the \(SU(3)\) flavor symmetry breaking effects. This formalism has been successfully applied to the light-flavor decay processes of \(V\to P\gamma^{(*)}\), \(e^{+}e^{-}\to K^{*}\bar{K}+c.c.\) and \(J/\psi\to VP,P\gamma^{(*)}\) in a series of works in Refs. [18, 19, 20]. In this work we push forward the study along the line of this research to address the mysterious \(\rho\pi\) puzzle by including similar decay processes of the \(\psi^{\prime}\). In addition, we also perform the global analyses of the large amount of updated branching ratios of various decay processes from the PDG [4] and the newly measured different decay widths from the BESIII collaboration [21]. This paper is organized as follows. In Sec. 2, we introduce the relevant effective Lagrangians and elaborate the calculations of the decay amplitudes.
The global fit to the various experimental data and the phenomenological consequences are analyzed in detail in Sec. 3. We give a short summary and conclusions in Sec. 4. ## 2 Effective Lagrangian and calculations of transition amplitudes The primary aim of this work is to study the various decay processes of the \(J/\psi\) and \(\psi^{\prime}\) into a light-flavor vector and a light pseudoscalar meson, and the light-flavor meson radiative decays and relevant form factors. Therefore we need to include not only the transition operators between the charmonia and the light-flavor mesons, but also the EFT operators describing the interactions among the light mesons themselves. To tightly constrain the free couplings, we simultaneously take into account the experimental data from both the decay processes with only light-flavor mesons and also the processes involving the \(J/\psi\) and \(\psi^{\prime}\). Resonance chiral theory (R\(\chi\)T) [22] provides a reliable framework to study the interactions of the light-flavor resonances and the light pseudoscalar mesons \((\pi,K,\eta)\), the latter of which are treated as the pseudo-Nambu-Goldstone bosons (pNGBs) resulting from the spontaneous symmetry breaking of QCD. As an extension of the chiral perturbation theory (\(\chi\)PT), R\(\chi\)T explicitly introduces the heavier degrees of freedom of QCD, i.e., the light-flavor resonances, such as the vectors \(\rho,K^{*},\omega,\phi\), the axial vectors, scalars, etc, into the chiral Lagrangians, together with the pNGBs and external source fields, like the photons. The R\(\chi\)T operators are constructed in a chiral covariant way, therefore the physical amplitudes calculated in the R\(\chi\)T automatically fulfill the requirements of chiral symmetry of QCD in the low energy region. On the other hand, the large \(N_{C}\) expansion of QCD [23] has been widely used as another useful guide to arrange the operators and amplitudes of the R\(\chi\)T [24]. In addition, from the large \(N_{C}\) point of view, the QCD \(U_{A}(1)\) anomaly effect, which is considered to be the factor most responsible for the large mass of the physical state \(\eta^{\prime}\), is however \(1/N_{C}\) suppressed. As a result, the \(\eta^{\prime}\) state would become the ninth pNGB in both the large \(N_{C}\) and chiral limits. Based on this argument, the nonet of the pNGBs \((\pi,K,\eta,\eta^{\prime})\) can be systematically included in the effective Lagrangian [25]. We closely follow this guideline to include the singlet \(\eta_{0}\) state in the R\(\chi\)T and adopt the general two-mixing-angle formalism to study the physical processes with the \(\eta\) and \(\eta^{\prime}\) mesons. Next we briefly introduce the relevant R\(\chi\)T Lagrangians.
In the present work, only the light-flavor vector resonances will be relevant to our study and the minimal interaction operators with the vectors in the even-intrinsic-parity sector of the R\(\chi\)T are given by [22] \[\mathscr{L}_{V}^{(2)} = \frac{F_{V}}{2\sqrt{2}}\langle V_{\mu\nu}f_{+}^{\mu\nu}\rangle+i\frac{G_{V}}{\sqrt{2}}\langle V_{\mu\nu}u^{\mu}u^{\nu}\rangle\,, \tag{4}\] where the nonet of the vector resonances is incorporated via the \(3\times 3\) matrix \[V_{\mu\nu}=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{6}}\omega_{8}+\frac{1}{\sqrt{3}}\omega_{0}&\rho^{+}&K^{*+}\\ \rho^{-}&-\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{6}}\omega_{8}+\frac{1}{\sqrt{3}}\omega_{0}&K^{*0}\\ K^{*-}&\overline{K}^{*0}&-\frac{2}{\sqrt{6}}\omega_{8}+\frac{1}{\sqrt{3}}\omega_{0}\end{array}\right)_{\mu\nu}\quad, \tag{5}\] the basic chiral tensors with the pNGBs and the external source fields are defined as \[U=u^{2}=e^{i\frac{\sqrt{2}\Phi}{F}}\,,\quad u_{\mu}=i\big{[}u^{\dagger}(\partial_{\mu}-ir_{\mu})u-u(\partial_{\mu}-i\ell_{\mu})u^{\dagger}\big{]}\,,\quad f_{\pm}^{\mu\nu}=uF_{L}^{\mu\nu}u^{\dagger}\pm u^{\dagger}F_{R}^{\mu\nu}u\,,\] \[F_{L(R)}^{\mu\nu}=\partial^{\mu}l(r)^{\nu}-\partial^{\nu}l(r)^{\mu}\,,\quad\chi_{\pm}=u^{\dagger}\chi u^{\dagger}\pm u\chi^{\dagger}u\,,\qquad\chi=2B_{0}(s+ip) \tag{6}\] and the flavor contents of the nonet pNGB matrix read \[\Phi=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\pi^{0}+\frac{1}{\sqrt{6}}\eta_{8}+\frac{1}{\sqrt{3}}\eta_{0}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{1}{\sqrt{2}}\pi^{0}+\frac{1}{\sqrt{6}}\eta_{8}+\frac{1}{\sqrt{3}}\eta_{0}&K^{0}\\ K^{-}&\overline{K}^{0}&-\frac{2}{\sqrt{6}}\eta_{8}+\frac{1}{\sqrt{3}}\eta_{0}\end{array}\right)\,. \tag{7}\] The quark-mass terms are introduced by taking the scalar external source field \(s\) in Eq. (6) as \(s={\rm diag}\{m_{u},m_{d},m_{s}\}\). In this work, we will take \(m_{u}=m_{d}=\hat{m}\) throughout, i.e. neglecting the isospin breaking effects from the strong interaction parts. The physical vectors \(\omega\) and \(\phi\) can be well described by assuming the ideal mixing of the octet \(\omega_{8}\) and the singlet \(\omega_{0}\) [4] \[\omega_{0} = \sqrt{\frac{2}{3}}\omega-\sqrt{\frac{1}{3}}\phi,\] \[\omega_{8} = \sqrt{\frac{2}{3}}\phi+\sqrt{\frac{1}{3}}\omega. \tag{8}\] In contrast, the mixing pattern of the \(\eta_{8}\) and \(\eta_{0}\) is more involved. The modern chiral prescription introduces the sophisticated two-mixing-angle scheme [26, 27] to address the \(\eta\)-\(\eta^{\prime}\) mixing system \[\left(\begin{array}{c}\eta\\ \eta^{\prime}\end{array}\right)=\frac{1}{F}\left(\begin{array}{cc}F_{8}\,\cos\theta_{8}&-F_{0}\,\sin\theta_{0}\\ F_{8}\,\sin\theta_{8}&F_{0}\,\cos\theta_{0}\end{array}\right)\left(\begin{array}{c}\eta_{8}\\ \eta_{0}\end{array}\right)\,, \tag{9}\] where \(F_{0}\) and \(F_{8}\) are the weak decay constants of the singlet and octet axial-vector currents, respectively. The conventional mixing formula with a single mixing angle can be naturally recovered by taking \(F_{8}=F_{0}=F\) and \(\theta_{0}=\theta_{8}\) in Eq. (9).
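As a quick consistency check of the limit just quoted (an editorial illustration, not part of the original text): setting \(F_{8}=F_{0}=F\) and \(\theta_{8}=\theta_{0}=\theta\) in Eq. (9), the overall factor \(1/F\) cancels the common decay constant and the mixing matrix reduces to the standard one-angle rotation, \[\left(\begin{array}{c}\eta\\ \eta^{\prime}\end{array}\right)=\left(\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}\eta_{8}\\ \eta_{0}\end{array}\right)\,,\] so that \(\eta=\cos\theta\,\eta_{8}-\sin\theta\,\eta_{0}\) and \(\eta^{\prime}=\sin\theta\,\eta_{8}+\cos\theta\,\eta_{0}\).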
Equivalently, one can also use the quark-flavor basis to describe the two-mixing-angle formalism \[\left(\begin{array}{c}\eta\\ \eta^{\prime}\end{array}\right)=\frac{1}{F}\left(\begin{array}{cc}F_{q}\,\cos\theta_{q}&-F_{s}\,\sin\theta_{s}\\ F_{q}\,\sin\theta_{q}&F_{s}\,\cos\theta_{s}\end{array}\right)\left(\begin{array}{c}\eta_{q}\\ \eta_{s}\end{array}\right)\,, \tag{10}\] where the quark-flavor contents of the states \(\eta_{q}=(\eta_{8}+\sqrt{2}\eta_{0})/\sqrt{3}\) and \(\eta_{s}=(-\sqrt{2}\eta_{8}+\eta_{0})/\sqrt{3}\) are \((\bar{u}u+\bar{d}d)/\sqrt{2}\) and \(\bar{s}s\), respectively. The vector resonances in Eq. (4) are expressed in terms of the anti-symmetric tensors, instead of the commonly used Proca fields. It is demonstrated in Refs. [22, 28] that it is convenient to use the anti-symmetric tensors to describe the vector resonances in R\(\chi\)T, since the high energy behaviors of the resulting amplitudes and form factors automatically match the QCD constraints without requiring the inclusion of extra local chiral counter terms in the anti-symmetric tensor formalism. The R\(\chi\)T Lagrangians in the odd-intrinsic-parity sector comprise two different classes, namely the \(VVP\) and \(VJP\) types, with \(J\) the external sources. Those R\(\chi\)T operators written in terms of the anti-symmetric tensor fields that are relevant to the \(O(p^{4})\) chiral low energy constants are worked out in Ref. [29], and the relevant R\(\chi\)T Lagrangians and discussions on the \(VVP\) Green functions by explicitly including the dynamical singlet \(\eta_{0}\) state are given in Ref. [18]. A more complete basis of the odd-intrinsic-parity R\(\chi\)T operators that contribute to the \(O(p^{6})\) chiral low energy constants is given in Ref. [30]. A proliferation of unknown resonance couplings arises in the more complete R\(\chi\)T Lagrangians, as expected. This can hinder one from drawing definite conclusions in the phenomenological discussions [31]. From the practical point of view, we will work with the R\(\chi\)T operator basis from Refs. [18, 29] and we believe that the higher order effects from the extra operators in Ref. [30] can be accounted for by the uncertainties of the resonance couplings in the former two references.
For the sake of completeness, we give the explicit expressions of the relevant R\(\chi\)T Lagrangians [18, 29] \[\mathscr{L}_{VVP}= d_{1}\varepsilon_{\mu\nu\rho\sigma}\langle\{V^{\mu\nu},V^{\rho \alpha}\}\nabla_{\alpha}u^{\sigma}\rangle+id_{2}\varepsilon_{\mu\nu\rho \sigma}\langle\{V^{\mu\nu},V^{\rho\sigma}\}\chi_{-}\rangle+d_{3}\varepsilon_{ \mu\nu\rho\sigma}\langle\{\nabla_{\alpha}V^{\mu\nu},V^{\rho\alpha}\}u^{\sigma}\rangle \tag{11}\] \[+d_{4}\varepsilon_{\mu\nu\rho\sigma}\langle\{\nabla^{\sigma}V^{ \mu\nu},V^{\rho\alpha}\}u_{\alpha}\rangle-id_{5}M_{V}^{2}\sqrt{\frac{2}{3}} \varepsilon_{\mu\nu\rho\sigma}\langle V^{\mu\nu}V^{\rho\sigma}\rangle\ln( \det u)\,,\] and \[\mathscr{L}_{VJP}= \frac{c_{1}}{M_{V}}\varepsilon_{\mu\nu\rho\sigma}\langle\{V^{\mu \nu},f_{+}^{\rho\alpha}\}\nabla_{\alpha}u^{\sigma}\rangle+\frac{c_{2}}{M_{V}} \varepsilon_{\mu\nu\rho\sigma}\langle\{V^{\mu\alpha},f_{+}^{\rho\sigma}\} \nabla_{\alpha}u^{\nu}\rangle+\frac{ic_{3}}{M_{V}}\varepsilon_{\mu\nu\rho \sigma}\langle\{V^{\mu\nu},f_{+}^{\rho\sigma}\}\chi_{-}\rangle \tag{12}\] \[+\frac{ic_{4}}{M_{V}}\varepsilon_{\mu\nu\rho\sigma}\langle V^{ \mu\nu}[f_{-}^{\rho\sigma},\chi_{+}]\rangle+\frac{c_{5}}{M_{V}}\varepsilon_{ \mu\nu\rho\sigma}\langle\{\nabla_{\alpha}V^{\mu\nu},f_{+}^{\rho\alpha}\}u^{ \sigma}\rangle+\frac{c_{6}}{M_{V}}\varepsilon_{\mu\nu\rho\sigma}\langle\{ \nabla_{\alpha}V^{\mu\alpha},f_{+}^{\rho\sigma}\}u^{\nu}\rangle\] \[+\frac{c_{7}}{M_{V}}\varepsilon_{\mu\nu\rho\sigma}\langle\{\nabla^ {\sigma}V^{\mu\nu},f_{+}^{\rho\alpha}\}u_{\alpha}\rangle-ic_{8}M_{V}\sqrt{ \frac{2}{3}}\varepsilon_{\mu\nu\rho\sigma}\langle V^{\mu\nu}\tilde{f}_{+}^{ \rho\sigma}\rangle\ln(\det u)\,,\] where the covariant derivative acting on the chiral field \(X\) is given by \[\nabla_{\mu}X=\partial_{\mu}X+[\Gamma_{\mu},X]\,,\qquad\Gamma_{\mu}=\frac{1}{ 2}\big{[}u^{+}(\partial_{\mu}-ir_{\mu})u+u(\partial_{\mu}-il_{\mu})u^{+}\big{]}\,. \tag{13}\] As previously mentioned in the Introduction, both the strong and electromagnetic interactions can be important in the \(J/\psi(\psi^{\prime})\to VP\) processes. The effects from the strong interactions are taken into account by the direct \(J/\psi(\psi^{\prime})VP\) transition operators [18] \[\mathscr{L}_{\psi(\psi^{\prime})VP}= M_{\psi(\psi^{\prime})}h_{1}^{(\prime)}\varepsilon_{\mu\nu\rho \sigma}\psi^{(\prime)\mu}\langle u^{\nu}V^{\rho\sigma}\rangle+\frac{1}{M_{ \psi(\psi^{\prime})}}h_{2}^{(\prime)}\varepsilon_{\mu\nu\rho\sigma}\psi^{( \prime)\mu}\langle\{u^{\nu},V^{\rho\sigma}\}\chi_{+}\rangle \tag{14}\] \[+M_{\psi(\psi^{\prime})}h_{3}^{(\prime)}\varepsilon_{\mu\nu\rho \sigma}\psi^{(\prime)\mu}\langle u^{\nu}\rangle\langle V^{\rho\sigma}\rangle\,,\] where the couplings \(h_{i=1,2,3}^{(\prime)}\) corresponding to the \(J/\psi\) and \(\psi^{\prime}\) will be separately fitted to the experimental data of the two charmonium states. Two types of EFT operators are introduced to account for the electromagnetic effects, which include the direct \(\psi P\gamma\) transition operators \[\mathscr{L}_{\psi P\gamma}=g_{1}\varepsilon_{\mu\nu\rho\sigma}\psi^{\mu} \langle u^{\nu}f_{+}^{\rho\sigma}\rangle+\frac{1}{M_{\psi}^{2}}g_{2} \varepsilon_{\mu\nu\rho\sigma}\psi^{\mu}\langle\{u^{\nu},f_{+}^{\rho\sigma}\} \chi_{+}\rangle\,, \tag{15}\] and the conversion vertex of the charmonium and the photon \[\mathscr{L}_{\psi\gamma}=\frac{-1}{2\sqrt{2}}\frac{f_{\psi}}{M_{\psi}}\langle \hat{\psi}_{\mu\nu}f_{+}^{\mu\nu}\rangle\,, \tag{16}\] being \(\hat{\psi}_{\mu\nu}=\partial_{\mu}\psi^{\nu}-\partial_{\nu}\psi^{\mu}\). 
The values of the couplings \(g_{1}\), \(g_{2}\) and \(f_{\psi}\) are different for the \(J/\psi\) and \(\psi^{\prime}\) and they will be determined by the relevant experimental data. Different powers of the \(M_{\psi(\psi^{\prime})}\) are introduced in Eqs. (14)-(16), so that the couplings appearing in those Lagrangians are dimensionless. It is found [20, 32] that the \(J/\psi\to\eta^{(^{\prime})}\gamma^{(*)}\) amplitudes are dominated by the \(\eta_{c}\) mediating diagrams, i.e., via the \(J/\psi\to\eta_{c}\gamma^{(*)}\to\eta^{(^{\prime})}\gamma^{(*)}\) intermediate processes. The decay amplitude of the \(\psi\to\eta^{(^{\prime})}\gamma^{(*)}\) can be written as \[{\cal M}^{mixing}_{\psi\to\eta^{(^{\prime})}\gamma^{*}}=e\,\varepsilon_{\mu\nu\rho\sigma}\epsilon^{\mu}_{\psi}\epsilon^{\nu}_{\gamma^{*}}q^{\rho}k^{\sigma}\,\lambda_{\eta_{c}\eta^{(^{\prime})}}\,g_{\psi\eta_{c}\gamma^{*}}(s)\,e^{i\delta_{P}}\,, \tag{17}\] being \(P=\eta,\eta^{\prime}\), where the electromagnetic transition form factor between the \(\psi\) and \(\eta_{c}\) takes the form [33, 34, 35] \[g_{\psi\eta_{c}\gamma^{*}}(s)=g_{\psi\eta_{c}\gamma^{*}}(0)e^{\frac{s}{16\beta^{2}}}\,. \tag{18}\] For the mixing parameters \(\lambda_{\eta_{c}\eta^{(^{\prime})}}\) between the \(\eta_{c}\) and \(\eta^{(^{\prime})}\) states, we take the determinations \(\lambda_{\eta_{c}\eta}=-4.6\times 10^{-3}\) and \(\lambda_{\eta_{c}\eta^{\prime}}=-1.2\times 10^{-2}\) from Ref. [32]. The phenomenological phase factors \(\delta_{\eta^{(^{\prime})}}\) in front of the \(\eta_{c}\) mediating diagrams need to be separately fitted to the data of the \(J/\psi\). The various Feynman diagrams relevant to our study are illustrated in Figs. 1, 2 and 3. To be more specific, the diagrams in Fig. 1 contribute to the light-flavor processes \(V\to P\gamma^{(*)}\) and \(P\to V\gamma^{(*)}\). The amplitudes of the \(J/\psi(\psi^{\prime})\to VP\) and \(J/\psi\to P\gamma^{(*)}\) receive contributions from the diagrams in Figs. 2 and 3, respectively. The formulas relevant to the \(V\to P\gamma^{(*)}\) and \(P\to V\gamma^{(*)}\) processes are worked out in Ref. [18], and the expressions of the \(J/\psi\to VP,P\gamma^{(*)}\) amplitudes are calculated in Ref. [20]. The corresponding decay amplitudes of the \(\psi^{\prime}\) state share similar expressions with those involving the \(J/\psi\), with obvious replacements of the resonance couplings. Figure 1: Diagrams relevant to the \(V\to P\gamma^{(*)}\) processes: (a) direct type and (b) indirect type. Figure 2: Feynman diagrams for the processes \(J/\psi\to VP\). The notations of the solid square in diagram (c) and the open circle in diagram (d) are explained in the text. Nevertheless, for the sake of completeness and to set up the notations, we further elaborate the amplitudes of the processes of \(\psi^{\prime}\to VP\). For the \(\psi^{\prime}\to VP\) decay, the first diagram (a) in Fig. 2 denotes the contributions from the strong interactions, i.e., from the Lagrangians in Eq. (14). Other diagrams in Fig. 2 correspond to the electromagnetic effects. The \(\psi^{\prime}\to VP\) amplitude can be written as \[{\cal M}_{\psi^{\prime}\to VP}=\varepsilon_{\mu\nu\rho\sigma}\epsilon^{\mu}_{\psi^{\prime}}\epsilon^{\nu}_{V}q^{\rho}k^{\sigma}G_{\psi^{\prime}\to VP}\,, \tag{19}\] where the polarization vectors of the \(\psi^{\prime}\) and \(V\) are given by \(\epsilon^{\mu}_{\psi^{\prime}}\) and \(\epsilon^{\nu}_{V}\), and \(q\) and \(k\) stand for the four-momentum of the \(\psi^{\prime}\) and \(V\), respectively.
The effective couplings \(G_{\psi^{\prime}\to VP}\) include various contributions from the individual diagrams of Fig. 2. The explicit expressions of \(G_{\psi^{\prime}\to VP}\) for the various processes are given in Appendix (A). The decay widths of \(\psi^{\prime}\to VP\) read \[\Gamma(\psi^{\prime}\to VP)=\frac{1}{96\pi M_{\psi^{\prime}}^{3}}\lambda(M_{\psi^{\prime}},M_{V},m_{P})^{\frac{3}{2}}\left|G_{\psi^{\prime}\to VP}\right|^{2}, \tag{20}\] with the Kallen function \(\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2xz-2yz\). Similarly, the corresponding amplitude of the radiative process \(J/\psi(q)\to\gamma^{*}(k)P(q-k)\) can be given in terms of one effective coupling as well \[{\cal M}_{\psi\to P\gamma^{*}}=e\,\varepsilon_{\mu\nu\rho\sigma}\epsilon^{\mu}_{\psi}\epsilon^{\nu}_{\gamma^{*}}q^{\rho}k^{\sigma}G_{\psi\to P\gamma^{*}}(s)\,, \tag{21}\] with \(s=k^{2}\). The effective coupling \(G_{\psi\to P\gamma^{*}}\) can receive contributions from all the diagrams in Fig. 3. The explicit expressions are given in Ref. [20]. The formula of the decay width of the \(J/\psi\to P\gamma\) process takes the form \[\Gamma_{\psi\to P\gamma}=\frac{1}{3}\alpha\left(\frac{M_{\psi}^{2}-M_{P}^{2}}{2M_{\psi}}\right)^{3}|G_{\psi\to P\gamma^{*}}(0)|^{2}\,. \tag{22}\] The expression of the width for the Dalitz decay process \(J/\psi\to P\gamma^{*}\to Pl^{+}l^{-}\) is given by \[\Gamma_{\psi\to Pl^{+}l^{-}}=\int_{4m_{l}^{2}}^{(M_{\psi}-m_{P})^{2}}\frac{\alpha^{2}(2m_{l}^{2}+s)}{72M_{\psi}^{3}\pi s^{3}}\sqrt{s(s-4m_{l}^{2})}\left[\lambda(s,M_{\psi},m_{P})\right]^{\frac{3}{2}}|G_{\psi\to P\gamma^{*}}(s)|^{2}ds\,. \tag{23}\] Figure 3: Feynman diagrams for the processes \(J/\psi\to P\gamma^{(*)}\). ## 3 Comprehensive fits and phenomenological discussions Compared to the previous studies in Refs. [18, 20], we incorporate in this work the data from the various \(\psi^{\prime}\to VP\) decays, apart from other types of data from the \(V\to P\gamma^{(*)}\), \(P\to V\gamma^{(*)}\) and \(J/\psi\to VP,P\gamma^{(*)}\) processes, to perform a comprehensive fit, so that the \(\rho\pi\) puzzle can be addressed. In addition, we update numerous types of data according to the most recent PDG averages [4], and timely revise the determinations of the resonance couplings. In total, we include 135 data points from several different types of processes in the comprehensive fit. To be more specific, the data from the pure light-flavor processes amount to 70, and they consist of both the decay widths, such as those of the \(\omega\to\eta\gamma\), \(\eta^{\prime}\to\omega\gamma\), \(\eta\to\gamma\gamma\), etc, and the form factors of the \(\phi\to\eta\gamma^{*}\) and \(\eta^{(^{\prime})}\to\gamma\gamma^{*}\). For the data related to the \(J/\psi\), they include all the available widths of the \(J/\psi\to VP\), \(P\gamma\) and \(Pe^{+}e^{-}\) from the PDG [4], and also the recent BESIII measurements of the invariant-mass distributions of the lepton pairs in the transition of \(J/\psi\to\eta\gamma^{*}\)[21]. Regarding the data of the \(\psi^{\prime}\), we will include in the joint fit all the available widths of the \(\psi^{\prime}\to VP\) processes from PDG [4]. An efficient way to reduce the number of unknown couplings in the R\(\chi\)T is to impose the high energy constraints dictated by QCD to the various form factors and Green functions calculated from the R\(\chi\)T Lagrangians in Eqs. (4), (11) and (12).
Furthermore, the high energy behaviors of the resulting amplitudes after imposing such constraints will mimic the properties as predicted by QCD. Following the previous discussions in Refs. [18, 19, 20, 36, 37, 38], we take the following high energy constraints on the various couplings \[4c_{3}+c_{1}=0\,,\qquad c_{1}-c_{2}+c_{5}=0\,,\qquad c_{5}-c_{6}=\frac{N_{C}}{64\pi^{2}}\frac{M_{V}}{\sqrt{2}F_{V}}\,,\] \[d_{1}+8d_{2}-d_{3}=\frac{F^{2}}{8F_{V}^{2}}\,,\qquad d_{3}=-\frac{N_{C}}{64\pi^{2}}\frac{M_{V}^{2}}{F_{V}^{2}}\,,\qquad c_{8}=-\frac{\sqrt{2}M_{0}^{2}}{\sqrt{3}M_{V}^{2}}c_{1}\,, \tag{24}\] where the pion weak decay constant takes the normalization \(F=92.4\) MeV throughout, the \(U_{A}(1)\) anomaly parameter is set to be \(M_{0}=900\) MeV [39, 40], the chiral-limit mass of the lowest vector resonance multiplet is fixed at \(M_{V}=M_{\rho}=775\) MeV and the vector-photon transition coupling \(F_{V}\) will be fitted. By taking into account the leptonic widths of the \(J/\psi\) and \(\psi^{\prime}\), we can determine the charmonium-photon transition coupling \(f_{J/\psi(\psi^{\prime})}\) in Eq. (16), whose explicit values are found to be \[f_{J/\psi}=293.8\pm 3.5\ {\rm MeV}\,,\qquad f_{\psi^{\prime}}=208.1\pm 5.1\ {\rm MeV}\,. \tag{25}\] We are then left with 23 undetermined parameters, including the four \(\eta\)-\(\eta^{\prime}\) mixing parameters introduced in Eq. (9), four couplings \(F_{V}\), \(c_{3},c_{4}\) and \(d_{2}\) that emerge from the light-flavor resonance interactions, nine parameters exclusively entering in the \(J/\psi\) decays and six parameters that are dedicated to the \(\psi^{\prime}\) processes. The couplings that describe the interactions of the light-flavor resonances will also enter in the charmonia decays. Therefore the joint fits, by simultaneously including the relevant data of the light-flavor mesons, the data from the \(J/\psi\to VP,P\gamma^{(*)}\) and the \(\psi^{\prime}\) ones, will obviously give more stringent constraints on the couplings than including just one of these data sets. Furthermore, such comprehensive studies in a unified framework are also expected to give a further insight into the \(\rho\pi\) puzzle elaborated in the Introduction. The resulting parameters from the joint fit are given in Table 1. The updated parameters related to the light-flavor resonances, the \(J/\psi\) decays and the \(\eta\)-\(\eta^{\prime}\) mixing are well consistent with the previous determinations [18, 19, 20], where the data of the \(\psi^{\prime}\) processes are not included. The \(\psi^{\prime}\to P\gamma^{(*)}\) processes, which could receive significant contributions from the \(\psi^{\prime}\to J/\psi P\) transition vertices, i.e. \(\psi^{\prime}\to J/\psi\eta\to\gamma\eta\) [41], are not considered in this work. Therefore we will not discuss such kinds of processes here. The relative phases \(\delta_{\eta^{(^{\prime})}}\) of Eq. (17) for the \(\eta_{c}\) mediating effects in the \(\psi^{\prime}\to V\eta^{(^{\prime})}\) decay processes are found to be insensitive in our present studies. As a result, the phases of \(\delta_{\eta^{(^{\prime})}}\) in the \(\psi^{\prime}\) decays will be fixed to zero throughout. The previous study in Ref. [18] pointed out a strong correlation between the \(d_{2}\) and \(d_{5}\) parameters, and we find that this correlation still holds in our joint fit. The resulting relation turns out to be \(d_{5}=3.57d_{2}+0.01\).
Regarding the four parameters \(F_{8},F_{0},\theta_{8}\) and \(\theta_{0}\) related to the \(\eta\)-\(\eta^{\prime}\) mixing, our current determinations of the central values and uncertainties more or less resemble those in Ref. [20]. In Ref. [18], only the data from the light-flavor sector were considered and the resulting \(\eta\)-\(\eta^{\prime}\) mixing parameters were found to bear large uncertainties. The simultaneous inclusion of the relevant data from the \(J/\psi\) and \(\psi^{\prime}\) processes, together with the light-flavor ones, can obviously pin down the uncertainties of the \(\eta\)-\(\eta^{\prime}\) mixing parameters [20, 42, 43, 44]. In Table 1, we also give the predictions for the mixing parameters in the quark-flavor basis. Generally speaking, the numerous types of data are well reproduced in our comprehensive fit. The comparisons of the various decay widths for the pure light-flavor processes from the revised fit and the updated PDG values are shown in Table 2. Similar comparisons for the partial decay widths of the \(J/\psi\) and \(\psi^{\prime}\) are given in Tables 3 and 4, respectively. The resulting curves of the form factors for the \(\eta\to\gamma\gamma^{*}\), \(\eta^{\prime}\to\gamma\gamma^{*}\), \(\phi\to\eta\gamma^{*}\) and \(J/\psi\to\eta^{\prime}\gamma^{*}\) are shown together with the experimental data in Fig. 4. The fitted results of the recent BESIII measurements on the \(e^{+}e^{-}\) spectra in the \(J/\psi\to\eta e^{+}e^{-}\) processes are illustrated in Fig. 5. We point out a subtlety about the effects of the light-flavor vector resonances in the \(e^{+}e^{-}\) spectra. In the \(J/\psi\to\eta^{\prime}e^{+}e^{-}\) decays, the light-flavor vectors are removed in the BESIII analysis [45], and as a result we have also subtracted the contributions from the intermediate light vector exchanges in accord with the experimental setups. This explains the smooth line shapes of the electromagnetic \(J/\psi\to\eta^{\prime}e^{+}e^{-}\) transition form factors shown in Fig. 4. Regarding the \(J/\psi\to\eta e^{+}e^{-}\) process, we keep the effects of the intermediate light-flavor vector resonances, in order to be consistent with the setups of the experimental analyses in Ref. [21]. It should be stressed that the prominent peaks of the narrow vectors \(\omega\) and \(\phi\) can be diluted due to the large bin widths of the experimental energy resolutions.
To clearly show the influence of the \begin{table} \begin{tabular}{c c c c} \hline \hline \(F_{8}\) & \((1.41\pm 0.02)F_{\pi}\) & \(F_{0}\) & \((1.36\pm 0.03)F_{\pi}\) \\ \(\theta_{8}\) & \((-24.3\pm 0.4)^{\circ}\) & \(\theta_{0}\) & \((-12.8\pm 0.5)^{\circ}\) \\ \(F_{V}\) & \(139.04\pm 1.72\) & \(c_{3}\) & \(0.0046\pm 0.0003\) \\ \(c_{4}\) & \(-0.0014\pm 0.0001\) & \(d_{2}\) & \(0.100\pm 0.008\) \\ \(h_{1}\) & \((-2.35\pm 0.06)\times 10^{-5}\) & \(h_{2}\) & (-3.08\(\pm\)0.60)\(\times 10^{-5}\) \\ \(h_{3}\) & (3.39\(\pm\)0.22)\(\times 10^{-6}\) & \(g_{1}\) & (-2.40\(\pm\)0.06)\(\times 10^{-5}\) \\ \(g_{2}\) & (-2.23\(\pm\)0.48)\(\times 10^{-4}\) & \(r_{1}\) & 0.40\(\pm\)0.04 \\ \(h^{\prime}_{1}\) & (0.33\(\pm\)0.23)\(\times 10^{-6}\) & \(h^{\prime}_{2}\) & (-4.01\(\pm\)0.32)\(\times 10^{-5}\) \\ \(h^{\prime}_{3}\) & (0.85\(\pm\)0.47)\(\times 10^{-6}\) & \(g^{\prime}_{1}\) & (-1.70\(\pm\)0.47)\(\times 10^{-4}\) \\ \(g^{\prime}_{2}\) & (0.18\(\pm\)0.95)\(\times 10^{-3}\) & \(\delta_{\eta}\) & \((117.12\pm 3.81)^{\circ}\) \\ \(\delta_{\eta^{\prime}}\) & \((50.03\pm 16.01)^{\circ}\) & \(\beta\) & \(512.86\pm 7.36\) MeV \\ \(\beta^{\prime}\) & \(112.97\pm 0.98\) MeV & \(F^{(*)}_{q}\) & (1.24\(\pm\)0.02)\(F_{\pi}\) \\ \(F^{(*)}_{3}\) & \((1.52\pm 0.02)F_{\pi}\) & \(\theta^{(*)}_{q}\) & \((37.3\pm 0.7)^{\circ}\) \\ \(\theta^{(*)}_{s}\) & \((35.1\pm 0.4)^{\circ}\) & \(\chi^{2}/d.o.f\) & 157.25/(135-23)=1.40 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters from the joint fit. The quantities marked with asterisk are predictions, instead of free parameters in the fit. \begin{table} \begin{tabular}{c c c} \hline \hline & Exp & Fit \\ \hline \(\Gamma_{\omega\rightarrow\pi\gamma}\) & \(724.78\pm 34.64\) & \(705.65\pm 17.40\) \\ \(\Gamma_{\rho\rightarrow\pi^{0}\gamma}\) & \(70.08\pm 12.37\) & \(73.23\pm 1.81\) \\ \(\Gamma_{K^{*0}\to K^{0}\gamma}\) & \(116.36\pm 11.27\) & \(108.95\pm 2.69\) \\ \(\Gamma_{\omega\rightarrow\pi e-e^{+}}\) & \(6.68\pm 0.63\) & \(6.40\pm 0.16\) \\ \(\Gamma_{\omega\rightarrow\pi\mu-\mu^{+}}\) & \(1.16\pm 0.18\) & \(0.63\pm 0.02\) \\ \(\Gamma_{\omega\rightarrow\eta\gamma}\) & \(3.91\pm 0.41\) & \(5.30\pm 0.11\) \\ \(\Gamma_{\rho^{0}\rightarrow\eta\gamma}\) & \(44.73\pm 3.39\) & \(43.93\pm 0.96\) \\ \(\Gamma_{\phi\rightarrow\eta\gamma}\) & \(55.28\pm 1.23\) & \(55.01\pm 1.00\) \\ \(\Gamma_{\phi\rightarrow\eta^{\prime}\gamma}\) & \(0.26\pm 0.01\) & \(0.26\pm 0.01\) \\ \(\Gamma_{\eta^{\prime}\rightarrow\omega\gamma}\) & \(4.74\pm 0.29\) & \(5.05\pm 0.18\) \\ \(\Gamma_{\eta\rightarrow\gamma\gamma}\) & \(0.52\pm 0.02\) & \(0.50\pm 0.01\) \\ \(\Gamma_{\eta^{\prime}\rightarrow\gamma\gamma}\) & \(4.34\pm 0.20\) & \(3.92\pm 0.11\) \\ \(\Gamma_{\eta\rightarrow\gamma e-e^{+}}\) & \((9.04\pm 0.89)\times 10^{-3}\) & \((8.32\pm 0.23)\times 10^{-3}\) \\ \(\Gamma_{\eta\rightarrow\gamma\mu^{-}\mu^{+}}\) & \((0.41\pm 0.07)\times 10^{-3}\) & \((0.39\pm 0.01)\times 10^{-3}\) \\ \(\Gamma_{\eta^{\prime}\rightarrow\gamma\mu^{-}\mu^{+}}\) & \((2.12\pm 0.61)\times 10^{-2}\) & \((1.47\pm 0.04)\times 10^{-2}\) \\ \(\Gamma_{\phi\rightarrow\eta e-e^{+}}\) & \(0.459\pm 0.018\) & \(0.460\pm 0.008\) \\ \hline \hline \end{tabular} \end{table} Table 2: The decay widths in units of KeV for the light-flavor hadrons. 
\begin{table} \begin{tabular}{c c c} \hline \hline & Exp & Fit \\ \hline \(J/\psi\rightarrow\rho^{0}\pi^{0}\) & \(5.6\pm 0.7\) & \(5.5\pm 0.3\) \\ \(J/\psi\rightarrow\rho\pi\) & \(16.9\pm 1.5\) & \(16.2\pm 1.0\) \\ \(J/\psi\rightarrow\rho^{0}\eta\) & \(0.193\pm 0.023\) & \(0.185\pm 0.021\) \\ \(J/\psi\rightarrow\rho^{0}\eta^{\prime}\) & \(0.081\pm 0.008\) & \(0.080\pm 0.007\) \\ \(J/\psi\rightarrow\omega\pi^{0}\) & \(0.45\pm 0.05\) & \(0.45\pm 0.04\) \\ \(J/\psi\rightarrow\omega\eta\) & \(1.74\pm 0.20\) & \(1.65\pm 0.09\) \\ \(J/\psi\rightarrow\omega\eta^{\prime}\) & \(0.189\pm 0.018\) & \(0.189\pm 0.018\) \\ \(J/\psi\rightarrow\phi\eta\) & \(0.74\pm 0.08\) & \(0.76\pm 0.06\) \\ \(J/\psi\rightarrow\phi\eta^{\prime}\) & \(0.46\pm 0.05\) & \(0.45\pm 0.05\) \\ \(J/\psi\to K^{*+}K^{-}+c.c.\) & \(6.0\pm 1.0\) & \(6.6\pm 0.3\) \\ \(J/\psi\rightarrow K^{*0}\bar{K}^{0}+c.c.\) & \(4.2\pm 0.4\) & \(3.8\pm 0.2\) \\ \(J/\psi\rightarrow\pi^{0}\gamma\) & \(0.0356\pm 0.0017\) & \(0.0341\pm 0.0016\) \\ \(J/\psi\rightarrow\eta\gamma\) & \(1.085\pm 0.018\) & \(1.085\pm 0.013\) \\ \(J/\psi\rightarrow\eta^{\prime}\gamma\) & \(5.25\pm 0.07\) & \(5.35\pm 0.04\) \\ \(J/\psi\rightarrow\pi^{0}e^{+}e^{-}\) & \((0.076\pm 0.014)\times 10^{-2}\) & \((0.129\pm 0.004)\times 10^{-2}\) \\ \(J/\psi\rightarrow\eta e^{+}e^{-}\) & \((1.42\pm 0.08)\times 10^{-2}\) & \((1.35\pm 0.02)\times 10^{-2}\) \\ \(J/\psi\rightarrow\eta^{\prime}e^{+}e^{-}\) & \((6.59\pm 0.18)\times 10^{-2}\) & \((6.08\pm 0.05)\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Branching fractions(\(\times 10^{-3}\)) of the decay processes for \(J/\psi\). bin widths, we give the histograms by using the energy bin width at 50 MeV. It is evident that the signals of narrow light vector resonances can be obviously enhanced when the energy bin width is reduced. With the fitted parameters in Table 1, it is then interesting to decipher the roles of different mechanisms and resonances played in a given process. The \(J/\psi\to Pl^{+}l^{-}\) processes can provide an environment to study the intermediate hadron resonances [20, 53]. Recently, the \(J/\psi\to\eta\gamma^{*}(\to e^{+}e^{-})\) form factors are reported by the BESIII collaboration in Ref. [21], in which the experimental analysis includes only the \(\rho\) resonance in the \(e^{+}e^{-}\) spectra, apart from the QED contributions. However, it is pointed out that the \(\rho\) contribution should come from an isospin violated intermediate process \(J/\psi\to\eta\rho\to\eta e^{+}e^{-}\). In contrast, the contributions from the \(\omega\) and \(\phi\) are expected to be more important, since they enter via the isospin conserved intermediate processes \(J/\psi\to\eta\omega\) and \(J/\psi\to\eta\phi\), whose branching ratios are around eight and four times larger than that of the \(J/\psi\to\rho\eta\) in order. As a result, we expect that the effect of the \(\rho\) resonance is much suppressed, compared to the contributions from \(\omega\) and \(\phi\). Due to the narrow widths of the latter two resonances, they manifest themselves as prominent peaks in the \(e^{+}e^{-}\) spectra, as shown in Fig. 5. However, these narrow peaks can be easily washed out when the energy resolution is low. E.g., we also explicitly give the energy distributions of the \(e^{+}e^{-}\) in Fig. 5 when taking the energy bin width at 50 MeV and 100 MeV. In the latter case the signals of the narrow \(\omega\) and \(\phi\) become faintly visible. As pointed out in Refs. 
[20, 41, 54], we also confirm the importance of the \(\eta^{(^{\prime})}-\eta_{c}\) mixing mechanism in the \(J/\psi\to\eta^{(^{\prime})}\gamma^{(*)}\) decay processes. A future experimental measurement with higher energy resolution will be definitely helpful to discriminate the roles of different hadrons in the \(J/\psi\to\eta e^{+}e^{-}\) process. In Table 5, we give our predictions to the branching ratios of various \(J/\psi\to Pl^{+}l^{-}\) processes and also make comparisons with the results in Refs. [20, 55, 56]. Our study reveals an interesting feature that can shed light on the \(\rho\pi\) puzzle in the \(J/\psi(\psi^{\prime})\to VP\) decays. For this purpose, let's focus on the interplay between the electromagnetic and strong interactions in the \(J/\psi(\psi^{\prime})\to VP\) processes. We separately show the contributions from the strong and electromagnetic interactions to the isospin conserved and violated decays for \(J/\psi\) and \(\psi^{\prime}\) in Tables 6 and 7, respectively. The contributions from the strong interactions are given by the \(h_{i=1,2,3}\) terms in Eq. (14), while the electromagnetic contributions are obtained by taking \(h_{i=1,2,3}=0\). For the \(J/\psi\to VP\) decays, the contributions from strong interactions turn out to play major roles in most of the isospin conserved channels, with the exception of the \(J/\psi\to\phi\eta^{\prime}\) process, where the strengths of the two types of interactions are comparable. While the isospin violated channels can only receive contributions from the electromagnetic interactions, since the isospin breaking effects from the strong interaction parts are not included in this work. According to the results shown in Table 7, for the \(\psi^{\prime}\to VP\) processes, the strong interactions are found to play comparable roles in many of the isospin \begin{table} \begin{tabular}{c c c} \hline \hline & Exp & Fit \\ \hline \(\psi^{\prime}\to\rho\pi\) & \(0.032\pm 0.012\) & \(0.037\pm 0.010\) \\ \(\psi^{\prime}\to\rho^{0}\eta\) & \(0.022\pm 0.006\) & \(0.021\pm 0.005\) \\ \(\psi^{\prime}\to\rho^{0}\eta^{\prime}\) & \(0.019\pm 0.017\) & \(0.028\pm 0.008\) \\ \(\psi^{\prime}\to\omega\pi^{0}\) & \(0.021\pm 0.006\) & \(0.021\pm 0.004\) \\ \(\psi^{\prime}\to\omega\eta\) & — & \(0.005\pm 0.003\) \\ \(\psi^{\prime}\to\omega\eta^{\prime}\) & \(0.032\pm 0.025\) & \(0.033\pm 0.019\) \\ \(\psi^{\prime}\to\phi\eta\) & \(0.031\pm 0.0031\) & \(0.032\pm 0.003\) \\ \(\psi^{\prime}\to\phi\eta^{\prime}\) & \(0.0154\pm 0.0020\) & \(0.016\pm 0.0019\) \\ \(\psi^{\prime}\to K^{*+}K^{-}+c.c.\) & \(0.029\pm 0.004\) & \(0.029\pm 0.004\) \\ \(\psi^{\prime}\to K^{*0}\bar{K}^{0}+c.c.\) & \(0.109\pm 0.020\) & \(0.080\pm 0.011\) \\ \hline \hline \end{tabular} \end{table} Table 4: Branching fractions(\(\times 10^{-3}\)) of the decay processes for \(\psi^{\prime}\). The \(\psi^{\prime}\to\omega\eta\) channel is not included in the fit, instead the result corresponds to our prediction, which is around two times smaller than the upper limit \(1.1\times 10^{-5}\) reported in PDG [4]. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Isospin considered cases & Exp & Strong interaction & EM interaction \\ \hline \(|\,G_{J/\psi\to\rho^{0}\pi^{0}}\,|\) & \(2.537\pm 0.154\) & \(2.899\pm 0.075\) & \(0.385\pm 0.006\) \\ \(|\,G_{J/\psi\to\rho\pi}\,|\) & \(4.408\pm 0.191\) & \(5.022\pm 0.129\) & \(0.709\pm 0.009\) \\ \(|\,G_{J/\psi\to\omega\eta}\,|\) & \(1.497\pm 0.084\) & \(1.586\pm 0.037\) & \(0.132\pm 0.009\) \\ \(|\,G_{J/\psi\to\omega\eta^{\prime}}\,|\) & \(0.562\pm 0.026\) & \(0.647\pm 0.028\) & \(0.119\pm 0.008\) \\ \(|\,G_{J/\psi\to\phi\eta}\,|\) & \(1.060\pm 0.056\) & \(1.270\pm 0.045\) & \(0.198\pm 0.031\) \\ \(|\,G_{J/\psi\to\phi\eta^{\prime}}\,|\) & \(0.974\pm 0.052\) & \(1.074\pm 0.049\) & \(2.031\pm 0.044\) \\ \(|\,G_{J/\psi\to K^{*}+K^{-}}\,|\) & \(2.011\pm 0.161\) & \(2.313\pm 0.048\) & \(0.216\pm 0.036\) \\ \(|\,G_{J/\psi\to K^{*}0\bar{K}0}\,|\) & \(1.686\pm 0.078\) & \(2.308\pm 0.048\) & \(0.715\pm 0.009\) \\ \hline \hline Isospin violated cases & Exp & EM interaction & Strong interaction \\ \hline \(|\,G_{J/\psi\to\rho^{0}\eta}\,|\) & \(0.498\pm 0.029\) & \(0.487\pm 0.028\) & \(--\) \\ \(|\,G_{J/\psi\to\rho^{0}\eta^{\prime}}\,|\) & \(0.367\pm 0.018\) & \(0.365\pm 0.017\) & \(--\) \\ \(|\,G_{J/\psi\to\omega\pi\eta}\,|\) & \(0.721\pm 0.039\) & \(0.720\pm 0.035\) & \(--\) \\ \hline \hline \end{tabular} \end{table} Table 6: The effective couplings of \(G_{\psi\to VP}\) in units of \(10^{-6}\)MeV\({}^{-1}\). Figure 5: The form factors and differential branching fractions for the \(J/\psi\to\eta e^{+}e^{-}\). The experimental data are from the Ref. [21]. The red solid lines represent the curves with the central values of the parameters in Table.1, and the shaded areas stand for the error bands. The histograms are obtained by taking different energy bin width at 50 MeV. conserved channels as those from the electromagnetic parts. Especially, the electromagnetic interaction turns out to play the dominant role in the \(\psi^{\prime}\to\rho\pi\) process and the effects from the strong interactions are found to be very small. In contrast, the strong interactions dominate the decay of \(J/\psi\to\rho\pi\) process and the electromagnetic effects appear to be small. This provides a sensible explanation to the \(\rho\pi\) puzzle. For the charged \(K^{*+}K^{-}+c.c.\) and neutral \(K^{*0}\bar{K}^{0}+c.c.\) decay processes of \(J/\psi\) or \(\psi^{\prime}\), the \(SU(3)\) breaking effects can originate from the strong interactions via the \(h_{2}\) term in Eq. (14), which turns out to be the same for both charged and neutral processes, and the electromagnetic interactions via the \(c_{j}\) terms in Eq. (12), where the \(c_{4}\) operator is found to solely contribute to the charged process [19]. The contributions from the electromagnetic parts to the \(J/\psi\to K^{*}\bar{K}+c.c.\) processes are obviously smaller than those from the strong interactions, which explains the similar branching ratios between \(J/\psi\to K^{*+}K^{-}+c.c.\) and \(J/\psi\to K^{*0}\bar{K}^{0}+c.c.\). In contrast, our study reveals that the magnitudes of the strong interactions in the \(\psi^{\prime}\to K^{*}\bar{K}+c.c.\) can be comparable with those of the electromagnetic parts. While, the \(SU(3)\) breaking effects in the electromagnetic parts are quite different for the charged and neutral decay processes due to the \(c_{4}\) operator [19]. 
This gives a new insight and also a reasonable explanation of the very different branching ratios of the \(\psi^{\prime}\to K^{*+}K^{-}+c.c.\) and \(\psi^{\prime}\to K^{*0}\bar{K}^{0}+c.c.\). ## 4 Summary and conclusions We use the effective Lagrangian approach to simultaneously investigate the processes of \(J/\psi(\psi^{\prime})\to VP\), \(J/\psi\to P\gamma\), \(J/\psi\to Pl^{+}l^{-}\), the radiative decays of light-flavor hadrons and their relevant form factors. High energy constraints on the resonance couplings are used to reduce the number of free parameters. The remaining resonance couplings are then determined through the joint fit to a large amount of experimental data, including the updated PDG averages of the various partial decay widths and the most recent \(J/\psi\to\eta\gamma^{*}\) form factors from BESIII. Thanks to the use of the effective Lagrangian, the different types of contributions from the OZI allowed/suppressed strong interactions, \(SU(3)\) breaking terms and electromagnetic effects can be easily identified in our study. We pay special attention to the relative magnitudes of the strong and electromagnetic interactions in the \(J/\psi\to VP\) and \(\psi^{\prime}\to VP\) processes, so as to provide an insight into the \(\rho\pi\) puzzle. An anatomy of the \(J/\psi\to\rho\pi\) and \(\psi^{\prime}\to\rho\pi\) amplitudes reveals that the strong interaction dominates the former process and the electromagnetic interaction prevails in the latter one. For the obviously distinct ratios between the charged \(B(\psi^{\prime}\to K^{*+}K^{-}+c.c.)/B(J/\psi\to K^{*+}K^{-}+c.c.)\) and the neutral \(B(\psi^{\prime}\to K^{*0}\bar{K}^{0}+c.c.)/B(J/\psi\to K^{*0}\bar{K}^{0}+c.c.)\) processes, our study uncovers that the \(J/\psi\to K^{*}\bar{K}+c.c.\) processes are mainly ruled by the strong interactions, where the \(SU(3)\) breaking effects enter similarly in both the charged and neutral amplitudes, while the \(\psi^{\prime}\to K^{*}\bar{K}+c.c.\) decays are found to be significantly affected by the electromagnetic interactions, where the \(SU(3)\) symmetry breaking terms appear differently in the charged and neutral processes. ## Acknowledgements We thank Lu Niu for an early-stage contribution to this work. This work is partially funded by the Natural Science Foundation of China under Grant Nos. 11975090, 12150013, 11975028 and 11974043. ## Appendix A The expressions of the effective couplings for \(\psi^{\prime}\to VP\) The expressions of the effective couplings in the \(\psi^{\prime}\to VP\) processes defined in Eq.
(20) take the form: \[G_{\psi^{\prime}\to\rho^{0}\pi^{0}}= \frac{2\sqrt{2}}{F_{\pi}M_{\rho}}h^{\prime}_{1}M_{\psi^{\prime}}+ \frac{8\sqrt{2}}{F_{\pi}M_{\rho}}h^{\prime}_{2}m^{2}_{\pi}\frac{1}{M_{\psi^{ \prime}}}+\frac{32\pi\alpha}{F_{\pi}M_{\rho}}F_{V}g^{\prime}_{1}+\frac{128\pi \alpha}{F_{\pi}M_{\rho}}F_{V}g^{\prime}_{2}\frac{m^{2}_{\pi}}{M^{2}_{\psi^{ \prime}}}\] \[+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{ \prime}}}F_{\rho\pi\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.1) \[G_{\psi^{\prime}\to\rho^{+}\pi^{-}}= \frac{2\sqrt{2}}{F_{\pi}M_{\rho}}h^{\prime}_{1}M_{\psi^{\prime}} +\frac{8\sqrt{2}}{F_{\pi}M_{\rho}}h^{\prime}_{2}m^{2}_{\pi}\frac{1}{M_{\psi^{ \prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{ \prime}}}F_{\rho\pi\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.2) \[G_{\psi^{\prime}\to\rho^{0}\eta}= \frac{32\sqrt{2}\pi\alpha}{3FM_{\rho}}F_{V}g^{\prime}_{1}(a_{1}-a _{3})+\frac{128\sqrt{2}\pi\alpha}{3FM_{\rho}M^{2}_{\psi^{\prime}}}F_{V}g^{ \prime}_{2}[a_{1}m^{2}_{\pi}-a_{3}(2m^{2}_{K}-m^{2}_{\pi})]\] \[-8\pi\alpha\frac{F_{V}}{M_{\rho}}\lambda_{\eta\eta c}g_{\psi^{ \prime}\eta_{c}\gamma^{*}}(M^{2}_{\rho})e^{i\delta^{\prime}_{\eta}}+\frac{8 \sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{\prime}}}F_{\rho\eta \gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.3) \[G_{\psi^{\prime}\to\rho^{0}\eta^{\prime}}= \frac{32\sqrt{2}\pi\alpha}{3FM_{\rho}}F_{V}g^{\prime}_{1}(a_{2}-a _{4})+\frac{128\sqrt{2}\pi\alpha}{3FM_{\rho}M^{2}_{\psi^{\prime}}}F_{V}g^{ \prime}_{2}[a_{2}m^{2}_{\pi}-a_{4}(2m^{2}_{K}-m^{2}_{\pi})]\] \[-8\pi\alpha\frac{F_{V}}{M_{\rho}}\lambda_{\eta^{\prime}\eta_{c}}g _{\psi^{\prime}\eta_{c}\gamma^{*}}(M^{2}_{\rho})e^{i\delta^{\prime}_{\eta^{ \prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{\prime} }}F_{\rho\eta^{\prime}\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.4) \[G_{\psi^{\prime}\to\omega\pi^{0}}= \frac{32\pi\alpha}{3F_{\pi}M_{\omega}}F_{V}g^{\prime}_{1}+\frac{1 28\pi\alpha}{3F_{\pi}M_{\omega}}F_{V}g^{\prime}_{2}\frac{m^{2}_{\pi}}{M^{2}_{ \psi^{\prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{ \prime}}}F_{\omega\pi\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.5) \[G_{\psi^{\prime}\to\omega\eta}= \frac{4}{FM_{\omega}}a_{1}h^{\prime}_{1}M_{\psi^{\prime}}+\frac{16}{ FM_{\omega}}a_{1}h^{\prime}_{2}m^{2}_{\pi}\frac{1}{M_{\psi^{\prime}}}+\frac{4}{FM_{ \omega}}(2a_{1}+a_{3})h^{\prime}_{3}M_{\psi^{\prime}}+\frac{32\sqrt{2}\pi \alpha}{9FM_{\omega}}F_{V}g^{\prime}_{1}(a_{1}-a_{3})\] \[+\frac{128\sqrt{2}\pi\alpha}{9FM_{\omega}M^{2}_{\psi^{\prime}}}F_ {V}g^{\prime}_{2}[a_{1}m^{2}_{\pi}-a_{3}(2m^{2}_{K}-m^{2}_{\pi})]-\frac{8}{3} \pi\alpha\frac{F_{V}}{M_{\omega}}\lambda_{\eta\eta_{c}}g_{\psi^{\prime}\eta_{c }\gamma^{*}}(M^{2}_{\omega})e^{i\delta^{\prime}_{\eta}}\] \[+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{ \prime}}}F_{\omega\eta\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.6) \[G_{\psi^{\prime}\to\omega\eta^{\prime}}= \frac{4}{FM_{\omega}}a_{2}h^{\prime}_{1}M_{\psi^{\prime}}+\frac{16 }{FM_{\omega}}a_{2}h^{\prime}_{2}m^{2}_{\pi}\frac{1}{M_{\psi^{\prime}}}+\frac{ 4}{FM_{\omega}}(2a_{2}+a_{4})h^{\prime}_{3}M_{\psi^{\prime}}+\frac{32\sqrt{2} \pi\alpha}{9FM_{\omega}}F_{V}g^{\prime}_{1}(a_{2}-a_{4})\] \[+\frac{128\sqrt{2}\pi\alpha}{9FM_{\omega}M^{2}_{\psi^{\prime}}}F_ {V}g^{\prime}_{2}[a_{2}m^{2}_{\pi}-a_{4}(2m^{2}_{K}-m^{2}_{\pi})]-\frac{8}{3} \pi\alpha\frac{F_{V}}{M_{\omega}}\lambda_{\eta^{\prime}\eta_{c}}g_{\psi^{ \prime}\eta_{c}\gamma^{*}}(M^{2}_{\omega})e^{i\delta^{\prime}_{\eta^{\prime}}}\] 
\[+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{ \prime}}}F_{\omega\eta^{\prime}\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.7) \[G_{\psi^{\prime}\to\phi\eta}= -\frac{2\sqrt{2}}{FM_{\phi}}a_{3}h^{\prime}_{1}M_{\psi^{\prime}}- \frac{8\sqrt{2}}{FM_{\phi}}a_{3}h^{\prime}_{2}(2m^{2}_{K}-m^{2}_{\pi})\frac{1 }{M_{\psi^{\prime}}}-\frac{2\sqrt{2}}{FM_{\phi}}(2a_{1}+a_{3})h^{\prime}_{3}M _{\psi^{\prime}}\] \[+\frac{64\pi\alpha}{9FM_{\phi}}F_{V}g^{\prime}_{1}(a_{1}-a_{3})+ \frac{256\pi\alpha}{9FM_{\phi}M^{2}_{\psi^{\prime}}}F_{V}g^{\prime}_{2}[a_{1} m^{2}_{\pi}-a_{3}(2m^{2}_{K}-m^{2}_{\pi})]\] \[-\frac{8\sqrt{2}}{3M_{\phi}}\pi\alpha F_{V}\lambda_{\eta\eta_{c}} g_{\psi^{\prime}\eta_{c}\gamma^{*}}(M^{2}_{\phi})e^{i\delta^{\prime}_{\eta}}+ \frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi^{\prime}}}F_{ \phi\eta\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.8) \[G_{\psi^{\prime}\to\phi\eta^{\prime}}= -\frac{2\sqrt{2}}{FM_{\phi}}a_{4}h^{\prime}_{1}M_{\psi^{\prime}}- \frac{8\sqrt{2}}{FM_{\phi}}a_{4}h^{\prime}_{2}(2m^{2}_{K}-m^{2}_{\pi})\frac{1 }{M_{\psi^{\prime}}}-\frac{2\sqrt{2}}{FM_{\phi}}(2a_{2}+a_{4})h^{\prime}_{3}M _{\psi^{\prime}}\] \[+\frac{64\pi\alpha}{9FM_{\phi}}F_{V}g^{\prime}_{1}(a_{2}-a_{4})+ \frac{256\pi\alpha}{9FM_{\phi}M^{2}_{\psi^{\prime}}}F_{V}g^{\prime}_{2}[a_{2} m^{2}_{\pi}-a_{4}(2m^{2}_{K}-m^{2}_{\pi})]\] \[-\frac{8\sqrt{2}}{3M_{\phi}}\pi\alpha F_{V}\lambda_{\eta^{\prime} \eta_{c}}g_{\psi^{\prime}\eta_{c}\gamma^{*}}(M^{2}_{\phi})e^{i\delta^{\prime} _{\eta^{\prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{\psi ^{\prime}}}F_{\phi\eta^{\prime}\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.9) \[G_{\psi^{\prime}\to K^{*+}K^{-}}=\frac{2\sqrt{2}}{F_{K}M_{K^{*}}}h^{\prime}_{1}M _{\psi^{\prime}}+\frac{8\sqrt{2}}{F_{K}M_{K^{*}}}h^{\prime}_{2}m^{2}_{K}\frac{1 }{M_{\psi^{\prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}}{M_{ \psi^{\prime}}}F_{K^{*+}K^{-}\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.10) \[G_{\psi^{\prime}\to K^{*0}\bar{K}^{0}}=\frac{2\sqrt{2}}{F_{K}M_{K^{*}}}h^{ \prime}_{1}M_{\psi^{\prime}}+\frac{8\sqrt{2}}{F_{K}M_{K^{*}}}h^{\prime}_{2}m^{2}_ {K}\frac{1}{M_{\psi^{\prime}}}+\frac{8\sqrt{2}\pi\alpha}{3}\frac{f_{\psi^{\prime}}} {M_{\psi^{\prime}}}F_{K^{*0}\bar{K}^{0}\gamma^{*}}(M^{2}_{\psi^{\prime}})\,,\] (A.11) with \[a_{1}= \frac{F}{\cos(\theta_{0}-\theta_{8})}(\frac{\cos\theta_{0}}{\sqrt{6} F_{8}}-\frac{\sin\theta_{8}}{\sqrt{3}F_{0}})\,,\] \[a_{2}= \frac{F}{\cos(\theta_{0}-\theta_{8})}(\frac{\sin\theta_{0}}{\sqrt{6 }F_{8}}+\frac{\cos\theta_{8}}{\sqrt{3}F_{0}})\,,\] \[a_{3}= \frac{F}{\cos(\theta_{0}-\theta_{8})}(\frac{-2\cos\theta_{0}}{ \sqrt{6}F_{8}}-\frac{\sin\theta_{8}}{\sqrt{3}F_{0}})\,,\] \[a_{4}= \frac{F}{\cos(\theta_{0}-\theta_{8})}(\frac{-2\sin\theta_{0}}{ \sqrt{6}F_{8}}+\frac{\cos\theta_{8}}{\sqrt{3}F_{0}})\,.\] (A.12) The expressions \(F_{K^{*+}K^{-}\gamma^{*}}\) and \(F_{K^{*0}\bar{K}^{0}\gamma^{*}}\) that correspond to the electromagnetic contributions to the effective couplings of \(G_{\psi^{\prime}\to K^{*+}K^{-}}\) and \(G_{\psi^{\prime}\to K^{*0}\bar{K}^{0}}\) are given by \[F_{K^{*+}K^{-}\gamma^{*}}(s)= \frac{-2\sqrt{2}}{3F_{K}M_{V}M_{K^{*}}}[(c_{1}+c_{2}+8c_{3}-c_{5} )m_{K}^{2}+(c_{2}+c_{5}-c_{1}-2c_{6})M_{K^{*}}^{2}+(c_{1}-c_{2}+c_{5})s\] \[+24c_{4}(m_{K}^{2}-m_{\pi}^{2})]+\frac{2F_{V}}{3F_{K}M_{K^{*}}}[ (d_{1}+8d_{2}-d_{3})m_{K}^{2}+d_{3}(M_{K^{*}}^{2}+s)]\] \[\times[D_{\omega}(s)+3D_{\rho}(s)-2D_{\phi}(s)],\] (A.13) and \[F_{K^{*0}\bar{K}^{0}\gamma^{*}}(s)= 
\frac{4\sqrt{2}}{3F_{K}M_{V}M_{K^{*}}}[(c_{1}+c_{2}+8c_{3}-c_{5} )m_{K}^{2}+(c_{2}+c_{5}-c_{1}-2c_{6})M_{K^{*}}^{2}+(c_{1}-c_{2}+c_{5})s]\] \[+\frac{2F_{V}}{3F_{K}M_{K^{*}}}[(d_{1}+8d_{2}-d_{3})m_{K}^{2}+d_ {3}(M_{K^{*}}^{2}+s)][D_{\omega}(s)-3D_{\rho}(s)-2D_{\phi}(s)]\,.\] (A.14) It is pointed out that \(SU(3)\) breaking effect caused by the \(c_{4}\) term only enters in the charged \(F_{K^{*+}K^{-}\gamma^{*}}\) amplitude and is absent in the neutral \(F_{K^{*0}\bar{K}^{0}\gamma^{*}}\) process. For the expressions of other amplitudes \(F_{VP\gamma^{(*)}}\), they are given in Appendix of Ref. [20] and we do not repeat them here.
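As a small cross-check of the \(\eta\)-\(\eta^{\prime}\) admixture entering these couplings, the coefficients \(a_{1}\)-\(a_{4}\) of Eq. (A.12) can be evaluated directly from the fitted mixing parameters in Table 1. The sketch below is our own illustration using the central values only.

```python
# Our own evaluation of the mixing coefficients a_1..a_4 of Eq. (A.12),
# using the central values of F_8, F_0, theta_8, theta_0 from Table 1.
import numpy as np

F = 92.4                                   # MeV (= F_pi)
F8, F0 = 1.41 * F, 1.36 * F                # MeV
th8, th0 = np.radians(-24.3), np.radians(-12.8)

pref = F / np.cos(th0 - th8)
a1 = pref * ( np.cos(th0) / (np.sqrt(6) * F8) - np.sin(th8) / (np.sqrt(3) * F0))
a2 = pref * ( np.sin(th0) / (np.sqrt(6) * F8) + np.cos(th8) / (np.sqrt(3) * F0))
a3 = pref * (-2 * np.cos(th0) / (np.sqrt(6) * F8) - np.sin(th8) / (np.sqrt(3) * F0))
a4 = pref * (-2 * np.sin(th0) / (np.sqrt(6) * F8) + np.cos(th8) / (np.sqrt(3) * F0))
print(a1, a2, a3, a4)                      # dimensionless coefficients
```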
2307.07981
The alignment of the C3 Accelerator Structures with the Rasnik alignment system
The Rasnik 3-point alignment system, now widely applied in particle physics experiments and in the instrumentation of gravitational wave experiments, can be used as an N-point alignment system by daisy-chaining N individual 3-point systems. The conceptual implementation of Rasnik chains in C3 is presented. The proper operation of a laser diode and a CMOS image sensor in liquid nitrogen has been verified. Next, plans for testing a small but complete system, immersed in liquid nitrogen, are presented.
Harry van der Graaf, Niels van Bakel, Bram Bouwens, Martin Breidenbach, Andrew Haase, Joris van Heijningen, Anoop Nagesh Koushik, Emilio Nanni, Tristan du Pree, Nick van Remortel, Caterina Vernieri
2023-07-16T08:46:53Z
http://arxiv.org/abs/2307.07981v2
# The alignment of the C\({}^{3}\) Accelerator Structures with the Rasnik alignment system ###### Abstract The Rasnik 3-point alignment system, widely applied in particle physics experiments and in the instrumentation of gravitational wave experiments, can be used as an N-point alignment system by 'leapfrogging' N individual 3-point systems. The conceptual implementation of Rasnik chains in C\({}^{3}\) is presented. Then, the proper operation of a laser diode and a CMOS image sensor in liquid nitrogen has been verified. Finally, plans for testing a small but complete system, immersed in liquid nitrogen, are presented. Accelerator Subsystems and Technologies, Beam-line instrumentation (beam position and profile monitors, beam-intensity, Detector alignment and calibration methods (lasers, sources, particle-beams) ## 1 The Rasnik 3-point alignment system The first **R**ed **A**lignment **S**ystem **Nik**hef (Rasnik) was developed in 1983 for the alignment of the muon chambers in the Muon Spectrometer of the L3 experiment at CERN [1]. As sensor, 4-quadrant photodiodes were used. In 1993, CMOS image sensors became available, and Rasnik evolved into a system in which a back-illuminated coded mask is projected, by means of a positive singlet lens, onto the image sensor. Since 2005, some 8000 of these Rasnik systems are flawlessly operational in the ATLAS experiment at CERN. With these Rasnik systems, composed of low-cost, commercially available components, the 2D alignment of three points can be measured with a spatial resolution of 1 nm. The best _resolution power_ of 7 pm/\(\sqrt{\mathrm{Hz}}\) was obtained with an image frame rate of 250 Hz [2]. For the alignment of active beam elements of future CLIC and ILC linear colliders, systems with a span between the image sensor and the mask of a Rasnik system larger than 20 m are required. The diameter and focal length of the required lens become impractically large, and its replacement by a zone plate was considered. In the so-called RasDif system, depicted in figure 1, the back-illuminated coded mask is replaced by a monochromatic point-like light source generating spherical waves onto the zone plate. This results in a typical diffraction pattern on the image sensor: the position of this pattern on the sensor is a measure for the 3-point alignment of light source, zone plate and image sensor, respectively. Figure 1: Principle of the RasDif system. The monochromatic waves, arriving at the zone plate, cause a diffraction pattern on the image sensor. ## 2 Rasnik in Liquid Nitrogen So far, Rasnik systems have been operated in ambient air and in vacuum. In C\({}^{3}\), Rasnik must operate in ambient air, in vacuum and in boiling liquid nitrogen (LN\({}_{2}\)), at 79 K [3; 4; 5]. Given the index of refraction of LN\({}_{2}\) of 1.20, lenses cannot be applied because a sharp image in LN\({}_{2}\) will be out of focus in vacuum. This is the main reason why the RasDif system is applied. Laser diodes were tested to operate in LN\({}_{2}\): they work, albeit their supply voltage of 2 V at 293 K usually needs to be raised to 6 V when immersed in LN\({}_{2}\). With a current of 10 mA, the heat dissipation causes local boiling and therefore the formation of bubbles. The glass cover seal provides local thermal shielding, so bubble formation in the critical light exit zone is locally reduced. The following three laser diodes were found to be LN\({}_{2}\)-proof: Laser Components ADL65074TR, Laser Components ADL65055TL, and ROHM RDL65MZT7.
The long term stability of the sealed package in vacuum, in ambient air and in LN\({}_{2}\) remains to be verified. Modern CMOS image sensors are known not to operate below 90 K. In 2014, two groups tested CMOS image sensors in popular low-cost webcams by simply immersing them in LN\({}_{2}\), while reading them out via their USB connection [6; 7]. Eventually, two webcams were found to operate unaffected in LN\({}_{2}\): the Floureon "car reversing camera", and the Microsoft HD-3000 Model 1456 (or earlier). The latter can still be acquired, and proper operation in LN\({}_{2}\) has been recently confirmed, including a cold start, avoiding the self-heating of the circuitry due to dissipation. For the coming R&D projects, some 8 of these image sensors will be applied in experimental set-ups. It is well known in CMOS design which circuitry can be applied for a given temperature constraint [8; 9; 10; 11]. There is commercial interest in the development of a state-of-the-art CMOS image sensor for monitoring the inside of LH\({}_{2}\) tanks. ## 3 Rasnik in C\({}^{3}\) In figure 2, the assembly of chain plates is shown: the position, in X and Y, of any plate with respect to any other plate is known. Figure 2: Principle of the leapfrog multipoint alignment system. Each chain plate is equipped with a laser diode, a zone plate and an image sensor. The alignment of Accelerator Structures and Quads of C\({}^{3}\) can be realised by linking them with chain plates. In C\({}^{3}\), the chain plate takes the form of a stick, shown in figure 3. In a chain, all Sticks are practically identical: individual differences are known after a calibration procedure. On a Stick a CMOS image sensor chip is fixed, a pattern forming a zone lens is milled out, and a laser diode is mounted. A Stick is mounted, using its mechanical interface, onto an Accelerator Structure or a Quad, the latter being an assembly of a static quadrupole magnet and a Beam Position Monitor. A first approach towards the application of Rasnik in the alignment of the 1 m long copper Accelerator Structures is shown in figure 4. Although there is some redundancy, the Rasnik data does not cover all 6 degrees of freedom of the Structures. Since two adjacent Structures are confined by means of a _raft_ structure, this may be acceptable. In figure 5 the number of Sticks is doubled, providing the redundant measurement of all 6 degrees of freedom of all Structures. Figure 3: The Stick. Each Stick includes the three basic Rasnik components. At one side, a Stick is fixed onto the object-to-align. It will be a challenge to reduce the associated 'contact error' to below 0.1 μm. Figure 4: Top view of the possible placement of Sticks onto the Accelerator Structures. Note that a rotation of a Structure around its Y axis (perpendicular to the plane of view) is not recorded by Rasnik. Figure 5: Top view of the possible placement of Sticks onto the Accelerator Structures, providing complete and redundant data. The alignment of the Quads will be integrated in this chain. ## 4 The effective precision of a chain of Rasnik systems The error of a 3-point Rasnik system is defined as the variation in the _sagitta_, defined as the distance, in X and Y, of the optical centre of the zone plate to the optical axis, being the line through the optical centres of the laser source and the image sensor. Assuming that the medium is homogeneous, light propagates in a straight line.
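The error propagation along such a daisy chain (studied quantitatively with Figure 6 below) can be explored with a short Monte Carlo. The sketch below is our own, deliberately simplified model, not the authors' simulation: a single transverse coordinate, each 3-point system measuring the sagitta of three consecutive plates with independent unit-RMS noise, plates 0, 1 and N-1 held fixed, and the remaining plate positions recovered by least squares.

```python
# Our own simplified Monte Carlo of error propagation in a daisy-chained line
# of Rasnik systems (1D transverse coordinate). System i measures the sagitta
# of plates (i-1, i, i+1) with unit-RMS noise; plates 0, 1 and N-1 are fixed.
import numpy as np

rng = np.random.default_rng(1)
n_plates, trials = 16, 20000
free = list(range(2, n_plates - 1))          # plates with unknown positions
n_meas = n_plates - 2                        # one sagitta per inner plate

# Design matrix of  s_i = x_i - (x_{i-1} + x_{i+1}) / 2  (true positions = 0)
M = np.zeros((n_meas, len(free)))
for r in range(n_meas):
    i = r + 1                                # system centred on plate i
    for c, j in enumerate(free):
        if j == i:
            M[r, c] = 1.0
        elif abs(j - i) == 1:
            M[r, c] = -0.5

sq_err = np.zeros(len(free))
for _ in range(trials):
    s = rng.normal(size=n_meas)              # noisy sagitta readings
    x_hat, *_ = np.linalg.lstsq(M, s, rcond=None)
    sq_err += x_hat**2
print(np.sqrt(sq_err / trials))              # RMS position error per free plate,
                                             # in units of the sagitta noise
```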
For simple systems using low-cost laser diodes and image sensors extracted from popular webcams, the (Gaussian) noise in the sagitta measurement due to quantum fluctuations in light falling onto pixels can be as good as 100 nm. With modern low-cost image sensors, the frame rate of 100 Hz results in a _resolution power_ of 10 nm\(/\sqrt{\mathrm{Hz}}\), which is the equivalent of 100 nm resolution of one measurement per second. With a custom-designed image sensor optimised for application in C\({}^{3}\), a resolution power of 5 nm\(/\sqrt{\mathrm{Hz}}\) should be within reach [1, 2, 12]. The 2D position of a Quad or Accelerator Structure is transferred to Rasnik by means of Sticks. The mechanical interface between the Stick and the object onto which it is mounted is associated with a _set up error_ of order 1 μm if state-of-the-art suspension mechanics are applied. In addition, the positions of the optical centres of laser source, zone plate and image sensor with respect to the mechanical interface, which are measured in a calibration procedure (see **Calibration of the Sticks**), are subject to a similar _offset error_ of 1 μm. Combining the two errors, each Rasnik data point from a particular system is subject to an offset error of order 1 μm, constant in time, and a random Gaussian error much smaller than 0.1 μm. Figure 6 shows the errors associated with a chain of N = 16 Rasnik systems. In this calculation the positions of chainplates 0, 1 and 15 are fixed, and the positions of the in-between chainplates are calculated from the Monte Carlo simulated Rasnik data, assuming a spatial resolution of 1.0 (RMS, unitless) for each system. The largest error occurs, as expected, at the central plate since it is far away from the fixed chainplates. The largest spatial error equals D = \(\sqrt{N}\) times the sagitta error of a single Rasnik system. Figure 6: The error propagation in a chain of N = 16 Rasnik systems. The maximum error reaches D times Rasnik's intrinsic error. The Quarter Cryogenic Module (QCM) will be equipped with 4-fold Rasnik chains placed at both sides. Given the offset error of an individual Rasnik system of 1 μm, and a degradation factor \(\sqrt{N}\) = 2, the relative positions of the two Accelerator Structures and the Quad will be known within 1.5 μm after the eight calibrated Sticks are placed. In C\({}^{3}\), a SuperSector includes 728 Accelerator Structures. If these are equipped with two continuous Rasnik chains, the largest uncertainty in the middle of each chain is \(\sqrt{728}\) / \(\sqrt{2}\) = 19 μm. By adding Rasnik systems on Sticks 1, 364 and 728, this largest error can be reduced by a factor \(\sqrt{2}\). As long as the light path area, now confined by the raft's main beams, is respected, there is room for improvement. ## 5 Conclusions and future plans One CMOS image sensor has been identified that operates flawlessly when immersed in LN\({}_{2}\) at 77 K. With some 10 of these sensors, demonstrations of complete Rasnik systems, in air and immersed in LN\({}_{2}\), could be performed within a year. Most likely, new CMOS image sensors will become available, capable of operating in a cryogenic environment, thanks to commercial interest. The proper operation of laser diodes in LN\({}_{2}\) has been certified.
By applying Rasnik instead of a system in which the local position of a stretched wire is determined by multiple Wire Position Sensors (WPS) [13; 14], severe problems are avoided: no risk of a hard-accessible broken wire, no error in the vertical coordinate due to uncertainties in wire sag, and no risk of variations in the wire position due to the flowing LN\({}_{2}\) medium. Other benefits of using Rasnik alignment systems are: * electronics: only off-the-shelf USB muxes, CPUs or GPUs are needed; * precision is limited by (mechanical) offset; intrinsic 2D spatial resolution of 10 nm; * mechanical noise, due to bubble formation and flow of LN\({}_{2}\), can be well studied given the excellent spatial resolution power; * no scale calibration required; * low cost: € 30 per system, excluding readout; * no drift: 1/f noise is not measurable; * radiation-proof components can be selected. Figure 7: The InPlaneDemo setup: it includes two mutually coupled parallel Rasnik systems, and two 'diagonal' systems. With this 2- to 4-fold redundant system, the quality of Rasnik systems can be verified in ambient air, vacuum, and fully immersed in LN\({}_{2}\) [1]. This setup fits in a foam cryostat with inner fiducial dimensions of 60 x 10 x 10 cm\({}^{3}\). Figure 7 shows the **InPlaneDemo** setup. It includes two parallel and two diagonal Rasnik systems. With this, performance-limiting effects of Rasnik in LN\({}_{2}\) can be studied in detail: * the formation of N\({}_{2}\) gas bubbles on the surface of the heat-dissipating laser diodes and CMOS image sensors; * local variations in the index of refraction of LN\({}_{2}\) due to the laminar or turbulent flow, and due to convection near the warmer laser diodes and CMOS image sensors; * mechanical noise due to bubble formation and bubble transport in LN\({}_{2}\). Here the 10 nm spatial resolution and a high frame rate of Rasnik will be useful; * the long-term quality degradation of laser diodes and CMOS image sensors. **Rasnik for the Quarter Cryogenic Module (QCM).** Figure 8 shows the basic **raft** unit: an assembly of two sequential Accelerator Structures, one Quad module, waveguides and a frame keeping these items in their proper position. Placed in a cryostat filled with LN\({}_{2}\), the functionality of the QCM can be fully tested [15]. For this, the relative positions of the Quad and the Accelerator Structures can be monitored with Rasnik, as is shown in figures 9 and 10. The optical paths are confined in the raft's construction bars. Since only the laser diodes and image sensors dissipate their power, there is little bubble formation inside these bars, and the LN\({}_{2}\) flow in the bar is expected to be directed in Z, and laminar. Figure 8: The raft unit. Figure 9: Layout of the 8 Sticks in the QCM. With calibrated Sticks, the relative positions of the two Accelerator Structures and the Quad are known. Figure 10: Cross section of the QCM. Since all Sticks are identical, interfaces are required between a Stick and either an Accelerator Structure (left) or the Quad (right). These interfaces may be an integral part of the Accelerator Structure or the Quad.
With the **Calibration Station**, shown in figure 11, placed in a normal lab environment, the 3x X and 3x Y coordinates of the source, zone plate and sensor, respectively, with respect to the mechanical reference, can be obtained for each individual Stick [16]. After measuring the alignment error of the Station, the calibration of any Stick is obtained by placing sets of three Sticks on the station, such that each Stick will pass at least one time the left-, middle- and right position on the Station. We thank Nikhef for facilitating tests of crucial components in their dewar, and we are grateful to Amolf to provide us with LN\({}_{2}\). We thank Oscar van Petten for producing mechanical supports, Berend Munneke for his practical assistance, and Nico Rem for keeping us working safe.
2305.18000
Dissipation, quantum coherence, and asymmetry of finite-time cross-correlations
Recent studies have revealed a deep connection between the asymmetry of cross-correlations and thermodynamic quantities in the short-time limit. In this study, we address the finite-time domain of the asymmetry for both open classical and quantum systems. Focusing on Markovian dynamics, we show that the asymmetry observed in finite-time cross-correlations is upper bounded by dissipation. We prove that, for classical systems in a steady state with arbitrary operational durations, the asymmetry exhibits, at most, linear growth over time, with the growth speed determined by the rates of entropy production and dynamical activity. In the long-time regime, the asymmetry exhibits exponential decay, with the decay rate determined by the spectral gap of the transition matrix. Remarkably, for quantum cases, quantum coherence is equally important as dissipation in constraining the asymmetry of correlations. We demonstrate an example where only quantum coherence bounds the asymmetry while the entropy production rate vanishes. Furthermore, we generalize the short-time bounds on correlation asymmetry, as reported by Shiraishi [Phys. Rev. E 108, L042103 (2023)] and Ohga et al. [Phys. Rev. Lett. 131, 077101 (2023)], to encompass finite-time scenarios. These findings offer novel insights into the thermodynamic aspects of correlation asymmetry.
Tan Van Vu, Van Tuan Vo, Keiji Saito
2023-05-29T10:22:57Z
http://arxiv.org/abs/2305.18000v3
# Dissipation, quantum coherence, and asymmetry of finite-time cross-correlations ###### Abstract Recent studies have revealed a deep connection between the asymmetry of cross-correlations and thermodynamic quantities in the short-time limit. In this Letter, we address the finite-time domain of the asymmetry for both open classical and quantum systems. Focusing on Markovian dynamics, we show that the asymmetry observed in finite-time cross-correlations is upper bounded by dissipation. We prove that, for classical systems in a steady state with arbitrary operational durations, the asymmetry exhibits, at most, linear growth over time, with the growth speed determined by the rates of entropy production and dynamical activity. In the long-time regime, the asymmetry exhibits exponential decay, with the decay rate determined by the spectral gap of the transition matrix. Remarkably, for quantum cases, quantum coherence is equally important as dissipation in constraining the asymmetry of correlations. We demonstrate an example where only quantum coherence bounds the asymmetry while the entropy production rate vanishes. Furthermore, we generalize the short-time bounds on correlation asymmetry, as reported by Shiraishi [arXiv:2304.12775] and Ohga et al. [arXiv:2303.13116], to encompass finite-time scenarios. These findings offer novel insights into the thermodynamic aspects of correlation asymmetry. _Introduction.--_Asymmetry, which refers to the absence of symmetry, constitutes a fundamental concept in physics. The presence of asymmetry typically results in nontrivial and crucial consequences for a given system and has thus drawn considerable attention in various scientific disciplines. Nonequilibrium thermodynamics is one such field where the concept of asymmetry is crucial [1; 2]. For instance, the violation of time-reversal symmetry indicates the existence of nonequilibrium conditions, and the degree of such a violation is closely related to entropy production, with asymmetry being reflected by fluctuation theorems [3; 4; 5; 6; 7; 8]. As manifested in the thermodynamic uncertainty relation [9; 10; 11; 12; 13] and the entropic bound [14], the asymmetry of arbitrary currents is constrained by dissipation. Another intriguing phenomenon is the relaxation asymmetry, which asserts that the heating process is faster than the cooling one [15; 16; 17]. Furthermore, asymmetry can be harnessed to enhance the performance of heat engines [18; 19]. Cross-correlation is a fundamental quantity that embodies both temporal and spatial information pertaining to physical systems. Fluctuation-dissipation theorems [20] establish that the response of a nonequilibrium steady-state system to a small perturbation can be expressed in terms of cross-correlation [21]. The investigation of correlation properties has progressed in various directions, including linear response theory [22; 23], speed limits for auto-correlation [24; 25; 26; 27; 28], and thermodynamic inference of entropy production using the correlation between different observables [29], to name a few. Recently, a deep connection between the asymmetry of cross-correlations and thermodynamic quantities has been reported for steady-state systems in the _short-time_ limit [30; 31], providing novel insight into the fluctuation of oscillations. The correlation asymmetry should also contain information on the time reversal-symmetry breaking for the whole time regime [32; 33]. 
Thus, it is quite important to unveil the in-depth relationship between the asymmetry and the thermodynamic costs in the entire _finite-time_ domain, as well as to explore quantum effects on correlation. In the present Letter, we address these open problems by considering classical and quantum Markov processes of discrete-state systems whose initial state is a stationary state. We prove that the asymmetry exhibited in finite-time cross-correlations is always bounded from above by the thermodynamic costs for the whole time regime. Specifically, for classical Markov jump processes, we show that the asymmetry grows at most linearly in time, with the velocity determined by the rates of entropy production and dynamical activity; in the long-time regime, it exponentially decays with the rate determined by the spectral gap of the transition matrix [cf. Eq. (7)]. These provide a basic picture of the asymmetry in generic Markov processes. Variants of this structure are found in other cases with multi-time and multi-observable correlations, as well as in other dynamics such as discrete-time Markov chains, continuous-state overdamped Langevin systems, and open quantum systems. Remarkably, by considering the Lindblad master equations for open quantum systems, we find that quantum coherence plays a crucial role in the asymmetry of correlations [cf. Eq. (11)]. We demonstrate that degeneracy in the spectrum of the steady-state density matrix can lead to a nontrivial phenomenon wherein only quantum coherence is responsible for a finite asymmetry with zero entropy production. In addition, we provide finite-time generalizations for the short-time bounds reported in Refs. [30; 31] [cf. Eqs. (13) and (14)]. These findings further deepen our understanding of the asymmetry of correlations from the thermodynamic perspective. Figure 1: Numerical illustration of the thermodynamic bounds on the asymmetry of cross-correlations in terms of (a) entropy production [cf. Eq. (7)] and (b) thermodynamic affinity [cf. Eq. (14)] in a three-state biochemical oscillation [34]. Observables \(a\) and \(b\) are randomly sampled in the range \([-1,1]\), and the forward and backward transition rates are \(w_{+}=2\) and \(w_{-}=1\), respectively. _Setup._--We consider a Markov jump process described by the master equation: \[\ket{\dot{p}_{t}}=W\ket{p_{t}}, \tag{1}\] where the dot \(\cdot\) denotes the time derivative, \(\ket{p_{t}}=[p_{1}(t),\ldots,p_{N}(t)]^{\top}\) is the probability distribution at time \(t\), and \(W=[w_{mn}]\in\mathbb{R}^{N\times N}\) denotes the time-independent transition matrix with \(w_{mn}\geq 0\) being the jump rate from state \(n\) to \(m\,(\neq n)\) and \(w_{nn}=-\sum_{m(\neq n)}w_{mn}\). We assume the local detailed balance \(\ln(w_{mn}/w_{nm})=\Delta s_{mn}\), i.e., the log of the ratio of transition rates is related to the entropy change in the environment \(\Delta s_{mn}\). Hereafter, we consider the case that the system is in a nonequilibrium steady state \(\ket{\pi}\). Let \(j_{mn}\coloneqq w_{mn}\pi_{n}-w_{nm}\pi_{m}\) be the steady-state probability current; then, the master equation (1) implies \(\sum_{m}j_{nm}=0\). Two quantities of importance are the rates of entropy production and dynamical activity, defined as \[\sigma \coloneqq\sum_{m>n}j_{mn}\ln\frac{w_{mn}\pi_{n}}{w_{nm}\pi_{m}}, \tag{2}\] \[\gamma \coloneqq\sum_{m>n}(w_{mn}\pi_{n}+w_{nm}\pi_{m}).
\tag{3}\] Qualitatively, \(\sigma\) quantifies the degree of thermodynamic irreversibility, whereas \(\gamma\) reflects the timescale of the system [35]. Another relevant quantity is dynamical state mobility [36], which characterizes the response of probability currents against thermodynamic forces and is defined as \[\kappa \coloneqq\sum_{m>n}\frac{j_{mn}}{\ln(w_{mn}\pi_{n}/w_{nm}\pi_{m})}. \tag{4}\] The relation \(\kappa\leq\gamma/2\) holds in general. Next, we introduce cross-correlation and some notations to be used in this study. Let \(\ket{a}=[a_{1},\ldots,a_{N}]^{\top}\) and \(\ket{b}=[b_{1},\ldots,b_{N}]^{\top}\) be arbitrary observables. The two-time cross-correlation between these two observables can be defined as \[C_{ba}^{\tau} \coloneqq\left\{b(\tau)a(0)\right\}, \tag{5}\] where \(a(t)\left[b(t)\right]\) takes the value of \(a_{n}\left[b_{n}\right]\) if the system is in state \(n\) at time \(t\) and the average \(\left\langle\cdot\right\rangle\) is over all stochastic trajectories of time period \(\tau\). Defining \(\Pi=\mathrm{diag}(\pi_{1},\ldots,\pi_{N})\), then the cross-correlation can be analytically expressed as \(C_{ba}^{\tau}=\left\langle b|e^{W\tau}\Pi|a\right\rangle\). We are interested in the asymmetry of cross-correlations \[\delta C_{ba}^{\tau} \coloneqq C_{ba}^{\tau}-C_{ab}^{\tau}, \tag{6}\] which vanishes in equilibrium. However, it is not the case for nonequilibrium situations. Qualitatively, \(\delta C_{ba}^{\tau}\) reflects symmetry breaking in the causality of observations and can reveal essential aspects of system dynamics, such as the extent to which the system deviates from equilibrium. Several thermodynamic bounds for this quantity have recently been derived in the \(\tau\to 0\) limit [30; 31]. In this Letter, we focus on the entire _finite-time_ regime. Let \(\left\{\lambda_{n}\right\}\) be the set of eigenvalues and \(\left\{\ket{v_{n}^{l}},\ket{v_{n}^{r}}\right\}\) be the left and right eigenvectors of \(W\), respectively (i.e., \(\left\{v_{n}^{l}\right|W=\left\langle v_{n}^{l}\right|\lambda_{n}\text{ and }W\ket{v_{n}^{r}}=\lambda_{n}\ket{v_{n}^{r}}\right\rangle\)). The largest eigenvalue \(\lambda_{1}=0\) is associated with the eigenvector \(\left\{v_{1}^{l}\right|\propto\left\langle 1\right|\), whereas all other eigenvalues have a negative real part, \(0>\mathrm{Re}\{\lambda_{2}\}\geq\cdots\geq\mathrm{Re}\{\lambda_{N}\}\). Here, \(\left|1\right\rangle\) denotes the all-one vector. The eigenvectors are normalized, \(\left\{v_{n}^{l}\right|v_{n}^{l}\}=1\). Since \(\left\{\ket{v_{n}^{l}}\right\}\) forms a basis of \(\mathbb{C}^{N}\), we can always find coefficients \(\left\{\tilde{z}_{n}\right\}\) for any vector \(\ket{z}\) such that \(\ket{z}=\sum_{n}\tilde{z}_{n}\ket{v_{n}^{l}}\). Hereafter, we define the \(\ell_{1}\)-norm \(\|z\|_{*}=\sum_{n\geq 2}\left|\tilde{z}_{n}\right|\) and \(\ell_{2}\)-norm \(\|z\|_{2}\coloneqq\sqrt{\left\langle z\right|z}\). An important quantity is the spectral gap \(g\coloneqq-\mathrm{Re}\{\lambda_{2}\}>0\), which corresponds to the slowest decay mode and characterizes the relaxation speed of the system. _Main results._--Given the above setup, we are now ready to explain our results; a simple numerical illustration is presented in Fig. 1. 
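Before turning to the bounds, note that all quantities defined above are directly computable for small models. The following sketch is our own illustration (arbitrary random rates and observables, rather than the biochemical-oscillation model of Fig. 1): it obtains the steady state, evaluates \(\sigma\), \(\gamma\), \(\kappa\), and computes the finite-time asymmetry \(\delta C_{ba}^{\tau}\).

```python
# Minimal sketch (ours): steady state, entropy production rate, dynamical
# activity, mobility, and the finite-time cross-correlation asymmetry for a
# randomly chosen 3-state Markov jump process.
import numpy as np
from scipy.linalg import expm, null_space

rng = np.random.default_rng(0)
N = 3
W = rng.uniform(0.5, 2.0, size=(N, N))      # w_mn: jump rate from n to m
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))         # columns sum to zero

pi = null_space(W)[:, 0]
pi = pi / pi.sum()                          # steady state, W |pi> = 0
Pi = np.diag(pi)

sigma = gamma = kappa = 0.0
for m in range(N):
    for n in range(m):                      # each edge (m > n) once
        jp, jm = W[m, n] * pi[n], W[n, m] * pi[m]
        f = np.log(jp / jm)                 # edge affinity ln(w_mn pi_n / w_nm pi_m)
        sigma += (jp - jm) * f              # Eq. (2)
        gamma += jp + jm                    # Eq. (3)
        kappa += (jp - jm) / f              # Eq. (4)

a = rng.uniform(-1.0, 1.0, N)               # arbitrary observables
b = rng.uniform(-1.0, 1.0, N)
tau = 0.7
C_ba = b @ expm(W * tau) @ Pi @ a           # <b(tau) a(0)>
C_ab = a @ expm(W * tau) @ Pi @ b           # <a(tau) b(0)>
print(sigma, gamma, kappa, C_ba - C_ab)     # asymmetry  delta C_ba^tau
```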
Our first main result is a thermodynamic bound on the asymmetry of finite-time cross-correlations: \[\frac{\left|\delta C_{ba}^{\tau}\right|}{\left|a\right|_{*}\left|b\right|_{*}} \leq\tau e^{-g\tau}\sigma\Phi\!\left(\frac{\sigma}{2\gamma}\right)^{-1}\leq 2 \tau e^{-g\tau}\sqrt{\sigma\kappa}, \tag{7}\] where \(\Phi(u)\) denotes the inverse function of \(u\tanh(u)\) and satisfies \(\Phi(u)\geq\max\{u,\sqrt{u}\}\,\forall u\geq 0\). The proof is postponed to the end of the Letter. Bound (7) implies that the thermodynamic costs govern the asymmetry over the whole time domain. In the small-time regime, the asymmetry of cross-correlations increases at most linear in time with speed constrained by the entropy production and dynamical activity rates. On the other hand, in the large-time regime, the asymmetry exponentially decays with the rate being the spectral gap. The result can be analogously generalized to other scenarios, such as discrete-time Markov chains, continuous-state overdamped Langevin systems, and multiple observables, as shown in the Supplemental Material (SM) [37]. Notice that \(\delta C_{ba}^{\tau}\) is identical with the difference between cross-correlations in the original dynamics and the dual dynamics [38] with transition rates \(\widehat{W}=\Pi W^{\dagger}\Pi^{-1}\)[39]. Therefore, the bound (7) can also be interpreted as a thermodynamic bound for the discrepancy between these dynamics in terms of observables. In other contexts, \(\delta C_{ba}^{\tau}\) can be employed as a natural measure to study interactions in complex systems [40], dynamic co-localization, diffusion, binding in living cells [41], and information flow in biological systems [30; 42]. Thus, the inequality (7) not only provides information on the temporal behavior of such a measure but also describes the constraints imposed by the thermodynamic costs. Next, we extend the result (7) to quantum cases, which include autonomous thermal engines [43; 44] and quantum measurement processes [45]. We consider a Markovian open quantum system, which is weakly coupled to a single or multiple reservoirs. The time evolution of the reduced density matrix is described by the Lindblad equation, \(\dot{\varrho}_{t}=\mathcal{L}(\varrho_{t})\)[46; 47], where \[\mathcal{L}(\varrho)\coloneqq-i[H,\varrho]+\sum_{k}(L_{k}\varrho L_{k}^{ \dagger}-\{L_{k}^{\dagger}L_{k},\varrho\}/2) \tag{8}\] with \(H\) is the time-independent Hamiltonian and \(\{L_{k}\}\) denote jump operators. In order to be thermodynamically consistent, we assume the local detailed balance condition [48; 49]; that is, the jump operators come in pairs \((k,k^{\prime})\) such that \(L_{k}=e^{\Delta s_{k}/2}L_{k^{\prime}}^{\dagger}\), where \(\Delta s_{k}\) denotes the entropy change in the environment. Let \(\pi\) be the steady state and \(\pi=\sum_{n}\pi_{n}\ket{n}\!\bra{n}\) be its spectral decomposition. The rates of irreversible entropy production and dynamical activity are given by \(\sigma=\sum_{k}\mathrm{tr}\big{\{}L_{k}\pi L_{k}^{\dagger}\big{\}}\Delta s_{k}\) and \(\gamma=\sum_{k}\mathrm{tr}\big{\{}L_{k}\pi L_{k}^{\dagger}\big{\}}\), respectively. The system is measured by the eigenbasis \(\{\ket{n}\!\bra{n}\}\) at both the initial and final times. Note that this two-point measurement scheme does not alter the steady state of the system. Define observables \(A\coloneqq\sum_{n}a_{n}\ket{n}\!\bra{n}\) and \(B\coloneqq\sum_{n}b_{n}\ket{n}\!\bra{n}\). As will be shown later, quantum properties emerge even for this measurement basis. 
In this case, the cross-correlation can be analytically calculated as [50] \[C_{ba}^{\tau}=\langle b(\tau)a(0)\rangle=\mathrm{tr}\big{\{}Be^{\mathcal{L} \tau}(A\pi)\big{\}}. \tag{9}\] Let \(\tilde{\mathcal{L}}\) be an adjoint superoperator of \(\mathcal{L}\), defined as \[\tilde{\mathcal{L}}(\varrho)\coloneqq i[H,\varrho]+\sum_{k}(L_{k}^{\dagger} \varrho L_{k}-\{L_{k}^{\dagger}L_{k},\varrho\}/2). \tag{10}\] Note that both \(\tilde{\mathcal{L}}\) and \(\mathcal{L}\) have the same eigenvalue spectrum, \(\tilde{\mathcal{L}}(V_{n})=\lambda_{n}V_{n}\), where \(0=\lambda_{1}>\mathrm{Re}\{\lambda_{2}\}\geq\dots\) and the eigenvectors are normalized such that \(\lVert V_{n}\rVert_{\infty}=1\). For any operator \(X\), let \(X_{t}\coloneqq e^{\tilde{\mathcal{L}}t}(X)\) be the time-evolved operator in the Heisenberg picture, and \(\lVert X\rVert_{*}\coloneqq\sum_{n\geq 2}\abs{z_{n}^{x}}\) be the \(\ell_{1}\)-norm of \(X\), where \(X=\sum_{n}z_{n}^{x}V_{n}\). The definition of the spectral gap is analogous to the classical case, \(g\coloneqq-\mathrm{Re}\{\lambda_{2}\}\). With the above setup in place, we obtain a quantum extension of the classical result (7), indicating that the asymmetry of cross-correlations is limited by both dissipation and quantum coherence [37]: \[\lvert\delta C_{ba}^{\tau}\rvert\leq\tau e^{-g\tau}\bigg{[}\mathcal{C}+ \lVert A\rVert_{*}\lVert B\rVert_{*}\sigma\Phi\bigg{(}\frac{\sigma}{2\gamma} \bigg{)}^{-1}\bigg{]}. \tag{11}\] Here, \(\mathcal{C}\) is a quantum coherence term that quantifies the amount of quantum coherence generated in the time-evolved observables in the Heisenberg picture, given by \[\mathcal{C}\coloneqq\Theta\int_{0}^{1}ds\big{[}\lVert B\rVert_{*}C_{\ell_{1} }(A_{(1-s)\tau})+\lVert A\rVert_{*}C_{\ell_{1}}(B_{s\tau})\big{]}, \tag{12}\] where \(\Theta\coloneqq 4\lVert H\rVert_{\infty}+3\sum_{k}\lVert L_{k}\rVert_{\infty}^{2}\) and \(C_{\ell_{1}}(X_{t})\coloneqq e^{gt}\sum_{mn}\left\{\langle m|X_{t}|n\rangle\right\}\) is the \(\ell_{1}\)-norm of quantum coherence in the eigenbasis [51], which is upper bounded by \(N(N-1)\lVert X\rVert_{*}\) for all \(t\geq 0\). In the classical limit (e.g., \(H=0\) and \(L_{k}\propto\lvert m\rangle\!\langle n\rvert\)), \(\mathcal{C}\) vanishes. Bound (11) establishes a qualitative and quantitative relationship between asymmetry, thermodynamic irreversibility, and quantum coherence. Remarkably, \(\delta C_{ba}^{\tau}\) can be nonzero even when \(\sigma=0\), indicating that quantum coherence is inevitable in the bound (see the SM [37] for an analytical analysis). We numerically illustrate this critical role of quantum coherence in a three-level maser [52], which can operate as a heat engine or refrigerator. In a rotating frame [53; 54; 55], the time evolution of the density matrix can be described by the Hamiltonian \(H=-\Delta\sigma_{22}+\Omega(\sigma_{12}+\sigma_{21})\) and the jump operators \(L_{1}=\sqrt{\alpha_{h}(N_{h}+1)}\sigma_{13}\), \(L_{1^{\prime}}=\sqrt{\alpha_{h}N_{h}}\sigma_{31}\), \(L_{2}=\sqrt{\alpha_{c}(N_{c}+1)}\sigma_{23}\), and \(L_{2^{\prime}}=\sqrt{\alpha_{c}N_{c}}\sigma_{32}\), where \(\sigma_{ij}=\lvert\epsilon_{i}\rangle\!\langle\epsilon_{j}\rvert\). We exclusively consider the \(N_{h}=N_{c}\) case, in which \(\sigma=0\). 
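As an illustration of how Eq. (9) can be evaluated in practice, the sketch below is our own; the drive, coupling, and bath-occupation values are placeholders of our choosing, not taken from Ref. [52]. It vectorizes the Lindbladian of the maser above, obtains the steady state \(\pi\) numerically, and computes \(C_{ba}^{\tau}\) and \(C_{ab}^{\tau}\) for observables diagonal in an eigenbasis of \(\pi\).

```python
# Sketch (ours) of Eq. (9), C_ba(tau) = tr{ B e^{L tau}(A pi) }, for the
# three-level maser above; numerical parameter values are placeholders.
import numpy as np
from scipy.linalg import expm

d = 3
def op(i, j):                                # sigma_ij = |eps_i><eps_j| (1-based)
    M = np.zeros((d, d), dtype=complex)
    M[i - 1, j - 1] = 1.0
    return M

Delta, Omega, alpha_h, alpha_c, Nh, Nc = 0.1, 0.3, 1.0, 0.6, 0.5, 0.5
H = -Delta * op(2, 2) + Omega * (op(1, 2) + op(2, 1))
Ls = [np.sqrt(alpha_h * (Nh + 1)) * op(1, 3), np.sqrt(alpha_h * Nh) * op(3, 1),
      np.sqrt(alpha_c * (Nc + 1)) * op(2, 3), np.sqrt(alpha_c * Nc) * op(3, 2)]

# Vectorized Lindbladian, column-stacking convention vec(AXB) = (B^T kron A) vec(X)
I = np.eye(d)
Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))
for L in Ls:
    LdL = L.conj().T @ L
    Lmat += np.kron(L.conj(), L) - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I))

# Steady state: eigenvector of Lmat with eigenvalue closest to zero
w, V = np.linalg.eig(Lmat)
pi = V[:, np.argmin(np.abs(w))].reshape(d, d, order="F")
pi = pi / np.trace(pi)
pi = 0.5 * (pi + pi.conj().T)                # remove tiny numerical asymmetry

vals, U = np.linalg.eigh(pi)                 # measurement basis: an eigenbasis of pi
a = np.array([1.0, -0.5, 0.2])               # arbitrary measurement outcomes
b = np.array([0.3, 0.9, -1.0])
A = U @ np.diag(a) @ U.conj().T
B = U @ np.diag(b) @ U.conj().T

def corr(X, Y, tau):                         # tr{ Y e^{L tau}(X pi) }
    v = (X @ pi).flatten(order="F")
    return np.trace(Y @ (expm(Lmat * tau) @ v).reshape(d, d, order="F"))

tau = 2.0
print((corr(A, B, tau) - corr(B, A, tau)).real)   # delta C_ba^tau
```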
Due to degeneracy in the spectrum of \(\pi\), the measurement basis can be chosen as \(|1\rangle=e^{i\phi}\cos\theta\,|\epsilon_{1}\rangle+\sin\theta\,|\epsilon_{2}\rangle\), \(|2\rangle=-\sin\theta\,|\epsilon_{1}\rangle+e^{-i\phi}\cos\theta\,|\epsilon_{2}\rangle\), and \(|3\rangle=|\epsilon_{3}\rangle\), where \(\phi\) and \(\theta\) are arbitrary real numbers. As confirmed in Fig. 2, the asymmetry of correlations does not vanish and is bounded solely by the quantum coherence term \(\mathcal{C}\).

In the sequel, we focus on quantifications of the normalized asymmetry that have been considered in the literature. Our second main result is a thermodynamic bound for the normalized asymmetry solely in terms of entropy production (see the SM [37] for the proof): \[\frac{|\delta C_{ba}^{\tau}|^{2}}{D_{a}^{\tau}+D_{b}^{\tau}}\leq\tau\sigma\min\Bigg\{|a^{2}+b^{2}|_{\infty},\Bigg[\frac{\max_{c}\ell_{c}}{2N\tan(\pi/N)}\Bigg]^{2}\Bigg\}, \tag{13}\] where \(D_{a}^{\tau}=C_{aa}^{0}-C_{aa}^{\tau}\) is the decay of the auto-correlation [30], \(|a^{2}+b^{2}|_{\infty}=\max_{n}(a_{n}^{2}+b_{n}^{2})\), the maximum is over all cycles of a uniform cycle decomposition [56; 57], and \(\ell_{c}:=\sum_{i}\sqrt{(a_{n_{i}}-a_{n_{i+1}})^{2}+(b_{n_{i}}-b_{n_{i+1}})^{2}}\) is the length of cycle \(c=(n_{1},\ldots,n_{|c|})\). Here, \(|c|\) denotes the size of \(c\), and the cycle indices are periodic, \(n_{|c|+1}\equiv n_{1}\). In the short-time limit, bound (13) reduces to the extant result obtained by Shiraishi [31] [see Eqs. (6) and (8) therein]. Therefore, it can be regarded as a finite-time generalization of Shiraishi's bound. Since the normalized asymmetry is experimentally measurable, our bound can be used to estimate entropy production from trajectory data obtained in experiments, similar to the tool provided by the thermodynamic uncertainty relation [58; 59; 60; 61]. Furthermore, the fact that the asymmetry of cross-correlations decays exponentially, as shown in Eq. (7), leads us to anticipate the existence of exponentially decaying bounds. Investigating such bounds is left as future work.

So far, we have demonstrated that entropy production serves as a limit for the asymmetry of cross-correlations in finite time. Our third main result is an additional constraint expressed in terms of thermodynamic affinity, given by \[\frac{|\delta C_{ba}^{\tau}|}{2\sqrt{D_{a}^{\tau}D_{b}^{\tau}}}\leq\max_{c}\frac{\tanh(\mathcal{F}_{c}^{\tau}/2|c|)}{\tan(\pi/|c|)}\leq\max_{c}\frac{\mathcal{F}_{c}^{\tau}}{2\pi}, \tag{14}\] where \(\mathcal{F}_{c}^{\tau}\) is the thermodynamic affinity associated with a cycle \(c=(n_{1},\ldots,n_{|c|})\) in the temporally coarse-grained dynamics with timescale \(\tau\): \[\mathcal{F}_{c}^{\tau}=\ln\frac{w_{n_{2}n_{1}}^{\tau}w_{n_{3}n_{2}}^{\tau}\ldots w_{n_{1}n_{|c|}}^{\tau}}{w_{n_{1}n_{2}}^{\tau}w_{n_{2}n_{3}}^{\tau}\ldots w_{n_{|c|}n_{1}}^{\tau}}. \tag{15}\] Here, \(w_{mn}^{\tau}\coloneqq[e^{W\tau}]_{mn}\) is the conditional transition probability from state \(n\) to \(m\) within time \(\tau\). The proof is presented in the SM [37]. In the short- and long-time limits, we can show that \(\lim_{\tau\to 0}\mathcal{F}_{c}^{\tau}=\mathcal{F}_{c}\) and \(\lim_{\tau\to\infty}\mathcal{F}_{c}^{\tau}=0\), where \(\mathcal{F}_{c}\) denotes the thermodynamic affinity defined for the transition rate matrix. Therefore, bound (14) can be considered a finite-time generalization of the extant bound reported by Ohga and coworkers [30].
Additionally, it provides an estimate of the maximum thermodynamic affinity in the temporally coarse-grained dynamics. This is highly relevant from an experimental point of view due to the limited measurement resolution and may provide new insights into determining dissipative timescales in terms of thermodynamic affinity [62]. Based on numerical evidence, we conjecture that \[\max_{c}\mathcal{F}_{c}^{\tau}\leq e^{-\Delta\tau}\max_{c}\mathcal{F}_{c} \tag{16}\] holds for some nonnegative constant \(\Delta\). Proving this inequality (even for the simplest case \(\Delta=0\)) and combining it with Eq. (14) would immediately resolve the conjecture given in Ref. [30].

_Proof of Eq._ (7).--For convenience, we define \(J=[j_{mn}]\), which is the matrix of probability currents. By simple algebraic calculations, the asymmetry of cross-correlations can be transformed as follows: \[\delta C_{ba}^{\tau} = \langle b|e^{W\tau}\Pi|a\rangle-\langle a|e^{W\tau}\Pi|b\rangle \tag{17}\] \[= \langle b|e^{W\tau}\Pi|a\rangle-\langle b|\Pi e^{W^{\dagger}\tau}|a\rangle\] \[= \langle b|(e^{W\tau}\Pi-\Pi e^{W^{\dagger}\tau})|a\rangle\,.\] Here, we use the fact that a real scalar equals its own adjoint, \(z=z^{\dagger}\) for any \(z\in\mathbb{R}\), in the second line. Note that for any matrices \(X\) and \(Y\), the following equality always holds: \[e^{X}\Pi-\Pi e^{Y}=\int_{0}^{1}e^{sX}(X\Pi-\Pi Y)e^{(1-s)Y}\,ds\,. \tag{18}\] Applying \(X=W\tau\) and \(Y=W^{\dagger}\tau\) and noting that \(J=W\Pi-\Pi W^{\dagger}\), we can rewrite Eq. (17) as \[\delta C_{ba}^{\tau} = \tau\int_{0}^{1}\,\langle b|e^{sW\tau}Je^{(1-s)W^{\dagger}\tau}|a\rangle\,ds \tag{19}\] \[= \tau\int_{0}^{1}\mathrm{tr}\Big\{Je^{(1-s)W^{\dagger}\tau}\,|a\rangle\!\langle b|\,e^{sW\tau}\Big\}\,ds\,.\] Noting that \(|a\rangle=\sum_{n}\tilde{a}_{n}\big|v_{n}^{l}\big\rangle\), \(|b\rangle=\sum_{n}\tilde{b}_{n}\big|v_{n}^{l}\big\rangle\), and \(\int_{0}^{1}e^{(1-s)x+sy}ds=(e^{x}-e^{y})/(x-y)\), we can simplify the terms in Eq. (19) as \[\delta C_{ba}^{\tau} = \tau\int_{0}^{1}\sum_{m,n}\tilde{a}_{m}\tilde{b}_{n}^{*}\,e^{(1-s)\lambda_{m}^{*}\tau+s\lambda_{n}\tau}\,\mathrm{tr}\Big\{J\,\big|v_{m}^{l}\big\rangle\!\big\langle v_{n}^{l}\big|\Big\}\,ds \tag{20}\] \[= \tau\sum_{m,n\geq 2}\tilde{a}_{m}\tilde{b}_{n}^{*}\,\frac{e^{\lambda_{m}^{*}\tau}-e^{\lambda_{n}\tau}}{(\lambda_{m}^{*}-\lambda_{n})\tau}\,\mathrm{tr}\Big\{J\,\big|v_{m}^{l}\big\rangle\!\big\langle v_{n}^{l}\big|\Big\}\,,\] where the terms with \(m=1\) or \(n=1\) vanish because \(\big|v_{1}^{l}\big\rangle\propto|1\rangle\), \(J\,|1\rangle=|0\rangle\), and \(\langle 1|\,J=\langle 0|\). Noting that \(\big|\mathrm{tr}\big\{J\,\big|v_{m}^{l}\big\rangle\!\big\langle v_{n}^{l}\big|\big\}\big|\leq\sum_{m,n}|j_{mn}|\) and that \(\big|(e^{\lambda_{m}^{*}\tau}-e^{\lambda_{n}\tau})/[(\lambda_{m}^{*}-\lambda_{n})\tau]\big|\leq e^{-g\tau}\) for all \(m,n\geq 2\) (see the SM [37]), we can evaluate as follows: \[|\delta C_{ba}^{\tau}|\leq\tau e^{-g\tau}\sum_{m,n}|j_{mn}|\sum_{m,n\geq 2}|\tilde{a}_{m}||\tilde{b}_{n}|\leq\tau e^{-g\tau}\,\sigma\Phi\!\left(\frac{\sigma}{2\gamma}\right)^{-1}\left|a\right|_{*}\left|b\right|_{*},\] where the last inequality follows from the relation \(\sum_{m,n}|j_{mn}|\leq\sigma\Phi(\sigma/2\gamma)^{-1}\). Dividing both sides by \(\left|a\right|_{*}\left|b\right|_{*}\) and combining the result with \(\Phi(u)\geq\max\{u,\sqrt{u}\}\) yields Eq. (7).
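As a simple numerical illustration of the results above, the following minimal Python sketch evaluates both sides of bound (7) and of the first argument of bound (13) for a randomly generated four-state Markov jump process. It is only a sketch under stated assumptions: the left eigenvectors are normalized to unit sup-norm (mirroring the operator convention \(\lVert V_{n}\rVert_{\infty}=1\) of the quantum case), \(\left|a\right|_{*}\) is evaluated as \(\sum_{n\geq 2}|\tilde{a}_{n}|\) with respect to that normalization, and \(\Phi\) is obtained by numerically inverting \(u\tanh(u)\).

```python
import numpy as np
from scipy.linalg import expm, null_space
from scipy.optimize import brentq

rng = np.random.default_rng(1)
N = 4

# Random irreducible rate matrix W: positive off-diagonal rates, columns summing to zero.
W = rng.uniform(0.5, 2.0, size=(N, N))
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))

# Steady state pi (W|pi> = 0), probability fluxes/currents, entropy production, activity.
pi = null_space(W)[:, 0]
pi = pi / pi.sum()
flux = W * pi                      # flux[m, n] = w_{mn} pi_n
J = flux - flux.T                  # probability currents j_{mn}
off = ~np.eye(N, dtype=bool)
sigma = 0.5 * np.sum(J[off] * np.log(flux[off] / flux.T[off]))   # entropy production rate
gamma = flux[off].sum()                                           # dynamical activity rate

# Spectral gap and left eigenvectors of W (eigenvectors of W^T), sup-norm normalized (assumption).
evals, V_left = np.linalg.eig(W.T)
order = np.argsort(-evals.real)
evals, V_left = evals[order], V_left[:, order]
V_left = V_left / np.abs(V_left).max(axis=0)
gap = -evals[1:].real.max()

# Observables and their expansion coefficients |a> = sum_n a~_n |v_n^l>.
a = rng.standard_normal(N)
b = rng.standard_normal(N)
a_star = np.abs(np.linalg.solve(V_left, a)[1:]).sum()   # assumed |a|_* = sum_{n>=2} |a~_n|
b_star = np.abs(np.linalg.solve(V_left, b)[1:]).sum()

def Phi(u):
    """Inverse of x*tanh(x): the nonnegative solution x of x*tanh(x) = u."""
    return brentq(lambda x: x * np.tanh(x) - u, 1e-12, u + 1.0)

Pi = np.diag(pi)
for tau in [0.1, 0.5, 1.0, 3.0]:
    P = expm(W * tau)
    dC = b @ P @ Pi @ a - a @ P @ Pi @ b          # delta C_{ba}^tau
    Da = a @ Pi @ a - a @ P @ Pi @ a              # decay of auto-correlation D_a^tau
    Db = b @ Pi @ b - b @ P @ Pi @ b
    lhs7 = abs(dC) / (a_star * b_star)
    rhs7 = tau * np.exp(-gap * tau) * sigma / Phi(sigma / (2.0 * gamma))
    lhs13 = dC**2 / (Da + Db)
    rhs13 = tau * sigma * np.max(a**2 + b**2)     # first argument of the min in Eq. (13)
    print(f"tau={tau:4.1f}  (7): {lhs7:.3e} <= {rhs7:.3e}   (13): {lhs13:.3e} <= {rhs13:.3e}")
```

For every \(\tau\), the printed left-hand sides should not exceed the corresponding right-hand sides, consistent with bounds (7) and (13).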
* Marconi _et al._ [2008]U. M. B. Marconi, A. Puglisi, L. Rondoni, and A. Vulpiani, Fluctuation-dissipation: Response theory in statistical physics, Phys. Rep. **461**, 111 (2008). * Mohan and Pati [2022]B. Mohan and A. K. Pati, Quantum speed limits for observables, Phys. Rev. A **106**, 042436 (2022). * Carabba _et al._ [2022]N. Carabba, N. Hornedal, and A. del Campo, Quantum speed limits on operator flows and correlation functions, Quantum **6**, 884 (2022). * Hasegawa [2023]Y. Hasegawa, Thermodynamic correlation inequality, arXiv:2301.03060 (2023). * Dechant _et al._ [2023]A. Dechant, J. Garnier-Brun, and S.-i. Sasa, Thermodynamic bounds on correlation times, arXiv:2303.13038 (2023). * Mori and Shirai [2023]T. Mori and T. Shirai, Symmetrized Liouvillian gap in Markovian open quantum systems, Phys. Rev. Lett. **130**, 230404 (2023). * Dechant and Sasa [2021]A. Dechant and S.-i. Sasa, Improving thermodynamic bounds using correlations, Phys. Rev. X **11**, 041061 (2021). * Ohga _et al._ [2023]N. Ohga, S. Ito, and A. Kolchinsky, Thermodynamic bound on the asymmetry of cross-correlations, arXiv:2303.13116 (2023). * Shiraishi [2023]N. Shiraishi, Entropy production limits all fluctuation oscillations, arXiv:2304.12775 (2023). * Onsager [1931]L. Onsager, Reciprocal relations in irreversible processes. I., Phys. Rev. **37**, 405 (1931). * Onsager [1931]L. Onsager, Reciprocal relations in irreversible processes. II., Phys. Rev. **38**, 2265 (1931). * Barato and Seifert [2017]A. C. Barato and U. Seifert, Coherence of biochemical oscillations is bounded by driving force and network topology, Phys. Rev. E **95**, 062409 (2017). * Maes [2020]C. Maes, Frenesy: Time-symmetric dynamical activity in nonequilibria, Phys. Rep. **850**, 1 (2020). * Van Vu and Saito [2023]T. Van Vu and K. Saito, Thermodynamic unification of optimal transport: Thermodynamic uncertainty relation, minimum dissipation, and thermodynamic speed limits, Phys. Rev. X **13**, 011013 (2023). * [37]See Supplemental Material for the details of the analytical calculations and the generalizations to other scenarios, including discrete-time Markov chains, overdamped Langevin dynamics, and multiple observables, which includes Refs. [68; 69]. * Crooks [2000]G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Phys. Rev. E **61**, 2361 (2000). * [39]Note that the dual dynamics has the same steady state as the original one. Thus, the difference between cross-correlations in these dynamics is quantified by \(\langle b|e^{W\tau}\Pi|a\rangle-\langle b|e^{\widehat{W}\tau}\Pi|a\rangle=\langle b|e^{W\tau}\Pi|a\rangle-\langle b|\Pi e^{W^{\dagger}\tau}|a\rangle=\delta C^{\tau}_{ba}\), which is nothing but the asymmetry of cross-correlations [see Eq. (17)]. * Nolte _et al._ [2006]G. Nolte, F. C. Meinecke, A. Ziehe, and K.-R. Muller, Identifying interactions in mixed and noisy complex systems, Phys. Rev. E **73**, 051913 (2006). * Bacia _et al._ [2006]K. Bacia, S. A. Kim, and P. Schwille, Fluorescence cross-correlation spectroscopy in living cells, Nat. Methods **3**, 83 (2006). * Horowitz and Esposito [2014]J. M. Horowitz and M. 
Esposito, Thermodynamics with continuous information flow, Phys. Rev. X **4**, 031015 (2014). * Mitchison [2019]M. T. Mitchison, Quantum thermal absorption machines: refrigerators, engines and clocks, Contemp. Phys. **60**, 164 (2019). * Manzano _et al._ [2021]G. Manzano, D. Subero, O. Maillet, R. Fazio, J. P. Pekola, and E. Roldan, Thermodynamics of gambling demons, Phys. Rev. Lett. **126**, 080603 (2021). * Wiseman and Milburn [2009]H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, Cambridge, 2009). * Lindblad [1976]G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. **48**, 119 (1976). * Gorini _et al._ [1976]V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, Completely positive dynamical semigroups of N-level systems, J. Math. Phys. **17**, 821 (1976). * Horowitz and Parrondo [2013]J. M. Horowitz and J. M. R. Parrondo, Entropy production along nonequilibrium quantum jump trajectories, New J. Phys. **15**, 085028 (2013). * Manzano and Zambrini [2022]G. Manzano and R. Zambrini, Quantum thermodynamics under continuous monitoring: A general framework, AVS Quantum Sci. **4**, 025302 (2022). * [50]For observables with different measurement basis \(A=\sum_{n}a_{n}\,|a_{n}\rangle\!\langle a_{n}|\) and \(B=\sum_{n}b_{n}\,|b_{n}\rangle\!\langle b_{n}|\), the cross-correlation reads \(C^{\tau}_{ba}=\mathrm{tr}\{Be^{\mathcal{L}\tau}(\sum_{n}a_{n}\,\langle a_{n}| \pi|a_{n}\rangle\,|a_{n}\rangle\!\langle a_{n}|)\}\). * Baumgratz _et al._ [2014]T. Baumgratz, M. Cramer, and M. B. Plenio, Quantifying coherence, Phys. Rev. Lett. **113**, 140401 (2014). * Scovil and Schulz-DuBois [1959]H. E. D. Scovil and E. O. Schulz-DuBois, Three-level masers as heat engines, Phys. Rev. Lett. **2**, 262 (1959). * Boukobza and Tannor [2007]E. Boukobza and D. J. Tannor, Three-level systems as amplifiers and attenuators: A thermodynamic analysis, Phys. Rev. Lett. **98**, 240601 (2007). * Kalaee _et al._ [2021]A. A. S. Kalaee, A. Wacker, and P. P. Potts, Violating the thermodynamic uncertainty relation in the three-level maser, Phys. Rev. E **104**, L012103 (2021). * Van Vu and Saito [2022]T. Van Vu and K. Saito, Thermodynamics of precision in Markovian open quantum dynamics, Phys. Rev. Lett. **128**, 140602 (2022). * Schnakenberg [1976]J. Schnakenberg, Network theory of microscopic and macroscopic behavior of master equation systems, Rev. Mod. Phys. **48**, 571 (1976). * Pietzonka _et al._ [2016]P. Pietzonka, A. C. Barato, and U. Seifert, Affinity- and topology-dependent bound on current fluctuations, J. Phys. A **49**, 34LT01 (2016). * Li _et al._ [2019]J. Li, J. M. Horowitz, T. R. Gingrich, and N. Fakhri, Quantifying dissipation using fluctuating currents, Nat. Commun. **10**, 1666 (2019). * Manikandan _et al._ [2020]S. K. Manikandan, D. Gupta, and S. Krishnamurthy, Inferring entropy production from short experiments, Phys. Rev. Lett. **124**, 120603 (2020). * Van Vu _et al._ [2020]T. Van Vu, V. T. Vo, and Y. Hasegawa, Entropy production estimation with optimal current, Phys. Rev. E **101**, 042138 (2020). * Otsubo _et al._ [2020]S. Otsubo, S. Ito, A. Dechant, and T. Sagawa, Estimating entropy production by machine learning of short-time fluctuating currents, Phys. Rev. E **101**, 062106 (2020). * Cisneros _et al._ [2023]F. A. Cisneros, N. Fakhri, and J. M. Horowitz, Dissipative timescales from coarse-graining irreversibility, arXiv:2305.16197 (2023). * Govern and ten Wolde [2014]C. C. Govern and P. R. 
ten Wolde, Energy dissipation and noise correlations in biochemical sensing, Phys. Rev. Lett. **113**, 258102 (2014). * Cao _et al._ [2015]Y. Cao, H. Wang, Q. Ouyang, and Y. Tu, The free-energy cost of accurate biochemical oscillations, Nat. Phys. **11**, 772 (2015). * Seara _et al._ [2021]D. S. Seara, B. B. Machta, and M. P. Murrell, Irreversibility in dynamical phases and transitions, Nat. Commun. **12** (2021). * Leighton and Sivak [2018]M. P. Leighton and D. A. Sivak, Dynamic and thermo dynamic bounds for collective motor-driven transport, Phys. Rev. Lett. **129**, 118102 (2022). * Liang _et al._ [2022]S. Liang, P. D. L. Rios, and D. M. Busiello, Universal thermodynamic bounds on symmetry breaking in biochemical systems, arXiv:2212.12074 (2022). * Geva and Kosloff [1996]E. Geva and R. Kosloff, The quantum heat engine and heat pump: An irreversible thermodynamic analysis of the three-level amplifier, J. Chem. Phys. **104**, 7681 (1996). * Fan _et al._ [1955]K. Fan, O. Taussky, and J. Todd, An algebraic proof of the isoperimetric inequality for polygons, J. Wash. Acad. Sci. **45**, 339 (1955). **Supplemental Material for** **"Dissipation, quantum coherence, and asymmetry of finite-time cross-correlations"** Tan Van Vu,\({}^{1}\)1, Van Tuan Vo,\({}^{2}\) and Keiji Saito\({}^{2}\) Footnote 1: [email protected] \({}^{1}\)_Analytical quantum complexity RIKEN Hakubi Research Team,_ _RIKEN Center for Quantum Computing (RQC), 2-1 Hirosawa, Wako, Saitama 351-0198, Japan_ \({}^{2}\)_Department of Physics, Kyoto University, Kyoto 606-8502, Japan_ This Supplemental Material describes the details of the analytical calculations presented in the main text and the generalizations to other scenarios, including discrete-time Markov chains, overdamped Langevin dynamics, and multiple observables. The equations and figure numbers are pre-fixed with S [e.g., Eq. (S1) or Fig. S1]. The numbers without this prefix [e.g., Eq. (1) or Fig. 1] refer to the items in the main text. ###### Contents * S1 Generalizations of Eq. (7) to other cases * S1. Discrete-time Markov chains * S2. Overdamped Langevin dynamics * S3. Multiple observables * S2 Proofs of results in the main text * S3. Proof of Eq. (11) * S4. Analytical demonstration of the critical role of quantum coherence * S5. Proof of Eq. (13) * S3 Useful inequalities ## S1 Generalizations of Eq. (7) to other cases ### Discrete-time Markov chains Here we provide a generalization of Eq. (7) for discrete-time Markov chains. We consider a time-homogeneous irreducible Markov chain whose dynamics is governed by the master equation: \[\left|p_{t_{i}}\right\rangle=R\left|p_{t_{i-1}}\right\rangle.\] (S1) Here, \(R=\left[R_{mn}\right]\) is the stochastic matrix with \(R_{mn}\geq 0\) the transition probability from state \(n\) to \(m\). The normalization condition \(\sum_{m}R_{mn}=1\) is satisfied for all \(n\). We consider a finite duration \(\tau\) of \(K\) steps (i.e., \(t_{0}=0\) and \(t_{K}=\tau\)). The system is in a nonequilibrium steady state \(\left|\pi\right\rangle\) (i.e., \(\left|\pi\right\rangle=R\left|\pi\right\rangle\)). 
The steady-state entropy production and dynamical activity at each time step are given by \[\sigma :=\sum_{mn}R_{mn}\pi_{n}\ln\frac{R_{mn}\pi_{n}}{R_{nm}\pi_{m}},\] (S2) \[\gamma :=\sum_{mn}R_{mn}\pi_{n}.\] (S3) The cross-correlation between the observables can be expressed as \[C_{ba}^{\tau} :=\left\langle b(\tau)a(0)\right\rangle=\left\langle b|R^{K}\Pi|a \right\rangle.\] (S4) Let \(\{\lambda_{n}\}\) be the set of eigenvalues of \(R\) and \(\{\left|v_{n}^{l}\right\rangle,\left|v_{n}^{r}\right\rangle\}\) be the left and right eigenvectors, respectively (i.e., \(\left\langle v_{n}^{l}\right|R=\left\langle v_{n}^{l}\right|\lambda_{n}\) and \(R\left|v_{n}^{r}\right\rangle=\lambda_{n}\left|v_{n}^{r}\right\rangle\)). Note that \(1=\lambda_{1}>\left|\lambda_{2}\right|\geq\dots\) and \(\left|v_{1}\right\rangle\propto\left|1\right\rangle\). The spectral gap can thus be defined as \(g\coloneqq-\ln\left|\lambda_{2}\right|\geq 0\). Evidently, \(\left|\lambda_{n}\right|\leq e^{-g}\) for any \(n\geq 2\). Now, we can calculate the asymmetry of cross-correlations as follows: \[\delta C_{ba}^{\tau} =\left\langle b|R^{K}\Pi|a\right\rangle-\left\langle a|R^{K}\Pi|b\right\rangle\] \[=\left\langle b|R^{K}\Pi-\Pi(R^{l})^{K}|a\right\rangle.\] (S5) Note that for any matrices \(X\) and \(Y\), the following relation holds: \[X^{K}\Pi-\Pi Y^{K}=\sum_{k=0}^{K-1}X^{k}(X\Pi-\Pi Y)Y^{K-1-k}.\] (S6) Applying this relation for \(X=R\) and \(Y=R^{\dagger}\) and noting that \(J=R\Pi-\Pi R^{\dagger}\), we can proceed further as \[\delta C_{ba}^{\tau} =\left\langle b|R^{K}\Pi-\Pi(R^{\dagger})^{K}|a\right\rangle\] \[=\sum_{k=0}^{K-1}\left\langle b|R^{k}(R\Pi-\Pi R^{\dagger})(R^{ \dagger})^{K-1-k}|a\right\rangle\] \[=\sum_{k=0}^{K-1}\operatorname{tr}\bigl{\{}J(R^{\dagger})^{K-1 -k}\left|a\right\rangle\!\!\left\langle b\right|R^{k}\bigr{\}}\] \[=\sum_{k=0}^{K-1}\sum_{m,n}\operatorname{tr}\bigl{\{}Ja_{m}b_{n} ^{*}(\lambda_{m}^{*})^{K-1-k}\lambda_{n}^{k}\left|v_{m}^{l}\right\rangle\! \!\left\langle v_{n}^{l}\right|\bigr{\}}.\] Note that \(J\left|1\right\rangle=\left|0\right\rangle\) and \(\left\langle 1\right|J=\left\langle 0\right|\). Consequently, the asymmetry of cross-correlations can be upper bounded as \[\left|\delta C_{ba}^{\tau}\right| =\left|\sum_{k=0}^{K-1}\sum_{m,n\geq 2}\operatorname{tr} \bigl{\{}Ja_{m}b_{n}^{*}(\lambda_{m}^{*})^{K-1-k}\lambda_{n}^{k}\left|v_{m}^{ l}\right\rangle\!\!\left\langle v_{n}^{l}\right|\bigr{\}}\right|\] \[\leq Ke^{-(K-1)g}\sum_{m,n}\left|j_{mn}\right|\sum_{m,n\geq 2} \left|a_{m}b_{n}^{*}|\right.\] \[\leq Ke^{-(K-1)g}\left\|a\right\|_{*}\left\|b\right\|_{*}\sigma \Phi\!\left(\frac{\sigma}{2\gamma}\right)^{-1},\] (S7) which yields the desired generalization for discrete-time systems: \[\frac{\left|\delta C_{ba}^{\tau}\right|}{\left\|a\right\|_{*}\left\|b\right\|_ {*}}\leq Ke^{-(K-1)g}\sigma\Phi\!\left(\frac{\sigma}{2\gamma}\right)^{-1}.\] (S8) ### Overdamped Langevin dynamics We consider a \(d\)-dimensional overdamped Langevin system. Let \(p_{t}(\mathbf{x})\) denote the probability density function of finding the system in state \(\mathbf{x}\) at time \(t\). 
The time evolution of \(p_{t}(\mathbf{x})\) is described by the Fokker-Planck equation: \[\dot{p}_{t}(\mathbf{x})=\mathcal{L}[p_{t}(\mathbf{x})]=-\nabla\cdot\mathbf{j}_{t}(\mathbf{x}),\] (S9) where \(\mathcal{L}[p(\mathbf{x})]\coloneqq-\nabla\cdot[\mathbf{f}(\mathbf{x})p(\mathbf{x})-D\nabla p (\mathbf{x})]\) is the Fokker-Planck operator, \(\mathbf{f}(\mathbf{x})\) is the force vector, and \(D=\operatorname{diag}(D_{1},\dots,D_{d})\) is the matrix of diffusion coefficients. Consider the adjoint operator \(\tilde{\mathcal{L}}\), which is defined as \[\tilde{\mathcal{L}}[p(\mathbf{x})]\coloneqq\mathbf{f}(\mathbf{x})\cdot\nabla p(\mathbf{x})+ \nabla\cdot D\nabla p(\mathbf{x}).\] (S10) The operator \(\tilde{\mathcal{L}}\) is also known as the generator of the backward Fokker-Planck equation. Define the inner product \[\left\langle u(\mathbf{x}),v(\mathbf{x})\right\rangle\coloneqq\int d\mathbf{x}\,u(\mathbf{x})v (\mathbf{x}).\] (S11) For any functions \(u(\mathbf{x})\) and \(v(\mathbf{x})\) such that \(u(\mathbf{x})\mathbf{f}(\mathbf{x})v(\mathbf{x})\), \(u(\mathbf{x})D\nabla v(\mathbf{x})\), and \(v(\mathbf{x})D\nabla u(\mathbf{x})\) vanish at infinity, we prove that \[\left\langle u(\mathbf{x}),\mathcal{L}[v(\mathbf{x})]\right\rangle=\left\langle v(\mathbf{x }),\tilde{\mathcal{L}}[u(\mathbf{x})]\right\rangle.\] (S12) Indeed, by exploiting the boundary conditions, we can show as \[\left\langle u(\mathbf{x}),\mathcal{L}[v(\mathbf{x})]\right\rangle =-\int d\mathbf{x}\,u(\mathbf{x})\nabla\cdot[\mathbf{f}(\mathbf{x})v(\mathbf{x})-D \nabla v(\mathbf{x})]\] \[=\int d\mathbf{x}\,[\mathbf{f}(\mathbf{x})v(\mathbf{x})-D\nabla v(\mathbf{x})]\cdot \nabla u(\mathbf{x})\] \[=\int d\mathbf{x}\,[v(\mathbf{x})\mathbf{f}(\mathbf{x})\cdot\nabla u(\mathbf{x})+v( \mathbf{x})\nabla\cdot D\nabla u(\mathbf{x})]\] \[=\int d\mathbf{x}\,v(\mathbf{x})[\mathbf{f}(\mathbf{x})\cdot\nabla u(\mathbf{x})+ \nabla\cdot D\nabla u(\mathbf{x})]\] \[=\left\langle v(\mathbf{x}),\tilde{\mathcal{L}}[u(\mathbf{x})]\right\rangle.\] (S13) Using this relation, we can immediately derive that \[\left\langle u(\mathbf{x}),e^{\mathcal{L}\tau}[v(\mathbf{x})]\right\rangle=\left\langle v (\mathbf{x}),e^{\tilde{\mathcal{L}}\tau}[u(\mathbf{x})]\right\rangle.\] (S14) We consider the case where the system is in a nonequilibrium steady state \(\pi(\mathbf{x})\) (i.e., \(\mathcal{L}[\pi(\mathbf{x})]=-\nabla\cdot\mathbf{j}(\mathbf{x})=0\)). 
The entropy production rate can be calculated as \[\sigma=\int d\mathbf{x}\,\frac{\mathbf{j}(\mathbf{x})\cdot D^{-1}\mathbf{j}(\mathbf{x})}{\pi(\mathbf{ x})}.\] (S15) The cross-correlation between observables \(a(\mathbf{x})\) and \(b(\mathbf{x})\) can be expressed as \[C^{\tau}_{ba}\coloneqq(b(\mathbf{x}_{\tau})a(\mathbf{x}_{0}))=\int d\mathbf{x}\,b(\mathbf{x}) e^{\tilde{\mathcal{L}}\tau}[a(\mathbf{x})\pi(\mathbf{x})].\] (S16) Using the equality (S14), the asymmetry of cross-correlations can thus be calculated as \[\delta C^{\tau}_{ba} =\int d\mathbf{x}\,b(\mathbf{x})e^{\tilde{\mathcal{L}}\tau}[a(\mathbf{x})\pi (\mathbf{x})]-\int d\mathbf{x}\,a(\mathbf{x})e^{\tilde{\mathcal{L}}\tau}[b(\mathbf{x})\pi(\bm {x})]\] \[=\int d\mathbf{x}\,b(\mathbf{x})\Big{\{}e^{\tilde{\mathcal{L}}\tau}[a(\bm {x})\pi(\mathbf{x})]-\pi(\mathbf{x})e^{\tilde{\mathcal{L}}\tau}[a(\mathbf{x})]\Big{\}}\] \[=\int d\mathbf{x}\,b(\mathbf{x})\int_{0}^{1}ds\,\frac{d}{ds}e^{s\tilde{ \mathcal{L}}\tau}\Big{\{}e^{(1-s)\tilde{\mathcal{L}}\tau}[a(\mathbf{x})]\pi(\mathbf{ x})\Big{\}}\] \[=\tau\int d\mathbf{x}\,b(\mathbf{x})\int_{0}^{1}ds\,e^{s\tilde{\mathcal{ L}}\tau}\big{\{}\mathcal{L}[\pi(\mathbf{x})\circ]-\pi(\mathbf{x})\tilde{\mathcal{L}}[ \circ]\big{\}}\Big{[}e^{(1-s)\tilde{\mathcal{L}}\tau}a(\mathbf{x})\Big{]}.\] (S17) From the stationarity, we also obtain that \[\mathcal{L}[\pi(\mathbf{x})q(\mathbf{x})]-\pi(\mathbf{x})\tilde{\mathcal{L}} [q(\mathbf{x})]\] \[=-\nabla\cdot\left\{\mathbf{f}(\mathbf{x})\pi(\mathbf{x})q(\mathbf{x})-D\nabla[\pi (\mathbf{x})q(\mathbf{x})]\right\}-\pi(\mathbf{x})[\mathbf{f}(\mathbf{x})\cdot\nabla q(\mathbf{x})+ \nabla\cdot D\nabla q(\mathbf{x})]\] \[=-2\mathbf{f}(\mathbf{x})\pi(\mathbf{x})\cdot\nabla q(\mathbf{x})-q(\mathbf{x})\nabla \cdot[\mathbf{f}(\mathbf{x})\pi(\mathbf{x})]+\nabla\cdot D\nabla[\pi(\mathbf{x})q(\mathbf{x})]- \pi(\mathbf{x})\nabla\cdot D\nabla q(\mathbf{x})\] \[=-2\mathbf{f}(\mathbf{x})\pi(\mathbf{x})\cdot\nabla q(\mathbf{x})-q(\mathbf{x})\nabla \cdot[\mathbf{f}(\mathbf{x})\pi(\mathbf{x})]+\nabla\pi(\mathbf{x})\cdot D\nabla q(\mathbf{x})+ \nabla q(\mathbf{x})\cdot D\nabla\pi(\mathbf{x})+q(\mathbf{x})\nabla\cdot D\nabla\pi(\mathbf{x})\] \[=-2\nabla q(\mathbf{x})\cdot[\mathbf{f}(\mathbf{x})\pi(\mathbf{x})-D\nabla\pi( \mathbf{x})]-q(\mathbf{x})\nabla\cdot[\mathbf{f}(\mathbf{x})\pi(\mathbf{x})-D\nabla\pi(\mathbf{x})]\] \[=-2\nabla q(\mathbf{x})\cdot\mathbf{j}(\mathbf{x}).\] (S18) Let \(\{\lambda_{n}\}\) be the discrete spectrum of operator \(\tilde{\mathcal{L}}\) and \(\{\varphi_{n}(\mathbf{x})\}\) be the set of corresponding eigenfunctions: \[\tilde{\mathcal{L}}\varphi_{n}(\mathbf{x})=\lambda_{n}\varphi_{n}(\mathbf{x}),\] (S19) Assume that \(a(\mathbf{x})\) and \(b(\mathbf{x})\) can be expanded in terms of the eigenfunctions as \[a(\mathbf{x})=\sum_{n}a_{n}\varphi_{n}(\mathbf{x}),\] (S20) \[b(\mathbf{x})=\sum_{n}b_{n}\varphi_{n}(\mathbf{x}).\] (S21) The eigenvalue \(\lambda_{1}=0\) corresponds to the eigenfunction \(\varphi_{1}(\mathbf{x})=1\). Other eigenvalues have a negative real part, \(0>\text{Re}\{\lambda_{2}\}\geq\text{Re}\{\lambda_{3}\}\geq\dots\); thus, the spectral gap can be defined as \(g\coloneqq-\text{Re}\{\lambda_{2}\}>0\). Using Eqs. 
(S17) and (S18), we can evaluate the asymmetry of cross-correlations as \[\delta C_{ba}^{\tau} =-\tau\int_{0}^{1}ds\int d\mathbf{x}\,b(\mathbf{x})e^{s\mathcal{L}\tau} \Big{[}\Big{\{}2\nabla e^{(1-s)\tilde{\mathcal{L}}\tau}a(\mathbf{x})\Big{\}}\cdot \mathbf{j}(\mathbf{x})\Big{]}\] \[=-\tau\int_{0}^{1}ds\int d\mathbf{x}\Big{[}\Big{\{}2\nabla e^{(1-s) \tilde{\mathcal{L}}\tau}a(\mathbf{x})\Big{\}}\cdot\mathbf{j}(\mathbf{x})\Big{]}e^{s\tilde {\mathcal{L}}\tau}[b(\mathbf{x})]\] \[=-\tau\int_{0}^{1}ds\int d\mathbf{x}\bigg{[}\bigg{\{}\sum_{n}2\nabla e ^{(1-s)\tau\lambda_{n}}a_{n}\varphi_{n}(\mathbf{x})\bigg{\}}\cdot\mathbf{j}(\mathbf{x}) \bigg{]}\sum_{m}e^{s\tau\lambda_{m}}b_{m}\varphi_{m}(\mathbf{x})\] \[=-\tau\int d\mathbf{x}\int_{0}^{1}ds\sum_{m,n}e^{s\tau\lambda_{m}+(1 -s)\tau\lambda_{n}}b_{m}a_{n}[2\varphi_{m}(\mathbf{x})\nabla\varphi_{n}(\mathbf{x})] \cdot\mathbf{j}(\mathbf{x})\] \[=-\tau\int d\mathbf{x}\sum_{m,n}\frac{e^{\lambda_{m}\tau}-e^{\lambda _{n}\tau}}{(\lambda_{m}-\lambda_{n})\tau}b_{m}a_{n}[2\varphi_{m}(\mathbf{x})\nabla \varphi_{n}(\mathbf{x})]\cdot\mathbf{j}(\mathbf{x})\] \[=-\tau\int d\mathbf{x}\sum_{m,n\geq 2}\frac{e^{\lambda_{m}\tau}-e^{ \lambda_{n}\tau}}{(\lambda_{m}-\lambda_{n})\tau}b_{m}a_{n}[\varphi_{m}(\mathbf{x })\nabla\varphi_{m}(\mathbf{x})-\varphi_{m}(\mathbf{x})\nabla\varphi_{n}(\mathbf{x})] \cdot\mathbf{j}(\mathbf{x})\] \[=\tau e^{-g\tau}\int d\mathbf{x}\,\mathbf{\varphi}_{\tau}(\mathbf{x})\cdot \mathbf{j}(\mathbf{x}),\] (S22) where we use the facts \(\int[\varphi_{1}(\mathbf{x})\nabla\varphi_{n}(\mathbf{x})]\cdot\mathbf{j}(\mathbf{x})\,d\mathbf{x }=\int[\varphi_{m}(\mathbf{x})\nabla\varphi_{1}(\mathbf{x})]\cdot\mathbf{j}(\mathbf{x})\,d \mathbf{x}=0\) and define \[\mathbf{\varphi}_{\tau}(\mathbf{x})\coloneqq\sum_{m,n\geq 2}\frac{e^{(\lambda_{m}+g)\tau} -e^{(\lambda_{n}+g)\tau}}{(\lambda_{m}-\lambda_{n})\tau}b_{m}a_{n}[\varphi_{n }(\mathbf{x})\nabla\varphi_{m}(\mathbf{x})-\varphi_{m}(\mathbf{x})\nabla\varphi_{n}(\mathbf{x })].\] (S23) Applying the Cauchy-Schwarz inequality, we obtain \[|\delta C_{ba}^{\tau}| \leq\tau e^{-g\tau}\bigg{[}\int d\mathbf{x}\,\frac{\mathbf{j}(\mathbf{x}) \cdot D^{-1}\mathbf{j}(\mathbf{x})}{\pi(\mathbf{x})}\bigg{]}^{1/2}\bigg{[}\int d\mathbf{x}\, \pi(\mathbf{x})\mathbf{\varphi}_{\tau}(\mathbf{x})\cdot D\mathbf{\varphi}_{\tau}(\mathbf{x}) \bigg{]}^{1/2}\] \[=\tau e^{-g\tau}\sqrt{\sigma}\bigg{[}\int d\mathbf{x}\,\pi(\mathbf{x}) \mathbf{\varphi}_{\tau}(\mathbf{x})\cdot D\mathbf{\varphi}_{\tau}(\mathbf{x})\bigg{]}^{1/2}.\] (S24) For any vector \(\mathbf{z}=[z_{1},\dots,z_{d}]^{\top}\), define \(|\mathbf{z}|\coloneqq[|z_{1}|,\dots,|z_{d}|]^{\top}\). Since \[\bigg{|}\frac{e^{(\lambda_{m}+g)\tau}-e^{(\lambda_{n}+g)\tau}}{(\lambda_{m}- \lambda_{n})\tau}\bigg{|}\leq 1\ \forall m,n\geq 2,\] (S25) we can upper bound the last term in Eq. 
(S24) as \[\int d\mathbf{x}\,\pi(\mathbf{x})\mathbf{\varphi}_{\tau}(\mathbf{x})\cdot D\mathbf{\varphi}_{\tau} (\mathbf{x})\leq\int d\mathbf{x}\,\pi(\mathbf{x})\mathbf{\phi}(\mathbf{x})\cdot D\mathbf{\phi}(\mathbf{x }),\] (S26) where we define \[\mathbf{\phi}(\mathbf{x})\coloneqq\sum_{m,n\geq 2}|b_{m}\|a_{n}\|\varphi_{n}(\mathbf{x}) \nabla\varphi_{m}(\mathbf{x})-\varphi_{m}(\mathbf{x})\nabla\varphi_{n}(\mathbf{x})|.\] (S27) Define \(\chi_{ba}\coloneqq[\int d\mathbf{x}\,\pi(\mathbf{x})\mathbf{\phi}(\mathbf{x})\cdot D\mathbf{\phi}( \mathbf{x})]^{1/2}\), we obtain the following thermodynamic bound on the asymmetry of cross-correlations: \[\frac{|\delta C_{ba}^{\tau}|}{\chi_{ba}}\leq\tau e^{-g\tau}\sqrt{\sigma}.\] (S28) ### Multiple observables The result (7) can be generalized to the case of multi-time and multi-observables, where the observables \(\left(\big{|}o^{1}\big{\}},\ldots,\big{|}o^{M}\big{\}}\right)\right)\rightrightarrows=\sigma\) are respectively measured at times \((t_{1},\ldots,t_{M})\rightrightarrows\tau\). Here, \(M\geq 2\) is an arbitrary integer number and \(0=t_{1}<t_{2}<\cdots<t_{M}=\tau\). In this case, the cross-correlation can be defined as \[C^{\mathbf{\tau}}_{\mathbf{o}}=\big{\langle}o^{1}(t_{1})\ldots o^{M}(t_{M})\big{\rangle},\] (S29) where \(o^{m}(t)\) takes the value of \(o_{n}^{m}\) if the system is in state \(n\) at time \(t\). Define the reversed observation times \(\tilde{\mathbf{\tau}}\coloneqq(\tau-t_{1},\ldots,\tau-t_{M})\). Then, the following asymmetry of cross-correlations is a quantity of interest: \[\delta C^{\mathbf{\tau}}_{\mathbf{o}}\coloneqq C^{\mathbf{\tau}}_{\mathbf{o}}-C^{\mathbf{\tau}}_{ \mathbf{o}}.\] (S30) We find that this asymmetry is consistently limited by dissipation and decreases exponentially at the rate of the spectral gap: \[\frac{|\delta C^{\mathbf{\tau}}_{\mathbf{o}}|}{\chi_{\mathbf{o}}}\leq\sigma\Phi\Big{(} \frac{\sigma}{2\gamma}\Big{)}^{-1}\sum_{k=2}^{M}(t_{k}-t_{k-1})e^{-g(t_{k}-t_{ k-1})}.\] (S31) Here, \(\chi_{\mathbf{o}}\coloneqq\|o^{1}\|_{*}\|o^{M}\|_{*}\prod_{m=2}^{M-1}\|o^{m}\|_{ \infty}\big{(}\sum_{n=1}^{N}\|v_{n}^{r}\|_{2}\big{)}^{M-2}\) and \(\|z\|_{\infty}=\max_{n}|z_{n}|\) for any vector \(|z\rangle\). Interestingly, bound (S31) indicates that the degree of decay depends on the time interval between consecutive measurements. In what follows, we present the proof of Eq. (S31). For convenience, we define \(O_{m}=\text{diag}(o_{1}^{m},\ldots,o_{N}^{m})\). 
Then, \(C^{\mathbf{\tau}}_{\mathbf{o}}\) can be explicitly expressed as \[C^{\mathbf{\tau}}_{\mathbf{o}}=\big{\langle}o^{M}\big{|}\prod_{k=M-1}^{1}e^{W(t_{k+1} -t_{k})}O_{k}\big{|}\pi\big{\rangle}=\big{\langle}o^{M}\big{|}e^{W(t_{M}-t_{M -1})}O_{M-1}\ldots O_{2}e^{W(t_{2}-t_{1})}\Pi\big{|}o^{1}\big{\rangle}.\] (S32) Consequently, the asymmetry can be calculated as follows: \[\delta C^{\mathbf{\tau}}_{\mathbf{o}} =\big{\langle}o^{M}\big{|}e^{W(t_{M}-t_{M-1})}O_{M-1}\ldots O_{2 }e^{W(t_{2}-t_{1})}\Pi\big{|}o^{1}\big{\rangle}-\big{\langle}o^{1}\big{|}e^{W( t_{2}-t_{1})}O_{2}\ldots O_{M-1}e^{W(t_{M}-t_{M-1})}\Pi\big{|}o^{M}\big{\rangle}\] \[=\big{\langle}o^{M}\big{|}e^{W(t_{M}-t_{M-1})}O_{M-1}\ldots O_{2 }e^{W(t_{2}-t_{1})}\Pi\big{|}o^{1}\big{\rangle}-\big{\langle}o^{M}\big{|}\Pi e ^{W^{\dagger}(t_{M}-t_{M-1})}O_{M-1}\ldots O_{2}e^{W^{\dagger}(t_{2}-t_{1})} \big{|}o^{1}\big{\rangle}\] \[=\big{\langle}o^{M}\big{|}e^{W(t_{M}-t_{M-1})}O_{M-1}\ldots O_{2 }e^{W(t_{2}-t_{1})}\Pi-\Pi e^{W^{\dagger}(t_{M}-t_{M-1})}O_{M-1}\ldots O_{2}e ^{W^{\dagger}(t_{2}-t_{1})}\big{|}o^{1}\big{\rangle}.\] (S33) Noticing that \([O_{k},\Pi]=0\) for all \(k\) and \(e^{Wt}=\sum_{i}e^{\lambda_{i}t}\big{|}v_{i}^{r}\big{\rangle}\!\!\big{\langle}v_ {i}^{l}\big{|}\), we can further transform \(\delta C^{\mathbf{\tau}}_{\mathbf{o}}\) as \[\delta C^{\mathbf{\tau}}_{\mathbf{o}} =\big{\langle}o^{M}\big{|}\sum_{k=2}^{M}e^{W(t_{M}-t_{M-1})}O_{M-1 }\ldots e^{W(t_{k+1}-t_{k})}O_{k}\Big{[}e^{W(t_{k}-t_{k-1})}\Pi-\Pi e^{W^{ \dagger}(t_{k}-t_{k-1})}\Big{]}O_{k-1}e^{W^{\dagger}(t_{k-1}-t_{k-2})}\ldots O _{2}e^{W^{\dagger}(t_{2}-t_{1})}\big{|}o^{1}\big{\rangle}\] \[=\sum_{k=2}^{M}(t_{k}-t_{k-1})\sum_{i,j\geq 2}\frac{e^{\lambda_{i}(t_{k}-t _{k-1})}-e^{\lambda_{j}^{*}(t_{k}-t_{k-1})}}{(\lambda_{i}-\lambda_{j}^{*})(t_{k }-t_{k-1})}\left\{o^{M}\big{|}\prod_{m=M-1}^{k}e^{W(t_{m+1}-t_{m})}O_{m}\big{|}v _{i}^{r}\right\}\big{\langle}v_{i}^{l}\big{|}J\big{|}v_{j}^{l}\big{\rangle} \big{\langle}v_{j}^{r}\big{|}\prod_{m=k-1}^{2}O_{m}e^{W^{\dagger}(t_{m}-t_{m-1 })}\big{|}o^{1}\big{\rangle}.\] (S34) Next, we upper bound the terms in Eq. (S34). 
Note that for any \(i,j\geq 2\), the following inequality holds: \[\left|\frac{e^{\lambda_{i}(t_{k}-t_{k-1})}-e^{\lambda_{j}^{*}(t_{k}-t_{k-1})}}{( \lambda_{i}-\lambda_{j}^{*})(t_{k}-t_{k-1})}\right|\leq e^{-g(t_{k}-t_{k-1})}.\] (S35) In addition, since \(\text{Re}\{\lambda_{n}\}\leq 0\), we can evaluate as follows: \[\big{|}\big{\langle}o^{M}\big{|}\prod_{m=M-1}^{k}e^{W(t_{m+1}-t_{m})}O_{m} \big{|}v_{i}^{r}\big{\rangle}\big{|}=\left|\sum_{1\leq i_{M-1},\ldots,i_{k}\leq N }e^{\sum_{m=M-1}^{k}\lambda_{i_{m}}(t_{m+1}-t_{m})}\left\langle o^{M}\big{|}v_ {i_{M-1}}^{r}\right\rangle\prod_{m=M-1}^{k}\big{\langle}v_{i_{m}}^{l}\big{|}O_{m }\big{|}v_{i_{m-1}}^{r}\right\rangle\] \[\leq\sum_{1\leq i_{M-1},\ldots,i_{k}\leq N}\left|\left\langle \alpha^{M}|v_{i_{M-1}}^{r}\right\rangle\prod_{m=M-1}^{k}\left|O_{m}\right|_{ \infty}\left\langle v_{i_{m}}^{l}|O_{m}|v_{i_{m-1}}^{r}\right\rangle\right|\] \[\leq\sum_{1\leq i_{M-1},\ldots,i_{k}\leq N}\left|\left\langle \alpha^{M}|v_{i_{M-1}}^{r}\right\rangle\right|\prod_{m=M-1}^{k}\left|O_{m} \right|_{\infty}\left\|v_{i_{m}}^{l}|_{2}|v_{i_{m-1}}^{r}|_{2}\] \[=\prod_{m=M-1}^{k}\left|O_{m}\right|_{\infty}\left(\sum_{1\leq i_{ M-1},\ldots,i_{k}\leq N}|\tilde{\alpha}_{i_{M-1}}^{M}|\prod_{m=M-1}^{k} \left\|v_{i_{m-1}}^{r}\right\|_{2}\right)\] \[\leq\|\alpha^{M}\|_{*}\left(\sum_{n=1}^{N}\left|v_{n}^{r}\right\| _{2}\right)^{M-1-k}\|v_{i}^{r}\|_{2}\prod_{m=M-1}^{k}\|O_{m}\|_{\infty}.\] (S36) where \(i_{k-1}\equiv i\). Likewise, we can also obtain \[\left|\left\langle v_{j}^{r}\right|\prod_{m=k-1}^{2}O_{m}e^{W^{ \dagger}(t_{m}-t_{m-1})}\big{|}\circ^{1}\right\rangle\leq\|\circ^{1}\|_{*} \left(\sum_{n=1}^{N}\left|v_{n}^{r}\right|_{2}\right)^{k-3}\|v_{j}^{r}\|_{2} \prod_{m=k-1}^{2}\|O_{m}\|_{\infty}.\] (S37) By combining Eqs. (S36) and (S37), we arrive at the following inequality: \[\sum_{i,j\geq 2}\left|\left\langle\alpha^{M}\right|\prod_{m=M-1}^{k}e^{W(t_{ m+1}-t_{m})}O_{m}\big{|}v_{j}^{r}\right\rangle\left\langle v_{j}^{r}\right| \prod_{m=k-1}^{2}O_{m}e^{W^{\dagger}(t_{m}-t_{m-1})}\big{|}\circ^{1}\right\rangle \leq\|\circ^{1}\|_{*}\|\circ^{M}\|_{*}\prod_{m=2}^{M-1}\|O_{m}\|_{\infty}\left( \sum_{n=1}^{N}\|v_{n}^{r}\|_{2}\right)^{M-2}.\] (S38) Noting that \(|\left\langle v_{i}^{l}|J|v_{j}^{l}\right\rangle|\leq\sum_{m,n}|j_{mn}|\leq \sigma\Phi\big{(}\sigma/2\gamma\big{)}^{-1}\) and using Eqs. (S35) and (S38), we obtain \[|\delta C_{\mathbf{o}}^{\mathbf{\tau}}|\leq\|\circ^{1}\|_{*}\|_{*}\|\circ^{M}\|_{*} \prod_{m=2}^{M-1}\|O_{m}\|_{\infty}\left(\sum_{n=1}^{N}\|v_{n}^{r}\|_{2}\right) ^{M-2}\sum_{k=2}^{M}(t_{k}-t_{k-1})e^{-g(t_{k}-t_{k-1})}\sigma\Phi\big{(} \sigma/2\gamma\big{)}^{-1}.\] (S39) Defining \(\chi_{\mathbf{o}}\coloneqq\|\circ^{1}\|_{*}\|\circ^{M}\|_{*}\prod_{m=2}^{M-1}\|O_ {m}\|_{\infty}\left(\sum_{n=1}^{N}\left|v_{n}^{r}\right|_{2}\right)^{M-2}\), the generalization of Eq. (7) can be attained as \[\frac{|\delta C_{\mathbf{o}}^{\mathbf{\tau}}|}{\chi_{\mathbf{o}}}\leq\sigma\Phi\Big{(}\frac {\sigma}{2\gamma}\Big{)}^{-1}\sum_{k=2}^{M}(t_{k}-t_{k-1})e^{-g(t_{k}-t_{k-1})}.\] (S40) ## S2 Proofs of results in the main text ### Proof of Eq. (11) For convenience, we define \(w_{mn}^{k}=|\left\langle m|L_{k}|n\right\rangle|^{2}\) and \(j_{mn}^{k}=w_{mn}^{k}\pi_{n}-w_{nm}^{k^{\prime}}\pi_{m}\). 
Using these terms, the rates of entropy production and dynamical activity can be expressed as [1] \[\sigma =\frac{1}{2}\sum_{k}\sum_{n,m}j_{mn}^{k}\ln\frac{w_{mn}^{k}\pi_{n }}{w_{nm}^{k^{\prime}}\pi_{m}},\] (S41) \[\gamma =\frac{1}{2}\sum_{k}\sum_{n,m}(w_{mn}^{k}\pi_{n}+w_{nm}^{k^{\prime }}\pi_{m}).\] (S42) Furthermore, it was proved that [1] \[\sum_{k}\sum_{n,m}|j_{mn}^{k}|\leq\sigma\Phi\Big{(}\frac{\sigma}{2\gamma} \Big{)}^{-1},\] (S43) which will be used later. On the other hand, it can be shown that the relation \(\left\langle X,\mathcal{L}(Y)\right\rangle=\left\langle\tilde{\mathcal{L}}(X),Y\right\rangle\) holds for any operators \(X\) and \(Y\), where \(\left\langle X,Y\right\rangle=\operatorname{tr}\!\left\{X^{\dagger}Y\right\}\). As a consequence, we can prove that \[\left\langle X,e^{\mathcal{L}t}(Y)\right\rangle=\left\langle e^{\tilde{ \mathcal{L}}t}(X),Y\right\rangle\] (S44) for any \(t\geq 0\). Noting that \(A\) and \(B\) are self-adjoint operators, we can calculate the asymmetry as follows: \[\delta C^{\tau}_{ba} =\left\langle B,e^{\mathcal{L}\tau}(A\pi)\right\rangle-\left\langle A,e^{\mathcal{L}\tau}(B\pi)\right\rangle\] \[=\left\langle B,e^{\mathcal{L}\tau}(A\pi)\right\rangle-\left\langle e ^{\tilde{\mathcal{L}}\tau}(A),B\pi\right\rangle\] \[=\left\langle B,e^{\mathcal{L}\tau}(A\pi)\right\rangle-\left\langle B,\pi e^{\tilde{\mathcal{L}}\tau}(A)\right\rangle\] \[=\left\langle B,e^{\mathcal{L}\tau}(A\pi)-\pi e^{\tilde{ \mathcal{L}}\tau}(A)\right\rangle.\] (S45) Since \([A,\pi]=0\), the following equality holds: \[e^{\mathcal{L}\tau}(A\pi)-\pi e^{\tilde{\mathcal{L}}\tau}(A) =\int_{0}^{1}ds\,\frac{d}{ds}\big{[}e^{s\mathcal{L}\tau}(\pi e^{ (1-s)\tilde{\mathcal{L}}\tau}(A))\big{]}\] \[=\tau\int_{0}^{1}ds\,e^{s\mathcal{L}\tau}\Big{[}\mathcal{L}(\pi e ^{(1-s)\tilde{\mathcal{L}}\tau}(A))-\pi\tilde{\mathcal{L}}(e^{(1-s)\tilde{ \mathcal{L}}\tau}(A))\Big{]}.\] (S46) Using this equality, we can proceed further as \[\delta C^{\tau}_{ba} =\tau\int_{0}^{1}ds\left\langle B,e^{s\mathcal{L}\tau}\Big{[} \mathcal{L}(\pi e^{(1-s)\tilde{\mathcal{L}}\tau}(A))-\pi\tilde{\mathcal{L}}(e^ {(1-s)\tilde{\mathcal{L}}\tau}(A))\Big{]}\right\rangle\] \[=\tau\int_{0}^{1}ds\left\langle e^{s\tilde{\mathcal{L}}\tau}(B), \mathcal{L}(\pi e^{(1-s)\tilde{\mathcal{L}}\tau}(A))-\pi\tilde{\mathcal{L}}(e^ {(1-s)\tilde{\mathcal{L}}\tau}(A))\right\rangle\] \[=\tau\int_{0}^{1}ds\left\langle B_{s\tau},\mathcal{L}(\pi A_{(1-s )\tau})-\pi\tilde{\mathcal{L}}(A_{(1-s)\tau})\right\rangle.\] (S47) For any operator \(X\), we can express \(X\) in terms of \(\{V_{n}\}\) as \(X=\sum_{n}z_{n}^{x}V_{n}\), where \(\{z_{n}^{x}\}\) are complex coefficients. Note that \(V_{1}=\mathbb{1}\) and \(X_{t}=\sum_{n}z_{n}^{x}e^{\lambda_{n}t}V_{n}\). Define \(\bar{X}_{t}\coloneqq X_{t}-z_{1}^{x}\mathbb{1}=\sum_{n\geq 2}z_{n}^{x}e^{ \lambda_{n}t}V_{n}\). 
Since \(\big{\{}\mathbb{1},\mathcal{L}(\pi A_{t})-\pi\tilde{\mathcal{L}}(A_{t}) \big{\}}=\big{\{}B_{t},\mathcal{L}(\pi\mathbb{1})-\pi\tilde{\mathcal{L}}( \mathbb{1})\big{\}}=0\), the asymmetry of cross-correlations can be written as \[\delta C^{\tau}_{ba}=\tau\int_{0}^{1}ds\left\langle\bar{B}_{s\tau},\mathcal{L }(\pi\bar{A}_{(1-s)\tau})-\pi\tilde{\mathcal{L}}(\bar{A}_{(1-s)\tau})\right\rangle.\] (S48) The term inside the integral can be expressed as \[\left\langle\bar{B}_{s\tau},\mathcal{L}(\pi\bar{A}_{(1-s)\tau})- \pi\tilde{\mathcal{L}}(\bar{A}_{(1-s)\tau})\right\rangle\] \[=\left\langle\bar{B}_{s\tau},-i[H,\pi\bar{A}_{(1-s)\tau}]-i\pi[H, \bar{A}_{(1-s)\tau}]\right\rangle\] \[+\sum_{k}\left\langle\bar{B}_{s\tau},L_{k}\pi\bar{A}_{(1-s)\tau} L_{k}^{\dagger}-\pi L_{k}^{\dagger}\bar{A}_{(1-s)\tau}L_{k}+(\pi L_{k}^{\dagger}L_{k}A_ {(1-s)\tau}-L_{k}^{\dagger}L_{k}\pi\bar{A}_{(1-s)\tau})/2\right\rangle.\] (S49) We individually evaluate the terms in Eq. (S49). The first term can be upper bounded as follows: \[\big{|}\big{\langle}\bar{B}_{s\tau},-i[H,\pi\bar{A}_{(1-s)\tau}]-i \pi[H,\bar{A}_{(1-s)\tau}]\big{\rangle}\big{|}\] \[\leq\big{|}\big{\langle}\bar{B}_{s\tau},[H,\pi\bar{A}_{(1-s)\tau} ]\big{\rangle}\big{|}+\big{|}\big{\langle}\bar{B}_{s\tau},\pi[H,\bar{A}_{(1-s )\tau}]\big{\rangle}\big{|}\] \[=\Big{|}\sum_{n}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}(\bar{B }_{s\tau}H-H\bar{B}_{s\tau})|n\right\rangle\Big{|}+\Big{|}\sum_{n}\pi_{n} \left\langle n|(H\bar{A}_{(1-s)\tau}-\bar{A}_{(1-s)\tau}H)\bar{B}_{s\tau}|n \right\rangle\Big{|}.\] (S50) Moreover, the first term in Eq. (S50) can be bounded as follows: \[\Big{|}\sum_{n}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}(\bar{B }_{s\tau}H-H\bar{B}_{s\tau})|n\right\rangle\Big{|}\] \[\leq\Big{|}\sum_{n,m(\equiv n)}\pi_{n}\left\langle n|\bar{A}_{(1-s )\tau}|m\right\rangle\left\langle m|\bar{B}_{s\tau}H-H\bar{B}_{s\tau}|n\right\rangle \Big{|}+\Big{|}\sum_{n}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}|n\right\rangle \left\langle n|\bar{B}_{s\tau}H-H\bar{B}_{s\tau}|n\right\rangle\Big{|}\] \[\leq\|\bar{B}_{s\tau}H-H\bar{B}_{s\tau}\|_{\infty}\sum_{n,m(\equiv n )}\pi_{n}|\left\langle n|\bar{A}_{(1-s)\tau}|m\right\rangle|+\Big{|}\sum_{n,m( \equiv n)}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}|n\right\rangle\left\langle n |\bar{B}_{s\tau}|m\right\rangle\left\langle m|H|n\right\rangle-\left\langle n |H|m\right\rangle\left\langle m|\bar{B}_{s\tau}|n\right\rangle\Big{|}\] \[\leq 2\|\bar{B}_{s\tau}\|_{\infty}\|H\|_{\infty}\sum_{n\equiv m}| \left\langle n|\bar{A}_{(1-s)\tau}|m\right\rangle|+2|\bar{A}_{(1-s)\tau}\|_{ \infty}\|H\|_{\infty}\sum_{n\equiv m}|\left\langle n|\bar{B}_{s\tau}|m\right\rangle|\] \[\leq 2e^{-g\tau}\|H\|_{\infty}\big{[}\|B\|_{\infty}C_{\ell_{1}}(A_{(1-s )\tau})+\|A\|_{\ast}C_{\ell_{1}}(B_{s\tau})\big{]}.\] (S51) Here, we use the facts that \[\sum_{n\neq m}|\bra{n}\bar{X}_{t}|m\rangle\,|=\sum_{nm}|\bra{n}X_{t}- z_{1}^{x}\mathbbm{1}|m\rangle\,|=\sum_{nm}|\bra{n}X_{t}|m\rangle\,|=e^{-gt}C_{\ell_{1}}(X_{t}),\] (S52) \[\|\bar{X}_{t}\|_{\infty}=\|\sum_{n\geq 2}z_{n}^{x}e^{\lambda_{n}t}V_{ n}\|_{\infty}\leq\sum_{n\geq 2}|z_{n}^{x}\|e^{\lambda_{n}t}\|V_{n}\|_{\infty} \leq e^{-gt}\sum_{n\geq 2}|z_{n}^{x}|=e^{-gt}\|X\|_{*}.\] (S53) It is worth noting that \(C_{\ell_{1}}(X_{t})\) is upper bounded by a constant for all times: \[C_{\ell_{1}}(X_{t})=\sum_{nm}|\bra{n}\sum_{k\geq 2}z_{k}^{x}e^{(\lambda_{k}s) t}V_{k}|m\rangle\,|\leq\sum_{nm}\sum_{k\geq 2}|z_{k}^{x}\|V_{k}\|_{\infty}=N(N-1) \|X\|_{*}.\] (S54) Therefore, the last quantity in Eq. 
(S51) always decays exponentially at the rate of the spectral gap \(g\). Likewise, we also obtain the following bound for the second term in Eq. (S50): \[\Big{|}\sum_{n}\pi_{n}\left\langle n|(H\bar{A}_{(1-s)\tau}-\bar{A}_{(1-s)\tau} H)\bar{B}_{s\tau}|n\rangle\,\right|\leq 2e^{-g\tau}\|H\|_{\infty}\big{[}\|B\|_{*}C_{ \ell_{1}}(A_{(1-s)\tau})+\|A\|_{*}C_{\ell_{1}}(B_{s\tau})\big{]}.\] (S55) Consequently, by combining Eqs. (S50), (S51), and (S55), we arrive at the following upper bound of the first term in Eq. (S49): \[\big{|}\big{\langle}\bar{B}_{s\tau},-i[H,\pi\bar{A}_{(1-s)\tau}]-i \pi[H,\bar{A}_{(1-s)\tau}]\big{\rangle}\big{|}\leq 4e^{-g\tau}\|H\|_{\infty} \big{[}\|B\|_{*}C_{\ell_{1}}(A_{(1-s)\tau})+\|A\|_{*}C_{\ell_{1}}(B_{s\tau}) \big{]}.\] (S56) Now, it remains to evaluate the second term in Eq. (S49), which can be calculated as follows: \[\sum_{k}\big{\langle}\bar{B}_{s\tau},L_{k}\pi\bar{A}_{(1-s)\tau}L _{k}^{\dagger}-\pi L_{k}^{\dagger}\bar{A}_{(1-s)\tau}L_{k}+(\pi L_{k}^{\dagger }L_{k}A_{(1-s)\tau}-L_{k}^{\dagger}L_{k}\pi\bar{A}_{(1-s)\tau})/2\big{\rangle}\] \[=\sum_{k}\sum_{n}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}L_{k}^{ \dagger}\bar{B}_{s\tau}L_{k}-L_{k}^{\dagger}\bar{A}_{(1-s)\tau}L_{k}\bar{B}_{ s\tau}+(L_{k}^{\dagger}L_{k}\bar{A}_{(1-s)\tau}\bar{B}_{s\tau}-\bar{A}_{(1-s) \tau}\bar{B}_{s\tau}L_{k}^{\dagger}L_{k})/2|n\rangle\right.\] \[=\sum_{k}\sum_{n,m,m^{\prime}}\pi_{n}\big{(}\bra{n}\bar{A}_{(1-s) \tau}|n^{\prime}\rangle\,\langle n^{\prime}|L_{k}^{\dagger}|m^{\prime}\rangle \,\langle m^{\prime}|\bar{B}_{s\tau}|m\rangle\,\langle m|L_{k}|n\rangle- \langle n|L_{k}^{\dagger}|m\rangle\,\langle m^{\prime}|L_{k}|n^{\prime}\rangle \,\langle n^{\prime}|\bar{B}_{s\tau}|n\rangle\big{)}\] \[+\sum_{k}\sum_{n,m,m^{\prime}}\pi_{n}\big{(}\bra{n}L_{k}^{\dagger}| m\rangle\,\langle m|L_{k}|n^{\prime}\rangle\,\langle n^{\prime}|\bar{A}_{(1-s) \tau}\bar{B}_{s\tau}|n\rangle-\langle n|\bar{A}_{(1-s)\tau}\bar{B}_{s\tau}|n^ {\prime}\rangle\,\langle n^{\prime}|L_{k}^{\dagger}|m\rangle\,\langle m|L_{k}| n\rangle\big{)}/2.\] (S57) Collecting the terms that contain only the diagonal elements of \(\bar{A}_{(1-s)\tau}\) and \(\bar{B}_{s\tau}\), we can upper bound them as \[\Big{|}\sum_{k}\sum_{n,m}\pi_{n}\big{(}\bra{n}\bar{A}_{(1-s)\tau} |n\rangle\,\langle n|L_{k}^{\dagger}|m\rangle\,\langle m|\bar{B}_{s\tau}|m \rangle\,\langle m|L_{k}|n\rangle-\langle n|L_{k}^{\dagger}|m\rangle\,\langle m |\bar{A}_{(1-s)\tau}|m\rangle\,\langle m|L_{k}|n\rangle\,\langle n|\bar{B}_{s \tau}|n\rangle\big{)}\] \[+\sum_{k}\sum_{n,m}\pi_{n}\big{(}\bra{n}\!L_{k}^{\dagger}|m\rangle \,\langle m|L_{k}|n\rangle\,\langle n|\bar{A}_{(1-s)\tau}\bar{B}_{s\tau}|n \rangle-\langle n|\bar{A}_{(1-s)\tau}\bar{B}_{s\tau}|n\rangle\,\langle n|L_{k}^ {\dagger}|m\rangle\,\langle m|L_{k}|n\rangle\big{)}/2\Big{|}\] \[=\Big{|}\sum_{k}\sum_{n,m}\pi_{n}\big{(}\bra{n}\!\bar{A}_{(1-s) \tau}|n\rangle\,\langle m|\bar{B}_{s\tau}|m\rangle\,w_{mn}^{k}-\langle m|\bar{A}_ {(1-s)\tau}|m\rangle\,\langle n|\bar{B}_{s\tau}|n\,w_{mn}^{k}\big{)}\Big{|}\] \[=\Big{|}\sum_{k}\sum_{n,m}\left\langle n|\bar{A}_{(1-s)\tau}|n \rangle\,\langle m|\bar{B}_{s\tau}|m\rangle\,\Big{(}w_{mn}^{k}\pi_{n}-w_{nm}^{k \prime}\pi_{m}\Big{)}\right\|\] \[\leq\max_{m,n}|\langle n|\bar{A}_{(1-s)\tau}|n\rangle\,\langle m| \bar{B}_{s\tau}|m\rangle\,|\sum_{k}\sum_{n,m}|w_{mn}^{k}\pi_{n}-w_{nm}^{k \prime}\pi_{m}|\] \[\leq\|\bar{A}_{(1-s)\tau}\|_{\infty}\|\bar{B}_{s\tau}\|_{\infty}\sum_ {k}\sum_{m,n}|j_{mn}^{k}|\] \[\leq e^{-g\tau}\|A\|_{*}\|B\|_{*}\sigma\Phi(\sigma/2\gamma)^{-1}.\] (S58) For the 
terms that involve the non-diagonal elements of \(\bar{A}_{(1-s)\tau}\) and \(\bar{B}_{s\tau}\), we can evaluate them as follows: \[\Big{|}\sum_{k,n,m(nn)}\pi_{n}\left\langle n|\bar{A}_{(1-s)\tau}|m \rangle\,\langle m|L_{k}^{\dagger}\bar{B}_{s\tau}L_{k}|n\rangle\right| \leq\|\bar{B}_{s\tau}\|_{\infty}\sum_{k}\|L_{k}\|_{\infty}^{2} \sum_{n\neq m}\pi_{n}|\langle n|\bar{A}_{(1-s)\tau}|m\rangle\,|\] \[\leq e^{-g\tau}\|B\|_{*}\sum_{k}\|L_{k}\|_{\infty}^{2}C_{\ell_{1}}(A_ {(1-s)\tau}),\] (S59) \[\leq e^{-gr}\|A\|_{*}\sum_{k}\|L_{k}\|_{\infty}^{2}C_{\ell_{1}}(B_{s \tau}),\] (S60) \[\Big{|}\sum_{k,n,m,m^{\prime}(\neq m)}\pi_{n}\left\langle n|L_{k}^{ \dagger}|m^{\prime}\right\rangle\left\langle m^{\prime}|\bar{A}_{(1-s)\tau}|m \right\rangle\left\langle m|L_{k}|n\right\rangle\left\langle n|\bar{B}_{s\tau} |n\right\rangle\Big{|} \leq\|\bar{B}_{s\tau}\|_{\infty}\sum_{k}\|L_{k}\|_{\infty}^{2}\sum_ {n\neq m}|\langle n|\bar{A}_{(1-s)\tau}|m\rangle|\] \[\leq e^{-gr}\|B\|_{*}\sum_{k}\|L_{k}\|_{\infty}^{2}C_{\ell_{1}}(A_ {(1-s)\tau}),\] (S62) and \[\Big{|}\sum_{k,n,m(\neq n)}\pi_{n}\left\langle n|\bar{A}_{(1-s) \tau}\bar{B}_{s\tau}|m\right\rangle\left\langle m|L_{k}^{\dagger}L_{k}|n \right\rangle\Big{|}\] \[\leq\sum_{k}\|L_{k}\|_{\infty}^{2}\sum_{n\neq m}\pi_{n}|\langle n |\bar{A}_{(1-s)\tau}\bar{B}_{s\tau}|m\rangle|\] \[\leq\sum_{k}\|L_{k}\|_{\infty}^{2}\sum_{n\neq m}\left[\pi_{n}| \langle n|\bar{A}_{(1-s)\tau}|m\rangle\left\langle m|\bar{B}_{s\tau}|m\right \rangle|+\sum_{m^{\prime}(\neq m)}\pi_{n}|\langle n|\bar{A}_{(1-s)\tau}|m^{ \prime}\rangle\left\langle m^{\prime}|\bar{B}_{s\tau}|m\right\rangle|\right]\] \[\leq e^{-gr}\sum_{k}\|L_{k}\|_{\infty}^{2}\big{[}\|B\|_{*}C_{ \ell_{1}}(A_{(1-s)\tau})+\|A\|_{*}C_{\ell_{1}}(B_{s\tau})\big{]}.\] (S63) By combining all these inequalities, the following upper bound for the second term in Eq. (S49) is immediately derived: \[\Big{|}\sum_{k}\big{\langle}\bar{B}_{s\tau},L_{k}\pi\bar{A}_{(1-s )\tau}L_{k}^{\dagger}-\pi L_{k}^{\dagger}\bar{A}_{(1-s)\tau}L_{k}+(\pi L_{k}^ {\dagger}L_{k}A_{(1-s)\tau}-L_{k}^{\dagger}L_{k}\pi\bar{A}_{(1-s)\tau})/2 \big{\rangle}\Big{|}\] \[\leq 3e^{-gr}\sum_{k}\|L_{k}\|_{\infty}^{2}\big{[}\|B\|_{*}C_{ \ell_{1}}(A_{(1-s)\tau})+\|A\|_{*}C_{\ell_{1}}(B_{s\tau})\big{]}+e^{-gr}\|A \|_{*}\|B\|_{*}\sigma\Phi(\sigma/2\gamma)^{-1}.\] (S64) Finally, by inserting Eqs. (S56) and (S64) to Eq. (S48), we obtain the following bound on the asymmetry of cross-correlations: \[|\delta C_{ba}^{\tau}|\leq\tau e^{-gr}\Bigg{[}\mathcal{C}+\|A\|_{*}\|B\|_{*} \sigma\Phi\bigg{(}\frac{\sigma}{2\gamma}\bigg{)}^{-1}\Bigg{]},\] (S65) where \(\mathcal{C}\) is the quantum coherence term given by \[\mathcal{C}\coloneqq(4\|H\|_{\infty}+3\sum_{k}\|L_{k}\|_{\infty}^{2})\int_{0} ^{1}ds\big{[}\|B\|_{*}\hat{C}_{\ell_{1}}(A_{(1-s)\tau})+\|A\|_{*}\hat{C}_{\ell _{1}}(B_{s\tau})\big{]}.\] (S66) #### s.2.1 Analytical demonstration of the critical role of quantum coherence Here we show that quantum coherence plays a pivotal role in constraining the asymmetry of cross-correlations. Specifically, we present a case wherein the asymmetry of cross-correlations can persist even when irreversible entropy production is zero. This implies that the quantum coherence term \(\mathcal{C}\) is inevitable in the derived bound (11). We consider a three-level maser--the prototype for quantum heat engines that rely on quantum coherence to perform work [2; 3; 4]. The engine is simultaneously coupled to a hot and a cold heat bath and interacts with a classical electric field. 
The Markovian dynamics is governed by the local master equation with the Hamiltonian \(H_{t}=H_{0}+V_{t}\) and jump operators \(L_{1}=\sqrt{\alpha_{h}(N_{h}+1)}\sigma_{13}\), \(L_{1^{\prime}}=\sqrt{\alpha_{h}N_{h}}\sigma_{31}\), \(L_{2}=\sqrt{\alpha_{c}(N_{c}+1)}\sigma_{23}\), and \(L_{2^{\prime}}=\sqrt{\alpha_{c}N_{c}}\sigma_{32}\). Here, \(H_{0}=\omega_{1}\sigma_{11}+\omega_{2}\sigma_{22}+\omega_{3}\sigma_{33}\) is the bare Hamiltonian, \(V_{t}=\Omega\big{(}e^{i\omega_{0}t}\sigma_{12}+e^{-i\omega_{0}t}\sigma_{21} \big{)}\) is the external classical field, \(\sigma_{ij}=|\epsilon_{i}\rangle\!\langle\epsilon_{j}|\), and \(\alpha_{x}\) and \(N_{x}\) are the decay rate and the thermal occupation number for \(x\in\{h,c\}\), respectively. To remove the time dependence of the full Hamiltonian, it is convenient to rewrite operators in the rotating frame \(X\to U_{t}^{\dagger}XU_{t}\), where \(U_{t}=e^{-i\bar{H}t}\) and \(\bar{H}=\omega_{1}\sigma_{11}+(\omega_{1}+\omega_{0})\sigma_{22}+\omega_{3} \sigma_{33}\). In this rotating frame, the master equation reads \[\dot{\varrho}_{t}=-i[H,\varrho_{t}]+\sum_{k=1}^{2}\big{(}\mathcal{D}[L_{k}] \varrho_{t}+\mathcal{D}[L_{k^{\prime}}]\varrho_{t}\big{)},\] (S67) where \(H=-\Delta\sigma_{22}+\Omega(\sigma_{12}+\sigma_{21})\) and \(\Delta=\omega_{0}+\omega_{1}-\omega_{2}\). It was shown that the master equation (S67) is valid when the driving field is weak [5]. After some algebraic calculations, we can show that the steady-state density matrix reads \[\pi=\pi_{11}\ket{\epsilon_{1}}\!\!\bra{\epsilon_{1}}+\pi_{22}\ket{\epsilon_{2 }}\!\!\bra{\epsilon_{2}}+\pi_{12}\ket{\epsilon_{1}}\!\!\bra{\epsilon_{2}}+ \pi_{12}^{*}\ket{\epsilon_{2}}\!\!\bra{\epsilon_{1}}+(1-\pi_{11}-\pi_{22}) \ket{\epsilon_{3}}\!\!\bra{\epsilon_{3}},\] (S68) where \[\pi_{11} =\mathcal{F}^{-1}\big{\{}\alpha_{c}\alpha_{h}N_{c}(N_{h}+1)[4 \Delta^{2}+(\alpha_{c}N_{c}+\alpha_{h}N_{h})^{2}]+4\Omega^{2}\big{(}\alpha_{c} N_{c}+\alpha_{h}N_{h}\big{)}\big{(}\alpha_{c}+\alpha_{h}+\alpha_{c}N_{c}+ \alpha_{h}N_{h}\big{)}\big{\}},\] (S69) \[\pi_{22} =\mathcal{F}^{-1}\big{\{}\alpha_{c}\alpha_{h}N_{h}(N_{c}+1)[4 \Delta^{2}+(\alpha_{c}N_{c}+\alpha_{h}N_{h})^{2}]+4\Omega^{2}\big{(}\alpha_{c} N_{c}+\alpha_{h}N_{h}\big{)}\big{(}\alpha_{c}+\alpha_{h}+\alpha_{c}N_{c}+ \alpha_{h}N_{h}\big{)}\big{\}},\] (S70) \[\pi_{12} =\mathcal{F}^{-1}\big{\{}2i\alpha_{c}\alpha_{h}\Omega(N_{c}-N_{h} )(-2i\Delta+\alpha_{c}N_{c}+\alpha_{h}N_{h})\big{\}},\] (S71) \[\mathcal{F} =\alpha_{c}\alpha_{h}(3N_{c}N_{h}+N_{c}+N_{h})[4\Delta^{2}+( \alpha_{c}N_{c}+\alpha_{h}N_{h})^{2}]+4\Omega^{2}(\alpha_{c}N_{c}+\alpha_{h}N_ {h})\big{[}\alpha_{c}(3N_{c}+2)+\alpha_{h}(3N_{h}+2)\big{]}.\] (S72) Likewise, the irreversible entropy production rate is given by \[\sigma=4(N_{c}-N_{h})\alpha_{c}\alpha_{h}\big{(}\alpha_{c}N_{c}+\alpha_{h}N_{ h})\Omega^{2}\ln\left(\frac{N_{c}N_{h}+N_{c}}{N_{c}N_{h}+N_{h}}\right).\] (S73) Now we consider the simple case of \(N_{h}=N_{c}\). In this case, we have \(\sigma=0\), \(\pi_{11}=\pi_{22}\), and \(\pi_{12}=0\), which yield \[\pi=\pi_{11}(\ket{\epsilon_{1}}\!\!\bra{\epsilon_{1}}+\ket{\epsilon_{2}}\!\! \bra{\epsilon_{2}}+(1-2\pi_{11})\ket{\epsilon_{3}}\!\!\bra{\epsilon_{3}}.\] (S74) For any real numbers \(\theta\) and \(\phi\), the density matrix \(\pi\) can also be written as \[\pi=\pi_{11}(\ket{1}\!\!\bra{1}+\ket{2}\!\!\bra{2})+(1-2\pi_{11})\ket{3}\!\! 
\bra{3},\] (S75) where \(\ket{1}=e^{i\phi}\cos(\theta)\ket{\epsilon_{1}}+\sin(\theta)\ket{\epsilon_{2}}\), \(\ket{2}=-\sin(\theta)\ket{\epsilon_{1}}+e^{-i\phi}\cos(\theta)\ket{\epsilon_{ 2}}\), and \(\ket{3}=\ket{\epsilon_{3}}\). We consider observables \(A=\ket{1}\!\!\bra{1}\) and \(B=\ket{3}\!\!\bra{3}\). We need only show that the asymmetry of cross-correlations is nonzero for this measurement basis. It is thus sufficient to prove for the short-time regime \(\tau\ll 1\). In this region of operational time \(\tau\), the asymmetry of cross-correlations can be analytically expanded in terms of \(\tau\) as \[\delta C_{ba}^{\tau} =\text{tr}\big{\{}B[\mathbb{1}+\tau\mathcal{L}+\tau^{2}\mathcal{L }^{2}/2](A\pi)-B\pi[\mathbb{1}+\tau\tilde{\mathcal{L}}+\tau^{2}\tilde{ \mathcal{L}}^{2}/2](A)\big{\}}+O(\tau^{3})\] \[=\text{tr}\big{\{}B[\mathcal{L}(A\pi)-\pi\tilde{\mathcal{L}}(A)] \big{\}}\tau+\text{tr}\big{\{}B[\mathcal{L}^{2}(A\pi)-\pi\tilde{\mathcal{L}}^{ 2}(A)]\big{\}}\frac{\tau^{2}}{2}+O(\tau^{3})\] \[=\frac{N_{h}(N_{h}+1)\Omega(\alpha_{c}-\alpha_{h})\sin(2\theta) \sin(\phi)}{3N_{h}+2}\tau^{2}+O(\tau^{3}).\] (S76) As can be observed, although the first-order term is zero, the second-order term does not vanish and \(\delta C_{ba}^{\tau}\) is thus nonzero (i.e., \(|\delta C_{ba}^{\tau}|>0\)). Since \(\sigma=0\), the quantum coherence term \(\mathcal{C}\) is the only term that bounds \(\delta C_{ba}^{\tau}\) in this case: \[|\delta C_{ba}^{\tau}|\leq\tau e^{-gr}\mathcal{C}.\] (S77) ### Proof of Eq. (13) Define \(T\coloneqq e^{W\tau}\Pi-\Pi\), which satisfies \(T\ket{1}=\ket{0}\) and \(T^{\dagger}\ket{1}=\ket{0}\). In other words, \(T_{nn}=-\sum_{m(\pi n)}T_{mn}=-\sum_{m(\pi n)}T_{nm}=-\sum_{m(\pi n)}(T_{mn}+T_{ nm})/2\). For \(m\neq n\), we have \(T_{mn}=[e^{T\tau}]_{mn}\pi_{n}\geq 0\), which is nothing but the joint probability of observing the initial and final states. First, we prove that \[\tau\sigma\geq D\big{(}T\|T^{\dagger}\big{)}=\sum_{m,n}T_{mn}\ln\frac{T_{mn}}{T _{nm}}.\] (S78) To this end, let \(\Gamma\) be a stochastic trajectory of system states and \(p(\Gamma)\) be the path probability of finding \(\Gamma\). Utilizing the phase-space representation of the total entropy production and the monotonicity of KL divergence under coarse-graining, Eq. (S78) can be proved as \[\tau\sigma=D\big{(}p(\Gamma)\|p(\tilde{\Gamma})\big{)} \geq D\big{(}\Lambda[p(\Gamma)]\|\Lambda[p(\tilde{\Gamma})]\big{)}\] \[=D\big{(}T\|T^{\dagger}\big{)},\] (S79) where \(\Lambda\) is a coarse-grained map that reduces the path probability to the probability of observing only the initial and final states. For convenience, we define \(\ell_{mn}=\sqrt{(a_{m}-a_{n})^{2}+(b_{m}-b_{n})^{2}}\), \(\Omega_{mn}=(a_{n}b_{m}-a_{m}b_{n})/2\), and \(J_{mn}=T_{mn}-T_{nm}\). Note that \(J_{mn}\) differs from \(j_{mn}\). Using these quantities, we can express the asymmetry of cross-correlations as \[\delta C_{ba}^{\tau}=\left\langle b|T-T^{\dagger}|a\right\rangle =\sum_{m\gamma n}(T_{mn}-T_{nm})(a_{n}b_{m}-a_{m}b_{n})\] \[=2\sum_{m\gamma n}J_{mn}\Omega_{mn}.\] (S80) Likewise, we can calculate the decay of auto-correlation as \[D_{a}^{\tau}=-\left\langle a|T|a\right\rangle =\sum_{m\neq n}\left(\frac{T_{mn}+T_{nm}}{2}a_{n}^{2}-T_{mn}a_{m} a_{n}\right)\] \[=\frac{1}{2}\sum_{m>n}(T_{mn}+T_{nm})(a_{m}-a_{n})^{2},\] (S81) which leads to \[D_{a}^{\tau}+D_{b}^{\tau}=\frac{1}{2}\sum_{m>n}(T_{mn}+T_{nm})\ell_{mn}^{2}.\] (S82) Using Eqs. (S80) and (S82), we can prove the first argument of Eq. 
(13) as follows: \[\frac{|\delta C_{ba}^{\tau}|^{2}}{D_{a}^{\tau}+D_{b}^{\tau}} =\frac{8(\sum_{m>n}J_{mn}\Omega_{mn})^{2}}{\sum_{m>n}(T_{mn}+T_{nm})\ell_{mn}^{2}}\] \[\leq\frac{2\|a^{2}+b^{2}\|_{\infty}(\sum_{m>n}|J_{mn}|\ell_{mn})^{2}}{\sum_{m>n}(T_{mn}+T_{nm})\ell_{mn}^{2}}\] \[\leq 2\|a^{2}+b^{2}\|_{\infty}\sum_{m>n}\frac{(T_{mn}-T_{nm})^{2}}{T_{mn}+T_{nm}}\] \[\leq\|a^{2}+b^{2}\|_{\infty}\sum_{m>n}(T_{mn}-T_{nm})\ln\frac{T_{mn}}{T_{nm}}\] \[=\|a^{2}+b^{2}\|_{\infty}D(T\|T^{\dagger})\] \[\leq\|a^{2}+b^{2}\|_{\infty}\tau\sigma.\] (S83) Here, we use the triangle inequality and \(|\Omega_{mn}|\leq\ell_{mn}\|a^{2}+b^{2}\|_{\infty}^{1/2}/2\) [6] in the second line, the Cauchy-Schwarz inequality in the third line, inequality \(2(x-y)^{2}/(x+y)\leq(x-y)\ln(x/y)\) in the fourth line, and Eq. (S78) in the last line. The second argument of Eq. (13) can be proved by a similar strategy as in Ref. [6]. For any directed edge \(e=(m\gets n)\), we define \(x_{e}=x_{mn}\) for an arbitrary variable \(x\), its reversed edge \(\tilde{e}=(n\gets m)\), and \(\mathcal{E}=\left\{e\,|\,J_{e}>0\right\}\). The discrete isoperimetric inequality [7; 8] implies that \[4|c|\tan\frac{\pi}{|c|}|\Omega_{c}|\leq\ell_{c}^{2},\] (S84) where we define \(X_{c}=\sum_{e\in c}X_{e}\) for variable \(X\) and cycle \(c\in\mathcal{C}\). Since \(\sum_{m}J_{mn}=0\), we can always find a uniform decomposition of cycles \(\mathcal{C}\) with appropriate orientations and associated positive currents \(\{J^{c}\}_{c\in\mathcal{C}}\) such that \(J_{e}=\sum_{c}J^{c}S_{e}^{c}\) for any \(e\in\mathcal{E}\), where \(S_{e}^{c}=1\) if \(e\in c\) and zero otherwise [9]. Using this decomposition, equality \(\sum_{c\in\mathcal{C}}J^{c}X_{c}=\sum_{e\in\mathcal{E}}J_{e}X_{e}\) can be derived; thus, \(\delta C_{ba}^{\tau}=2\sum_{e\in\mathcal{E}}J_{e}\Omega_{e}=2\sum_{c\in\mathcal{C}}J^{c}\Omega_{c}\). By utilizing this equality, Eq. (S84), and the monotonicity of function \(x\tan(\pi/x)\) over \([3,\infty)\), the asymmetry of cross-correlations can be upper bounded as \[|\delta C_{ba}^{\tau}| \leq\sum_{c\in\mathcal{C}}J^{c}\ell_{c}^{2}\Bigg{(}2|c|\tan\frac{\pi}{|c|}\Bigg{)}^{-1}\] (S85) \[\leq\left(2N\tan\frac{\pi}{N}\right)^{-1}\max_{c}\ell_{c}\sum_{m>n}|J_{mn}|\ell_{mn}.\] (S86) Subsequently, following the same procedure as in Eq. (S83) leads to the desired result. ### Proof of Eq. (14) We follow the approach in Ref. [7]. Note that observables \(a\) and \(b\) can be arbitrarily rescaled without altering the ratio in Eq. (14). The proof concludes with the elementary estimate \[\left|\int_{0}^{1}e^{sz_{1}+(1-s)z_{2}}\,ds\right|\leq\int_{0}^{1}\left|e^{sz_{1}+(1-s)z_{2}}\right|ds\leq\int_{0}^{1}ds=1.\] (S93)
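Although the bounds above are stated in generality, the classical case can be checked directly on a small example. The following sketch is an illustrative check that is not part of the original derivation; the random generator, observables, and all variable names are arbitrary. It builds a four-state Markov generator, evaluates \(\delta C_{ba}^{\tau}\), \(D_{a}^{\tau}+D_{b}^{\tau}\), and the steady-state entropy production rate, and verifies the first bound of Eq. (13) in the form obtained in Eq. (S83).

```python
# Illustrative numerical check (not part of the original derivation) of
# |dC|^2 / (D_a + D_b) <= ||a^2 + b^2||_inf * tau * sigma for a random generator.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, tau = 4, 0.3                                  # number of states, operational time

# Random rate matrix W with positive off-diagonal rates and zero column sums
W = rng.uniform(0.1, 1.0, size=(N, N))
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))

# Stationary distribution pi (null vector of W, normalized to unit sum)
evals, evecs = np.linalg.eig(W)
pi = np.real(evecs[:, np.argmin(np.abs(evals))])
pi = pi / pi.sum()

T = expm(tau * W) @ np.diag(pi) - np.diag(pi)    # T = e^{W tau} Pi - Pi

# Steady-state entropy production rate (Schnakenberg form)
mask = ~np.eye(N, dtype=bool)
flux = W * pi[None, :]                           # flux[m, n] = W_{mn} pi_n
sigma = 0.5 * np.sum((flux - flux.T)[mask] * np.log(flux[mask] / (flux.T)[mask]))

# Observables and the correlation quantities appearing in the proof
a, b = rng.normal(size=N), rng.normal(size=N)
dC = b @ (T - T.T) @ a                           # asymmetry of cross-correlations
D_ab = -(a @ T @ a) - (b @ T @ b)                # D_a^tau + D_b^tau
lhs, rhs = dC**2 / D_ab, np.max(a**2 + b**2) * tau * sigma
print(f"{lhs:.3e} <= {rhs:.3e} : {bool(lhs <= rhs)}")
```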
2304.14513
Double-light-sheet, Consecutive-overlapping Particle Image Velocimetry for the Study of Boundary Layers past Opaque Objects
Investigation of external flows past arbitrary objects requires access to the information in the boundary layer and the inviscid flow to paint a full picture of their characteristics. However, in laser diagnostic techniques such as particle image velocimetry (PIV), limitations like the size of the sample, field of view and magnification of the camera, and the size of the area of interest restrict access to some or part of this information. Here, we present a variation on the two-dimensional, two-component (2D-2C) PIV to access flows past samples larger than the field of view of the camera. We introduce an optical setup to use one laser to create a double-light-sheet illumination to access both sides of a non-transparent sample and employ a Computer Numerically Controlled (CNC) carrier to move the camera in consecutive-overlapping steps to perform the measurements. As a case study, we demonstrate the capability of this approach in the study of the boundary layer over a finite-size slender plate. We discuss how access to micro-scale details of a macro-scale flow can be used to explore the local behavior of the flow in terms of velocity profiles and the shear stress distribution. The boundary layers are not fully captured by the Blasius theory and are affected by a distribution of pressure gradient which in comparison results in regions of more attached or detached profiles. Ultimately, we show that the measurements can also be used to investigate the forces experienced by the body and decompose their effects into different components.
Shuangjiu Fu, Shabnam Raayai-Ardakani
2023-04-27T20:04:13Z
http://arxiv.org/abs/2304.14513v3
Double-light-sheet, Consecutive-overlapping Particle Image Velocimetry for the Study of Boundary Layers past Opaque Objects ###### Abstract Investigation of external flows past arbitrary objects requires access to the information in the boundary layer and the inviscid flow to paint a full picture of their characteristics. However, in laser diagnostic techniques such as particle image velocimetry (PIV), limitations like the size of the sample, field of view and magnification of the camera, and the size of the area of interest restrict access to some or part of this information. Here, we present a variation on the two-dimensional, two-component (2D-2C) PIV to access flows past samples larger than the field of view of the camera. We introduce an optical setup to use one laser to create a double-light-sheet illumination to access both sides of a non-transparent sample and employ a Computer Numerically Controlled (CNC) carrier to move the camera in consecutive-overlapping steps to perform the measurements. As a case study, we demonstrate the capability of this approach in the study of the boundary layer over a finite-size slender plate. We discuss how access to micro-scale details of a macro-scale flow can be used to explore the local behavior of the flow in terms of velocity profiles and the shear stress distribution. The boundary layers are not fully captured by the Blasius theory and are affected by a distribution of pressure gradient which in comparison results in regions of more attached or detached profiles. Ultimately, we show that the measurements can also be used to investigate the forces experienced by the body and decompose their effects into different components. Boundary Layers, Wakes, Laser Diagnostic Techniques ## 1 Introduction External flows past objects make up a substantial portion of flows that are of interest to fluid mechanics researchers and aero-/hydrodynamic applications. Exploration of these flows inherently requires access to both the viscous boundary layer (near-field) and the far-field inviscid flow. Idealized models of these flows assume the bodies of interest are suspended in a sea of fluid without any boundaries, have certain distributions of pressure gradients, or even very specific geometric boundaries. However, in real-life experimental scenarios in wind or water tunnels, flows are usually bound by near or far boundaries, the pressure distribution is not fully under the control of the operator (Liepmann, 1943; Schlichting _et al._, 2014), and samples can come in complex geometric shapes without closed-form mathematical definitions (Vollsinger _et al._, 2005; Pennycuick _et al._, 1996). This has thus resulted in discrepancies between reported measurements (Chauhan _et al._, 2009), especially in studies of boundary layers. Methods such as streamlining and sharpening of the leading and trailing edges (Grek _et al._, 1996), and careful surface adjustments or designs for control of pressure gradients (Liepmann, 1943; Bross _et al._, 2019) have proven to be challenging but promising in recreating some of these cases in laboratory scales. In recent decades, laser diagnostic techniques such as particle image velocimetry (PIV) (Adrian & Westerweel, 2011) have greatly advanced our experimental toolboxes to gain better measurements and understanding of flow fields. 
This knowledge of the flow kinematics added to the measured dynamic responses of the flow via load-cells or balances has been expanding the range of the available data to better assess the applicability of the available models and the need to move toward upgraded theoretical and numerical models of the flows, as well as more suitable closure models for turbulent flows. However, PIV measurement can also be limiting in the extent of information that can be gathered within one experiment. For example, the field of view of the camera and magnification used in the imaging defines the extent of the region of investigation (Michalek _et al._, 2022), and at times can only be limited to a high-resolution view of the boundary layer (Abu Rowin & Ghaemi, 2019), or a lower resolution view of a wide area in the flow incorporating more details of the far-field information (Terra _et al._, 2016). In addition, refractive index-matching is not always a viable option, and non-transparent objects can place portions of the flow in shadows (Kim _et al._, 2015; Nair _et al._, 2023; Du _et al._, 2022). This can restrict studies of asymmetric flows or samples. Even for symmetric samples, it adds additional uncertainties to the measurements if the assumption of symmetry is not fully met in the experimental setup. Here, we propose a variation of the 2D-2C PIV procedure with simultaneous 2-axis load measurements to study the flow field in both the boundary layer and the inviscid far field of an arbitrary opaque sample at a high resolution. To achieve this, we use a single laser and double light-sheet illumination strategy, combined with a consecutive-overlapping image acquisition technique supported by a single camera maneuvered by a Computer Numerically Controlled (CNC) stage (as opposed to multi-camera setups for measurements of large fields of view (Parikh _et al._, 2023)). To demonstrate the capability of this technique, we focus on the case of a boundary layer past a finite-length flat plate. In addition to having access to theoretical models for them, boundary layers over flat plates have been serving as references for comparison in studies of textures (Grek _et al._, 1996; Walsh & Lindemann, 1984; Bechert _et al._, 2000; Raayai-Ardakani & McKinley, 2017, 2019; Vukoslavcevic _et al._, 1992; Du _et al._, 2022), roughness elements (Kim _et al._, 2015; Michalek _et al._, 2022), or super-hydrophobic surfaces (Xu _et al._, 2021). Among the experimental studies available on boundary layers, strategies such as focusing on partial locations within the length of the plate (Grek _et al._, 1996; Xu _et al._, 2021), single side measurements (Grek _et al._, 1996), or installation of a sample as part of the wind/water tunnel's wall (Walsh & Lindemann, 1984; Bechert _et al._, 2000; Abu Rown & Ghaemi, 2019) have been previously considered. This paper is organized in the following manner: in Sec. 2 we discuss the experimental facility and procedure and in Sec. 3 we present the results of the experiments performed on a slender, finite-length flat plate sample with a streamlined leading edge; In Sec. 3.1 and Sec. 3.2 we cover the details of the far-field of the flow, and in Sec. 3.3 compare and contrast the data against the first order boundary layer theory as described by Prandtl and Blasius (Schlichting _et al._, 2014), and ultimately show in Sec. 3.4 and Sec. 3.5 that the flow, local shear stress distribution, and load measurements can be used to decompose the total forces experienced by the sample into various components. 
## 2 Experimental method ### Flow facility and sample of interest The experiments are performed in a 2 m long water tunnel with a rectangular cross-section of \(20\times 27\) cm\({}^{2}\) where the water height is kept at 20 cm during the experiments (see Fig. 1(a)). Experiments are performed at three free-stream velocities less than 0.25 m/s (0.122, 0.185, and 0.242 m/s) where the turbulence intensity of the free stream is less than or about 1%. The free-stream velocity is controlled and set via the main computer and analog output (AO0) through a Data Acquisition (DAQ) system (NI DAQ USB-6001) connected to the tunnel. A separate flow meter measures the flow rate of the pump and is set to communicate with the main computer via an analog input of the DAQ (AI2). The sample of interest is a slender, symmetric plate of 100 mm long (\(L\)), 50 mm wide (\(b\)), and 5 mm in thickness (\(h\)) with a streamlined elliptical leading edge (see Fig. 1(b)) and is fabricated using 3D printing (Formlabs Form3B 3D printer and colored photopolymer resin). The leading edge of the sample within \(0\leqslant x\leqslant 25\) mm is streamlined in an elliptic form with a 1/10 ratio of the semi-minor and semi-major axes and past that the two sides of the sample in \(25\leqslant x\leqslant 100\) mm are flat, ending at a blunt trailing edge. The PIV measurements are performed in the middle of the sample (\(z\approx 2.5\) cm) to reduce the effect of the top and bottom boundaries. Using a connecting rod on the top, the sample is connected to a 2-axis load cell consisting of Linear Variable Differential Transformers (LVDT), and suspended in the tunnel at a distance of 76 cm from the tunnel entrance. The motor and the pump are separated from the experimental area via damping rubber cushions isolating the two from the main experimental area. The rod connecting the sample to the load cell is protected by a streamlined shield to minimize the impact of the rod on the load measurements and avoid unwanted wakes behind the connecting rod and above the sample. The load cell is set to communicate with the main computer via two of the analog inputs (AI0 and AI1) of the DAQ system. ### 2d-2c Piv The velocity field is measured using a 2D-2C PIV procedure. The setup consists of a double-pulsed Nd:YAG laser (Evergreen EVG00200, Quantel Laser) operated at 15 Hz repetition rate and nominal output energies of 10 mJ or 20 mJ per pulse for different free stream velocities, a high-speed camera (Chronos 2.1, Kron Technologies Inc.) at a Figure 1: (a) Schematic of the experimental water tunnel facility and the PIV components (front view and side view), with an active cross-sectional area of \(0.2\times 0.2\) m\({}^{2}\) and 2 m length. The global coordinate system shown as \((X,Y,Z)\) is used during the experimental procedure to control the location of the sample and camera. (b) Schematic of the slender sample (front and bottom views) with elliptic leading edge and the top handle used for connection to the load cell (left) and an image of the actual sample (right). Local \((x,y,z)\) directions will be used in the analysis of the results. resolution of 720 \(\times\) 1920 pixels with a 100 mm macro lens (Canon EF 100mm f/2.8L Macro Lens), and a timing unit (Arduino Teensy Board) with an Arduino program similar to Ref. Teensy Timer Tool (2023) uploaded on the board and used for synchronizing the instances of laser pulses and camera capture. 
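For readers interested in reproducing the control side of Sec. 2.1, a minimal sketch of the DAQ interaction (writing the pump set-point on AO0 and sampling the two LVDT load-cell channels on AI0/AI1) is given below. It assumes the `nidaqmx` Python package with the USB-6001 enumerated as `Dev1`, and the voltage-to-velocity constant is a placeholder rather than the actual tunnel calibration; the in-house script described in Sec. 2.3 additionally reads the flow meter on AI2 and launches the camera capture in parallel.

```python
# Minimal sketch of the tunnel set-point and load-cell sampling, assuming the
# nidaqmx Python package and an NI USB-6001 enumerated as "Dev1". The set-point
# calibration (volts per m/s) is a placeholder, not the actual tunnel calibration.
import time
import numpy as np
import nidaqmx

VOLTS_PER_MPS = 10.0      # placeholder calibration for the pump set-point
SAMPLE_RATE = 1000        # Hz, load-cell sampling
DURATION = 5.0            # s of load data per station

def set_free_stream(u_inf):
    """Write the pump set-point on AO0 for a requested free-stream speed."""
    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        ao.write(u_inf * VOLTS_PER_MPS)

def read_loads():
    """Sample the two LVDT load channels (AI0, AI1) for DURATION seconds."""
    n_samples = int(SAMPLE_RATE * DURATION)
    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
        ai.timing.cfg_samp_clk_timing(SAMPLE_RATE, samps_per_chan=n_samples)
        data = ai.read(number_of_samples_per_channel=n_samples)
    return np.asarray(data)               # shape (2, n_samples)

set_free_stream(0.185)                    # one of the tested free-stream speeds
time.sleep(60)                            # illustrative wait for a steady state
loads = read_loads()
print(loads.mean(axis=1))
```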
The timing between the two laser/camera shots is set using an analog output from the main computer to the timing unit via a DAQ output (AO1). For velocities less than 0.2 m/s the timing is set at \(\delta t=1000\)\(\mu\)s and for free-stream velocity of 0.242 m/s the timing is set to \(\delta t=900\)\(\mu\)s. The camera is situated underneath the tunnel and its location is automatically controlled using a CNC motorized stage in all three directions. Water is seeded with 10 \(\mu\)m hollow glass particles (TSI incorporated). To access the velocity field on both sides of the opaque sample with only one light source, we use a double light-sheet strategy as illustrated in Figs. 1(a) and 2(a). In this method, various optical elements are configured so that the incoming linearly polarized laser beam is divided into two beams using a half-wave plate (HWP) and a polarizing beam-splitter (PBS) and directed toward the Front and Back of the tunnel via multiple mirrors (M1-3, M5-6, Thorlabs Nd:YAG Mirrors, 524 - 532 nm) where two lens combinations (2 spherical (L Sph) and one cylindrical (L Syl)) are used to create light sheets (about 1 mm thick) to illuminate the Front and Back sides of the sample. L1 and L4 are spherical lenses with \(+300\) mm focal lengths, L2 and L5 are cylindrical lenses with \(-50\) mm focal lengths, and L3 and L6 are spherical lenses with \(-100\) mm focal lengths. These two light sheets are parallel to each other and in the absence of the sample, the two would meet to create a sheet with nearly double the illumination intensity. The field of view of the camera thus captures the flow on both sides of the sample without any shadows (Fig. 2). To access the boundary layer (near-wall) information, the imaging magnification is set to each pixel capturing 15-16 \(\mu\)m. As a result, the field of view of the camera is limited to about 11.5 mm of the sample (720 px, in the streamwise direction) at a time while the length of the total area of interest (sample, leading edge, and trailing edge) is about 180 mm. Thus, to image the whole sample, using the CNC stage, the camera is swept in consecutive-overlapping steps (about \(40-50\%\) overlap) covering the entire length of the sample as well as a few steps before the leading edge and after the trailing Figure 2: (a) Schematic of a double-sheet setup for simultaneous illumination of two sides of an opaque sample. The laser beam is divided into two beams through an HWP (half-wave plate) and a PBS (polarizing beam-splitter) and is guided through mirrors toward two sets of identical light-sheet optics (lens combinations). (b) Snapshots of a series of images acquired using the consecutive-overlapping technique covering the entire length of the sample as well as before and after the sample. (c) The fully stitched view of the images from part (b) showing the full view of the sample. The region of flow in \(y>0\) is denoted as “Front” and the region in \(y<0\) is denoted as “Back” throughout the text. The starting point of the leading edge is the local origin of the \((x,y)\) coordinate used. edge (Fig. 2). The light sheet optics (Lenses L1-6 in Fig.2(a)) are manually moved to illuminate the respective fields of view. Each field of view captures about 25 mm in the \(y\) direction (about 12.5 mm on either side of the center-line of the sample (\(y=0\)) which is sufficient to extract the boundary layer information and have access to the inviscid far-field information. 
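The bookkeeping behind this consecutive-overlapping sweep is straightforward; the short sketch below is our own illustration, using the pixel size, sensor width, and overlap fraction quoted above, with illustrative start and end margins, and it lists the streamwise stations the CNC stage has to visit.

```python
# Plan the consecutive-overlapping camera stations; numbers mirror the values
# quoted in the text (15-16 um/px, 720 px streamwise, 40-50% overlap), and the
# start/end margins are illustrative.
import numpy as np

def sweep_positions(px_size_um=16.0, px_streamwise=720, overlap=0.45,
                    x_start_mm=-40.0, x_end_mm=140.0):
    """Streamwise camera positions (mm) covering [x_start, x_end] with overlap."""
    fov_mm = px_size_um * 1e-3 * px_streamwise     # ~11.5 mm field of view
    step_mm = fov_mm * (1.0 - overlap)             # advance per station
    n = int(np.ceil((x_end_mm - x_start_mm - fov_mm) / step_mm)) + 1
    return x_start_mm + step_mm * np.arange(n), fov_mm

stations, fov = sweep_positions()
print(f"FOV = {fov:.2f} mm, {len(stations)} stations, "
      f"step = {stations[1] - stations[0]:.2f} mm")
```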
At each location, 50 image pairs are captured and grouped together as a function of the global location (\(X\)) of the left edge of the images. (It can be shown that 25 image pairs are enough for the mean of measurements to converge to their final values, and for extra caution, here 50 image pairs have been used.) The global locations and the physical size of the pixels are then used to stitch the images together to form the view of the entire sample. An example of the series of overlapping images and the final stitched view are shown in Figs. 2(b) and 2(c). Throughout the text, regions of flow in \(y>0\) are denoted as "Front" and regions in the \(y<0\) area are noted as "Back". ### Experimental procedures The main computer controlling the experimental procedure is set to communicate with the various components either through the DAQ (as discussed earlier) or directly via USB interfaces. After the sample and load cell are installed and secured, the free-stream velocity of the tunnel is set and the flow is left to reach a steady state. Then we set the timing \(\delta t\) to synchronize the laser and camera according to the chosen free-stream velocity and the magnification. As per the camera manual, the frame rate of the camera is set to be slightly larger than \(1/\delta t\) for the hardware to be able to process the capture signals from the timing unit properly. The CNC camera stage is driven via a USB interface and the open source package Open Builds CONTROL (OpenBuilds 2023) to move the camera to the location of interest. At this point, everything is ready for each set of experiments. Then using the multi-processing capabilities of the computer, the load measurement and camera capture are set to take place simultaneously. The entire measurement procedure is controlled via an in-house Python script, sending the signal to start and stop the experiments. After the experiments, the same Python script directs the acquired images and data (loads and the flow rate of the tunnel) to be saved in their appropriate locations either on an external hard drive or on the computer's hard drive. This procedure is then repeated for each imaging location in consecutive-overlapping steps as shown in Fig. 2(b). ### Image-processing Using the stitched image (Fig. 2(c)), the boundary of the sample is rotated and fitted with the mathematically defined contour (elliptic leading edge, and the flat boundaries) and the starting point of the leading edge is set as the origin of the local \(x\) direction, and the zero in the \(y\) direction is set on the plane of symmetry of the sample. As a result, the flat boundaries of the sample are located at \(y=\pm 2.5\) mm and parallel to the \(x\) axis. (Not necessarily parallel to the free-stream velocity as discussed later in Sec. 3.2) This way, the exact wall-normal (\(\hat{n}\)) and tangential directions (\(\hat{t}\)) can be identified to allow for further analysis of the boundary layer information in Sec. 3.3. The PIV images are processed with an in-house Python script partly using the open-source software OpenPIV (Liberzon _et al._, 2020) and an additional in-house correction loop for close-to-the-wall regions. The portion of the image pairs slightly far from the wall (further than 64 pixels away) are analyzed using functions from the OpenPIV package with \(32\times 32\) windows and a search area of \(64\times 64\) with 85% overlap (resolution of about 63 \(\mu\)m per vector). 
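A stripped-down version of the far-from-wall pass might look like the following sketch, which assumes the open-source OpenPIV Python interface (`openpiv.tools`, `openpiv.pyprocess`); the file names, pixel size, and pulse separation are stand-ins for the actual acquisition, and the near-wall correction loop is not included here.

```python
# Hedged sketch of the far-from-wall PIV pass: 32x32 interrogation windows,
# 64x64 search areas, 85% overlap. File names, PX, and DT are illustrative.
import numpy as np
from openpiv import tools, pyprocess

PX = 15.8e-6     # meters per pixel (the text quotes 15-16 um per pixel)
DT = 1.0e-3      # pulse separation in seconds (the delta t = 1000 us cases)

frame_a = tools.imread("station_03_a.tif").astype(np.int32)
frame_b = tools.imread("station_03_b.tif").astype(np.int32)

u, v, s2n = pyprocess.extended_search_area_piv(
    frame_a, frame_b, window_size=32, overlap=27,     # 27 px is ~85% overlap
    search_area_size=64, dt=DT, sig2noise_method="peak2peak")

x_px, y_px = pyprocess.get_coordinates(image_size=frame_a.shape,
                                       search_area_size=64, overlap=27)

u_mps, v_mps = u * PX, v * PX   # pixel/s -> m/s (dt already divided out by OpenPIV)
print(u_mps.shape, float(np.nanmean(u_mps)))
```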
However, due to the large shear rate close to the wall boundary, to avoid bias errors close to the walls (Kahler _et al._, 2012), the first 64 pixels in the near-wall region are analyzed with an in-house cross-correlation scheme with a rectangular window of \(32\times 16\) (smaller height in the normal direction) to reduce the averaging effects of square windows. In all experiments, the timing \(\delta t\) is chosen in a way that the fastest particles displace at a maximum of half of the window size minus one pixel to ensure that the slowest particles close to the wall have enough time to displace at least one pixel. ## 3 Results and discussion ### Velocity fields The results of the mean velocity \(u\), and \(v\) (velocities in \(x\) and \(y\) directions respectively) around the entire sample, including both near- and far-field are presented in contour plots shown in Fig. 3 (normalized by the free-stream velocity) for three Reynolds numbers \(\mathrm{Re}_{L}=12,200\), \(\mathrm{Re}_{L}=18,500\), and \(\mathrm{Re}_{L}=24,200\). Here the global Reynolds number, \(\mathrm{Re}_{L}=\rho U_{\infty}L/\mu\) is defined based on the total length of the sample and the free-stream velocity \(U_{\infty}\), and \(\rho\) and \(\mu\) are the density and viscosity of water respectively. In all cases the region of the stagnation point at the leading edge is visible and due to the finite thickness of the sample, the flow in the inviscid area of either side is faster than the free-stream velocity. Overall the three cases have similar contour distributions in both directions and no clear difference is visible among the normalized velocities of various cases. As is expected from the ranges of the Reynolds numbers tested, the flow past the plate remains laminar and only at about \(0.05L\) away from the trailing edge of the sample, turbulent kinetic energy (Fig. 4(a-c)), and Reynolds shear stress (Fig. 4(d-f)) become visible and the location of the largest turbulent kinetic energy or the Reynolds shear stress is located at about \(0.15L\) from the trailing edge for all cases. In addition, as it is seen from the vorticity distribution (Fig. 5, normalized by a reference shear rate calculated based on \(\dot{\gamma}=U_{\infty}/\delta=(U_{\infty}/L)\sqrt{\mathrm{Re}_{L}}\)) and the accompanying streamlines in the wake of the samples, the extent of the separation bubble has a similar size across the various cases, with the bubble ending at around \(0.1L-0.12L\) past the trailing edge of the sample. Lastly as seen in the contours of \(v\) in Fig. 3, the velocity distribution in the leading edge area is not symmetric about the \(y=0\) line, in the same way as is expected from a symmetric sample aligned in the stream-wise direction. This hints at the possibility of a slight angle of attack in the sample placement with respect to the free-stream velocity. This angle is not visually detectable during the experiments, however, it can be deduced from the velocity fields. ### Estimation of the angle of attack To estimate the small angle of attack \(\alpha\) in the experiments, we use potential flow analysis. We assume the velocity field, far from the boundaries and in the vicinity of the leading edge can be described in terms of flow past an elliptical boundary. To factor in the angle of attack, we assume the free-stream velocity \(U_{\infty}\) can be decomposed into \(U_{p}\) in the \(x\) direction, and \(V_{p}\) in the \(y\) directions with \(\tan\alpha=V_{p}/U_{p}\) (see Fig. 6(a)). 
Due to linearity, the complex potential (\(w(z_{p})\) where \(z_{p}=x+iy\)) of this flow is thus the superposition of complex potentials of the flow of \(U_{p}\) in the horizontal direction (\(w_{\parallel}\)) and the flow of \(V_{p}\) in the vertical direction (\(w_{\perp}\)) past an ellipse of the same orientation. The closed-form solution of flow past an ellipse can be found using conformal mapping and the Zhukhovsky transformation between the complex variable \(z_{p}=\zeta+b^{2}/\zeta\) and \[\zeta=\frac{1}{2}z_{p}-\frac{1}{2}\sqrt{z_{p}^{2}-4b^{2}} \tag{1}\] where an ellipse in \(z_{p}\) plane of the form \[\frac{x^{2}}{\left(a+\frac{b^{2}}{a}\right)^{2}}+\frac{y^{2}}{\left(a-\frac{b^ {2}}{a}\right)^{2}}=1 \tag{2}\] is transformed into a circle of radius \(a>b\) in \(\zeta\) plane. Figure 4: Contours of normalized turbulent kinetic energy (left) and Reynolds shear stress (right) for Reynolds numbers of (a,d) \(\mathrm{Re}_{L}=12,200\), (b,e) \(\mathrm{Re}_{L}=18,500\), and (c,f) \(\mathrm{Re}_{L}=24,200\) past the trailing edge of the sample. Figure 5: Contours of vorticity normalized by reference \(\dot{\gamma}\), for (a) \(\mathrm{Re}_{L}=12,200\), (b) \(\mathrm{Re}_{L}=18,500\), and (c) \(\mathrm{Re}_{L}=24,200\). Streamlines of the flow in the vicinity of the wall and the wake are also overlaid on top of the contours to show the extent of the separation bubble behind the sample. Figure 3: Contours of mean velocities \(u\) (left) and \(v\) (right) normalized by the free-stream velocity \(U_{\infty}\) at global Reynolds numbers (a,b) \(\mathrm{Re}_{L}=12,200\), (c,d) \(\mathrm{Re}_{L}=18,500\), and (e,f) \(\mathrm{Re}_{L}=24,200\). To account for near-wall viscous effects and the thickness of the boundary layer, we assume that the thickness of the sample as seen by the inviscid flow is larger than the sample boundaries. We represent this by assuming an ellipse with a semi-minor axis of \(h/2+\Delta\) instead of \(h/2\). Therefore, using the dimensions of the sample (Fig. 6(a)), and the above definition we have: \[a+\frac{b^{2}}{a}=c \tag{11}\] and \[a-\frac{b^{2}}{a}=\frac{h}{2}+\Delta. \tag{12}\] Thus, the full complex potential can be written as \[w=w_{\parallel}+w_{\perp}=U_{p}\left(\zeta+\frac{a^{2}}{\zeta}\right)-iV_{p} \left(\zeta+\frac{a^{2}}{\zeta}\right) \tag{13}\] with the velocities found using the chain rule \[\frac{dw}{dz_{p}}=u-iv=\frac{dw}{d\zeta}\frac{d\zeta}{dz_{p}} \tag{14}\] which are functions of the geometry of the elliptical leading edge, the angle of attack, and Figure 6: (a) Schematic of a half-ellipse with \(c\) as the semi-major axis, and \(h\) as the semi-minor axis, in a flow with free-stream velocity \(U_{\infty}\) at an angle of attack \(\alpha\), decomposed into \(U_{p}\) in the \(x\) direction, and \(V_{p}\) in the \(y\) direction where \(\tan\alpha=V_{p}/U_{p}\). The effect of the boundary layer on the potential flow is represented by enlarging the semi-minor axis with \(\Delta\) on both sides of the ellipse. (b) Velocity components in \(x\) (left) and \(y\) (right) at various \(x\) locations as a function of \(y\) for an example case with \(\mathrm{Re}_{L}=24,200\). Dashed lines are the potential flow fits with their respective \(\alpha\) and \(\Delta\). (c) The distribution of the calculated angle of attack from the fitting at various cross-sections as a function of \(x/L\) for 3 cases at \(\mathrm{Re}_{L}=12,200\), \(\mathrm{Re}_{L}=18,500\), and \(\mathrm{Re}_{L}=24,200\). 
Far enough from the sample (\(x/L<-0.1\)), the angle of attack of all cases is on average \(0.5^{\circ}\). Contours of theoretical velocity in (d) \(x\) and (e) \(y\) directions calculated using the potential flow theory, with average \(\alpha=0.5^{\circ}\), and \(\Delta=0.166h\) from the fittings of part (b). Color limits (color bars) are the same as those in Fig. 3. the average added thickness, \(\Delta\). Now, given the geometry of the elliptical leading edge, and the measured velocity distribution, one can fit the above models to the measured velocities to find estimates for \(\alpha\) and \(\Delta\). Fig. 6(b) gives velocities \(u\) (left) and \(v\) (right) as a function of \(y\) in a few \(x\) locations (Symbols) in the flow field upstream of the leading edge for \(\mathrm{Re}_{L}=24,200\) (\(x/L<0\)) and the dashed lines represent fitted results from potential flow solution. The error bars (which are very small) represent the 95% confidence intervals. From the curve-fitting algorithm, one can see that on average the angle of attack is found to be around \(\alpha\approx 0.5^{\circ}\) for this case (Figs. 6(b) and 6(c)). The same process has been repeated for multiple locations in the upstream of all three cases and as shown in Fig. 6(c) all cases on average experience a similar angle of attack of \(\alpha\approx 0.5^{\circ}\) using the fits for \(x/L<-0.1\). Past \(x/L=-0.1\) the near wall effects result in the potential flow model deviating from the measurements and thus they are not included in the calculation of the average angle of attack. It should be noted that these experiments were performed in one sitting where the sample was set up at the beginning of the day, the experiments repeated for the three velocities and then the sample is retrieved at the end and throughout the day the location of the sample is not adjusted or changed. Hence we do not expect the angle of attack of the sample to change between the three experiments as confirmed. This also confirms that the fixture was well secured in place and the flow (as expected) did not move or re-locate the position of the sample throughout the experiments. Lastly, for comparison, contours of the normalized potential flow solutions, \(u/U_{\infty}\) and \(v/U_{\infty}\) are plotted in Figs. 6(d-e) with the average value of \(\alpha\) and \(\Delta\) calculated from Fig. 6(b) for the upstream and early elliptical portion of the leading edge and the results match with the experimental contours very well. Especially with the inclusion of the angle of attack, the \(v/U_{\infty}\) contours show the line of \(v=0\) which matches the experimental results very well and as expected is not symmetric about the line of \(y=0\). ### Boundary layer To quantify the effect of the flow field on the wall, velocities in the boundary layers are used to calculate the distribution of the shear stress on both sides of the sample. This requires the calculation of the velocity gradient close to the wall. Numerical differentiation techniques with finite difference schemes only use a small portion of velocities adjacent to the wall (where the uncertainties can be larger than the rest of the profile) and are thus prone to cause large numerical errors. 
Therefore, to characterize the boundary layers and also calculate more accurate estimates of the velocity gradients at the wall, we choose to find the best possible functional fit to all the velocity measurements at each wall-normal direction, \(\hat{n}\) (instead of finding a linear fit for a few points in the vicinity of the wall). The functional form used for fitting is not chosen randomly and we employ the family of the Falkner-Skan solutions which are theoretical self-similar solutions to the boundary layer equations. From Falkner-Skan theory (Schlichting _et al._, 2014), the boundary layer in the tangential direction, \(\hat{t}\), to the wall (see Fig. 7(a)) is defined using the local Reynolds number \(\mathrm{Re}_{x}\) and a parameter \(m\), covering solutions for boundary layer profiles with positive, \(m>0\), that are more attached than the Blasius solution (\(m=0\)) and those that are more separated from the wall at negative values of \(m\) within \(0>m>-0.092\) where \(-0.092\) is the lowest possible value mathematically. Thus, \(u_{t}\) is defined as a function of the local Reynolds number \(\mathrm{Re}_{x}=\rho xU(x)/\mu\), \(n\) and \(m\) \[u_{t}=\mathcal{G}(\mathrm{Re}_{x},n;m) \tag{10}\] and in the self-similar form of the Falkner-Skan theory, the velocity is written as \[\frac{u_{t}}{U}=\mathcal{F}^{\prime}(\eta) \tag{10}\] with \(\eta\) defined as \[\eta=\frac{n}{x}\sqrt{\text{Re}_{x}\left(\frac{m+1}{2}\right)}. \tag{11}\] Knowing the spatial location \((x,n)\) and the velocity distribution \(u_{t}(x,n)\) from the experimental data, we employ a least-square fitting algorithm to find the best \(m\) for the velocity profiles at different \(x\) locations along the length of the sample on either side. As the input to the curve-fitting, we use the measured velocity profiles \(u_{t}\) from all 50 image pairs and their corresponding wall-normal, \(n\), locations from the wall up to the location where \(u_{t}\) is the maximum in the given normal direction. We consider this maximum location as the edge of the boundary layer where the inner solution (boundary layer solution) meets the outer solution from the inviscid flow (Kundu _et al._, 2015). As a result of this \(U(x)\), used in the definition of the local Reynolds number \(\text{Re}_{x}\), is not a constant and as shown earlier in Sec. 3.1 is nearly always higher than \(U_{\infty}\). Therefore, the local Reynolds number is \(\text{Re}_{x}=\rho U(x)x/\mu\) is always larger than \(\rho U_{\infty}x/\mu\). The family of Falkner-Skan functions is implemented in the form of an ODE solver with the CasADi package (Andersson _et al._, 2019) in Python. In case of the presence of outliers within the data, the curve-fitting procedure is augmented with a RANSAC (RANdom SAmple Consensus) (Fischler & Bolles, 1981) algorithm, and only the experimental data identified as inliers are used in the final curve-fitting procedure. The inlier threshold in the algorithm is set to ensure that more than 95% of the data are considered inliers. In the flat part of the samples (past the elliptic leading edge), the wall-normal direction is the same as the \(y\) direction (Fig. 7(a)), which is not the case within the elliptical leading edge. 
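A self-contained sketch of this fitting step is given below; it uses SciPy's boundary-value solver in place of the CasADi implementation mentioned above, and the tolerances, the bounds on \(m\), and the function names are our own choices. The RANSAC wrapper described in the text would simply call `fit_m` on random subsets of the profile and keep the consensus estimate.

```python
# Sketch of the Falkner-Skan profile fit: solve F''' + F F'' + beta(1 - F'^2) = 0
# with beta = 2m/(m+1), F(0) = F'(0) = 0, F'(inf) = 1, then least squares over m.
# SciPy is used here instead of the CasADi-based solver mentioned in the text.
import numpy as np
from scipy.integrate import solve_bvp
from scipy.optimize import minimize_scalar

def fs_profile(m, eta_max=10.0):
    """Falkner-Skan similarity solution for parameter m (rows of y: F, F', F'')."""
    beta = 2.0 * m / (m + 1.0)

    def rhs(eta, y):
        return np.vstack((y[1], y[2], -y[0] * y[2] - beta * (1.0 - y[1]**2)))

    def bc(ya, yb):
        return np.array([ya[0], ya[1], yb[1] - 1.0])

    eta = np.linspace(0.0, eta_max, 400)
    guess = np.vstack((eta - 1.0 + np.exp(-eta), 1.0 - np.exp(-eta), np.exp(-eta)))
    return solve_bvp(rhs, bc, eta, guess, tol=1e-5, max_nodes=5000)

def fit_m(n, u_t, x, U_edge, nu=1.0e-6, eta_max=10.0):
    """Least-squares estimate of m for one wall-normal profile u_t(n) at station x."""
    Re_x = U_edge * x / nu

    def cost(m):
        sol = fs_profile(m, eta_max)
        eta = (n / x) * np.sqrt(Re_x * (m + 1.0) / 2.0)
        u_model = U_edge * sol.sol(np.clip(eta, 0.0, eta_max))[1]
        return np.mean((u_t - u_model)**2)

    # search range chosen to span the detached-to-attached profiles seen here
    return minimize_scalar(cost, bounds=(-0.09, 0.3), method="bounded").x
```

On the flat portion of the sample the profiles entering `fit_m` are simply \(u(y)\), whereas within the elliptical leading edge the tangential velocity and the local wall normal must be constructed first, as described next.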
In this region, we find the normal to the wall, \(n\), at every \(x\) location from the equation of the corresponding ellipse and use two-dimensional interpolation to find the distribution of the velocity in the local tangential direction, \(u_{t}(n)=u\cos\theta+v\sin\theta\) where \(\theta\) is the angle of the local tangent at the wall (Fig. 7(a)) and use these values and the corresponding calculated normals in the curve-fitting procedure. It should be noted that the Falkner-Skan family of solutions is generated based on the theoretical assumption of flow past a wedge with a local free-stream speed defined as \(U(x)\propto x^{m}\), where \(m\) is constant throughout the \(x\) direction. However, in this work, we assume that \(m\) is just a mathematical parameter and only use the family of Falkner-Skan solutions as a set of mathematical functions available to describe the shape of the boundary layers. The purpose here is not to find the closest Falkner-Skan fit for the entire flow but to find the local best fits to the experimental velocity profiles at each \(x\) location for more accurate post-processing steps, specifically calculations of the shear stress distribution. Velocity profiles in the boundary layer at 5 different locations along the length of the sample operated at the global \(\text{Re}_{L}=18,500\) (calculated with \(U_{\infty}\)), on the Front and Back sides, are presented in Figs. 7(b-f) as a function of the similarity variable \(\eta\) (Eq. 11). Since the thickness of the boundary layer increases over the length, as we progress to larger \(x/L\), more points are available for curve-fitting as is visible in Figs. 7(b-f). The first thing to note in all these figures is that even though the sample is symmetric, the small angle of attack in the experiments leads the velocity profiles on either side of the samples to take different shapes at the same location as shown by the different fitted values of for each side. At \(x/L=0.1\), which is within the elliptic leading edge, the velocity profile on the Back has an \(m>0\) while the Front profile is nearly close to a Blasius profile with \(m\) slightly larger than \(0\). Then moving to \(x/L=0.3\), the profiles on either side look more detached than the Blasius solution with negative \(m\) values. However, at \(x/L=0.5\) the profile on the Back side moves to a more attached form with \(m=0.014\) increasing to \(m=0.047\) and \(m=0.105\) at \(x/L=0.7\) and \(x/L=0.9\) respectively. This is while at \(x/L=0.5\) and \(x/L=0.7\) the velocity profiles on the Front still maintain a negative \(m\) and only at \(x/L=0.9\) the velocity profile becomes more attached to the surface with \(m=0.025\). As is clear from all cases, the \(m\) values for the velocity profiles on the Back are always larger than those on the Front (flow is more attached on the Back than the Front). Instead of presenting all the velocity profiles along the length, the distribution of the parameter \(m\) as a function of the local Reynolds number \(\mathrm{Re}_{x}=\rho U(x)x/\mu\) is illustrated in Fig. 8 for the Front and Back sides of the sample for all the three experimental cases. The distribution of \(m\) in the different cases is qualitatively similar and only stretched out when plotted as a function of the local Reynolds number \(\mathrm{Re}_{x}\). 
If plotted as a function of the normalized horizontal location \(x/L\), the distribution of the \(m\) for the different cases are similar with a slight bit of upward shift moving toward higher \(\mathrm{Re}_{L}\) cases. Hence one observes a stronger dependence on the length of the sample than on the local value of the Reynolds number. The second point to highlight is that throughout the length, \(m\) for all the cases starts Figure 7: (a) Schematic of the coordinate transformation from \((x,y)\) to \((t,n)\) (tangent and normal components) in the leading edge area. Past the leading edge \(\hat{t}\) and \(\hat{n}\) are the same as \(x\) and \(y\) coordinates. Velocity profiles on the Front (square) and Back (circle) of the sample as a function of \(\eta\), at \(5\) different positions along the length, (b) \(x/L=0.1\), (c) \(x/L=0.3\), (d) \(x/L=0.5\), (e) \(x/L=0.7\), and (f) \(x/L=0.9\), with the corresponding Falkner-Skan fits for experiments performed at \(\mathrm{Re}_{L}=18,500\). The Blasius solution is shown with a solid black line on all the plots. The error bars represent the \(95\%\) confidence intervals. at \(m>0\) which corresponds to a region of favorable pressure gradient, and then crosses over to \(m<0\) where the flow then experiences an adverse pressure gradient, but this does not last all the way and toward the trailing edge again the flow experiences a favorable pressure gradient and \(m\) crosses over to \(m>0\). Thus, even though 75% of the length of the sample consists of a flat plate (on either side), the local velocity in the boundary layers does not fully follow the Blasius boundary layer at a zero pressure gradient and the resulting \(m\) parameter shows a distribution along the length where the profiles are initially more attached and then more detached and later more attached to the wall compared to the Blasius solution. ### Shear stress distribution Knowing the mathematical form of the Falkner-Skan solutions as well the distribution of the \(m\) parameter, the local shear stress distribution along each side of the plate can be calculated using \(m\) and derivatives of Eq. (10) with respect to \(n\) as \[\tau(x)=\left.\frac{\partial u_{t}}{\partial n}\right|_{n=0}=\left.\left(\frac {m+1}{2}\right)^{0.5}\left.\frac{\rho U(x)^{2}}{\sqrt{\mathrm{Re}_{x}}} \mathcal{F}^{\prime\prime}\right|_{\eta=0} \tag{12}\] and the skin friction coefficient is then determined by \[C_{f}(x)=\frac{\tau(x)}{\frac{1}{2}\rho U(x)^{2}}. \tag{13}\] The variation in \(m\) as seen in Fig. 8 results in a shear stress distribution different from that of the Blasius boundary layer. Using Eqs. (12) and (13) one can see that as shown in Fig. 8(c) for \(m<0\) cases where the boundary layer is less attached to the wall than the Blasius solution (\(m=0\)) the shear stress experienced is less than the \(m=0\) case and for more attached boundary layers with \(m>0\), the shear stress can reach as much as 1.5 times the shear stress from the Blasius solution at \(m=0.1\). The variations in the skin friction coefficient on the Front and Back of the sample tested at three different Reynolds numbers are shown in Fig. 9(a-c) as a function of the local Reynolds number. In all cases, the skin friction coefficient experienced on the Front side is slightly lower than that of the Back which is also visible in the distribution of the \(m\) where at the same location \(m\) on the Front side is slightly lower than the \(m\) on the Back. 
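As a quick numerical illustration of Eqs. (12) and (13), the few lines below evaluate the wall shear stress and skin friction at an example station; `fs_profile` is the Falkner-Skan solver sketched in the previous subsection, and the station, edge velocity, and \(m\) value are examples rather than measured data.

```python
# Wall shear stress and skin friction from the fitted m, per Eqs. (12)-(13);
# fs_profile is the Falkner-Skan solver sketched in the previous subsection.
import numpy as np

RHO, MU = 998.0, 1.0e-3                  # water at ~20 C [kg/m^3], [Pa s]

def wall_shear(x, U_edge, m):
    fpp0 = fs_profile(m).y[2, 0]         # F''(0) of the fitted profile
    Re_x = RHO * U_edge * x / MU
    tau = np.sqrt((m + 1.0) / 2.0) * RHO * U_edge**2 * fpp0 / np.sqrt(Re_x)
    return tau, tau / (0.5 * RHO * U_edge**2)

tau, cf = wall_shear(x=0.05, U_edge=0.20, m=0.05)   # example mid-plate numbers
print(f"tau = {1e3 * tau:.1f} mPa, C_f = {cf:.4f}")
```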
In the case with \(\mathrm{Re}_{L}=12,200\), the skin friction coefficient on the Front and Back sides of the sample are the closest, and as the global \(\mathrm{Re}_{L}\) is increased by increasing Figure 8: Distribution of the parameter \(m\) for boundary layers on the (a) Front and (b) Back sides of the samples operated at \(\mathrm{Re}_{L}=12,200\), \(\mathrm{Re}_{L}=18,500\), and \(\mathrm{Re}_{L}=24,200\) as a function of the local Reynolds number. Note that as discussed in the text, the local Reynolds number at \(x=L\) is larger than \(\mathrm{Re}_{L}\). (c) The ratio of the skin friction coefficient of the Falkner-Skan family of boundary layers normalized by the skin friction coefficient of the Blasius boundary layer (\(m=0\)) as a function of \(m\). the \(U_{\infty}\) the difference between the two sides become more visible (Figs. 9(b) and 9(c)). (Again, it should be noted that the local Reynolds number \(\mathrm{Re}_{x}\) at \(x=L\) is larger than the \(\mathrm{Re}_{L}\) calculated using \(U_{\infty}\).) While in the leading edge area, the skin friction coefficient decreases at a rate faster than \(\mathrm{Re}_{x}^{-0.5}\), past the elliptic leading edge and in the flat part of the sample, the skin friction coefficient experiences a slower rate of change compared with the \(\mathrm{Re}_{x}^{-0.5}\). Also, close to the trailing edge, the skin friction coefficient reverses its course and goes through an increasing trend which has also been predicted in second-order models of the boundary layer over a flat finite plate (Dennis & Dunwoody, 1966). Similar to \(m\), the qualitative trends in the shear stress results, for both the Front and Back sides, show a strong dependence on the length of the sample, more than the local Reynolds number as is seen in Figs. 9(d) and 9(e). In addition, early on, within the leading edge area, an increase in the \(\mathrm{Re}_{L}\) results in an increase in the skin friction recorded at similar local Reynolds numbers, however, as we move to the flat area, the skin frictions cross over each other where for the rest of the length, the case with the lowest \(\mathrm{Re}_{L}\) experiences the largest skin friction coefficient among all. ### Forces Drag force is the force component experienced by the sample in the streamwise direction. Here due to the small angle of attack, the drag force on the sample is written as \[D=F_{x}\ \cos\alpha+F_{y}\sin\alpha \tag{12}\] where \(\cos\alpha=0.999962\) and \(\sin\alpha=0.0087\). So, we assume that \(D\approx F_{x}\). Total drag experienced by a finite-thickness sample is decomposed into contributions Figure 9: Distribution of the skin friction coefficient on the Front (square) and Back (circle) of the sample at (a) \(\mathrm{Re}_{L}=12,200\), (b) \(\mathrm{Re}_{L}=18,500\), and \(\mathrm{Re}_{L}=24,200\). The dash-dotted lines denote the location of the end of the elliptical leading edge and the black dashed lines are the theoretical shear stress calculated from the first-order boundary layer theory (Blasius Solution). All the skin friction results are also separated for (d) Front and (e) Back sides, which are overlaid on each other. Local Reynolds number is calculated based on the maximum velocity \(U(x)\) along the corresponding normal which is also used in the curve-fitting process. from the viscous effect in the boundary layer and the pressure distribution around the sample. The vicious part of the drag force is calculated from the integral of the shear stress distribution (Fig. 
9) on either side of the sample \[D_{\rm viscous}=\int_{0}^{L}\boldsymbol{\tau}\cdot\mathbf{n}dA=b\left(\int_{0}^ {L}\tau_{{}_{\rm Front}}(x)dx+\int_{0}^{L}\tau_{{}_{\rm Back}}(x)dx\right) \tag{13}\] where it is assumed that shear stress distribution measured in the mid-section is constant throughout the span of the sample (i.e. \(\tau(x,z)\approx\tau(x)\)). In the elliptic leading edge of the sample, the element of the integral is \(\tau(x)\cos\theta dt\) which is the same as \(\tau(x)dx\) (see Fig. 7(a)), and thus the latter form is applicable to the entire length of the sample. While the tested sample is slender (\(h/L=0.05\)), the finite thickness of the sample has non-negligible effects on the distribution of pressure in the flow and as a result, the total drag force includes a contribution from pressure, also known as form drag. However, the effect of the pressure distribution cannot be found independently and is found in a cumulative manner with the viscous drag using a rectangular control volume (Fig. 10(a)) \[-D_{\rm CV}\ +\overbrace{\int_{S_{\rm Inlet}}pdA-\int_{S_{\rm Outlet}}pdA}^{ \rm Pressure\ Force\ on\ Boundaries}=\overbrace{\sum_{i}\int_{S_{i}}\underbrace{ \rho\tilde{u}(\tilde{\mathbf{u}}\cdot\mathbf{n})}_{M}dA_{i}}^{\rm Momentum} \tag{14}\] where \(i\in\) [Inlet, Outlet, Front, Back], and the total of the reaction force in the \(x\) direction (which is, according to Newton's third law, the negative of the total force applied on the sample, i.e. \(-D_{\rm CV}\)) and the pressure forces equals the variations in the momentum crossing the boundaries of the control volume. But it should be noted that the momentum terms in Eq. (14) also include the effect of the pressure distribution. Therefore, \(D_{\rm form}=D_{\rm CV}-D_{\rm viscous}\). In steady-state form, Eq. (14) can be expanded to include the effect of the Reynolds stresses in the flow field and written in terms of a Reynolds-averaged integral momentum (RAIM) conservation equation (Ferreira _et al._, 2021) in the form of \[-D_{\rm CV}\ +b\!\int(p_{{}_{\rm Inlet}}-p_{{}_{\rm Outlet}})dy= \tag{15}\] \[\rho b\left(\int_{\rm Outlet}(uu+\overline{u^{\prime}u^{\prime} })dy-\int_{\rm Inlet}(uu+\overline{u^{\prime}u^{\prime}})dy\right)+\] \[\rho b\left(\int_{\rm Front}(uv+\overline{u^{\prime}v^{\prime}}) dx-\int_{\rm Back}(uv+\overline{u^{\prime}v^{\prime}})dx\right)\] with \(\tilde{u}=u+u^{\prime}\) and \(\tilde{v}=v+v^{\prime}\) where \(u\) and \(v\) are the means of the velocity components and \(u^{\prime}\) and \(v^{\prime}\) are the fluctuation terms. This formulation thus includes the effect of the Reynolds stresses on the total force calculations, and even though small they have all been incorporated in the current analysis. In an ideal setup, where the experiments are performed in unbounded flows with access to far-field information farther than multiple body lengths away, one can choose the control volume boundaries far enough where the local pressure and velocity at the boundaries are back to \(U_{\infty}\) and \(p_{\infty}\). There, only the momentum components of the control volume would be sufficient for finding the reaction force as commonly discussed in fluid mechanics textbooks (Kundu _et al._, 2015; Batchelor, 2000). In such a case, the pressure difference across the inlet and outlet faces will be zero. However, with the physical limits available in the experimental setup, the local pressure especially downstream of the flow does not fully recover to \(p_{\infty}\). 
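To make the bookkeeping of Eq. (15) concrete, the sketch below evaluates the momentum and pressure integrals on a rectangular control volume, assuming the stitched mean fields and Reynolds stresses are available as 2-D NumPy arrays on a regular \((x,y)\) grid; the array names and index conventions are our own, the pressure on the Inlet and Outlet planes is taken as already reconstructed, and the three-dimensional leakage correction introduced later in this section is omitted. The reconstruction of that boundary pressure is described next.

```python
# Schematic evaluation of the control-volume balance in Eq. (15). Fields are
# assumed to be 2-D arrays indexed as field[i_x, j_y] on a regular grid; the
# boundary pressure p is assumed to have been reconstructed beforehand.
import numpy as np

RHO = 998.0                  # water density [kg/m^3]
B = 0.05                     # sample span [m]

def cv_drag(x, y, u, v, uu_p, uv_p, p, i_in, i_out, j_back, j_front):
    """D_CV from momentum and pressure integrals over a rectangular CV.

    x, y            : 1-D grid coordinates [m]
    u, v, p         : mean velocity components and pressure
    uu_p, uv_p      : Reynolds stresses <u'u'> and <u'v'>
    i_in, i_out     : x-indices of the Inlet and Outlet planes
    j_back, j_front : y-indices of the Back (y < 0) and Front (y > 0) planes
    """
    jj = slice(j_back, j_front + 1)
    ii = slice(i_in, i_out + 1)
    yb, xs = y[jj], x[ii]

    # stream-wise momentum flux through Inlet/Outlet (first bracket of Eq. (15))
    mom_out = np.trapz(u[i_out, jj]**2 + uu_p[i_out, jj], yb)
    mom_in = np.trapz(u[i_in, jj]**2 + uu_p[i_in, jj], yb)

    # lateral flux of stream-wise momentum through Front/Back (second bracket)
    mom_front = np.trapz(u[ii, j_front] * v[ii, j_front] + uv_p[ii, j_front], xs)
    mom_back = np.trapz(u[ii, j_back] * v[ii, j_back] + uv_p[ii, j_back], xs)

    momentum = RHO * B * ((mom_out - mom_in) + (mom_front - mom_back))
    pressure = B * np.trapz(p[i_in, jj] - p[i_out, jj], yb)
    return pressure - momentum           # D_CV per Eq. (15)
```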
Hence, we use the two-dimensional Reynolds-averaged Navier-Stokes equations and directional integration to find the pressure distribution on the boundaries of the control volume where local pressure along a horizontal (\(y\) constant) line and vertical line (\(x\) constant) can be calculated using \[p(x)-p(x_{\mathrm{ref}})=\int_{x_{\mathrm{ref}}}^{x}-\rho\left(u\frac{\partial u }{\partial x}+v\frac{\partial u}{\partial y}\right)+\mu\left(\frac{\partial^{2 }u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)-\rho\left( \frac{\partial\overline{u^{\prime}u^{\prime}}}{\partial x}+\frac{\partial \overline{u^{\prime}v^{\prime}}}{\partial y}\right)dx \tag{16}\] and \[p(y)-p(y_{\mathrm{ref}})=\int_{y_{\mathrm{ref}}}^{y}-\rho\left(u\frac{ \partial v}{\partial x}+v\frac{\partial v}{\partial y}\right)+\mu\left(\frac{ \partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right)- \rho\left(\frac{\partial\overline{u^{\prime}v^{\prime}}}{\partial x}+\frac{ \partial\overline{v^{\prime}v^{\prime}}}{\partial y}\right)dy \tag{17}\] respectively. Here, assuming points 1 and 2 (Fig. 10(a)) are at \(p_{\infty}\) and we use equation (17) in both positive (1 \(\rightarrow\)2) and negative (2 \(\rightarrow\)1) directions to calculate \(p^{+}(y)\) and \(p^{-}(y)\) Figure 10: (a) Schematic of the rectangular control volume used here, the Inlet is located at \(x/L=-0.35\), the Front and Back boundaries are located at \(y/L=\pm 0.12\), and the Outlet boundary is moved from \(x/L=1.00\) up to \(x/L=1.42\). Distribution of the (b) momentum terms (M, Eq. (14)) and (c) the pressure at the Inlet and a few Outlet positions for \(\mathrm{Re}_{L}=12,200\). (d) Distribution of the Momentum (M) on the Front and Back boundaries for \(\mathrm{Re}_{L}=12,200\). (e) The calculated integrals of the Momentum terms (blue circles), and negative of the integral of pressure (red squares), and the negative of the resulting total drag (green stars) as a function of the location of the Outlet boundary for \(\mathrm{Re}_{L}=12,200\). All calculated force integrals are non-dimensionalized in the form of a drag coefficient. The dashed black line is the mean of the negative of the total drag force values. on the Inlet and use an average of the two (\(p_{{}_{\rm Inlet}}(y)=(p^{+}(y)+p^{-}(y))/2\)) for the pressure distribution at \(x/L=-0.35\) (Fig. 10(c)) which is nearly constant at \(p_{\infty}\). From there, we integrate Eq. (16) in the positive direction from \(\mbox{\raisebox{-0.86pt}{1}}\rightarrow\mbox{\raisebox{-0.86pt}{3}}\) and \(\mbox{\raisebox{-0.86pt}{2}}\rightarrow\mbox{\raisebox{-0.86pt}{4}}\) on the Front and Back to calculate the pressure distribution \(p(x)\) on either boundary. For boundaries at \(|y|/L>=0.08\) the pressure distribution calculated this way is identical to the pressure calculated from the Bernoulli equation (flow response is inviscid). Below that as one gets closer to the wall, the viscous effects result in larger and larger deviations from that of the Bernoulli equation. Now, knowing the pressure at points \(\mbox{\raisebox{-0.86pt}{3}}\) and \(\mbox{\raisebox{-0.86pt}{4}}\) (Fig. 10(a)), then we once again use Eq. (17) and integrate it in both positive (\(\mbox{\raisebox{-0.86pt}{3}}\rightarrow\mbox{\raisebox{-0.86pt}{4}}\)) and negative (\(\mbox{\raisebox{-0.86pt}{4}}\rightarrow\mbox{\raisebox{-0.86pt}{3}}\)) directions and use an average of the two (\(p_{{}_{\rm Outlet}}(y)=(p^{+}(y)+p^{-}(y))/2\)) for the pressure distribution on the Outlet. A few examples are shown in Fig. 
10(c) for pressure distribution at different Outlet positions (\(x_{{}_{\rm Outlet}}\)). The size of the control volume should not matter in the calculation of the \(D_{\rm CV}\) as long as all the forces applied to the control volume are accounted for. Thus, we choose the boundary of the Front and Back position to be far away from the boundary layer so that there are no shear stresses applied on those boundaries. Here we present the results for the Front and Back boundaries fixed at \(|y|/L=0.12\). Similarly, we keep the Inlet far from the leading edge where the inlet velocity is nearly constant at \(x/L=-0.35\) (Fig. 10(b)) and move the outlet boundary from the trailing edge of the sample up to \(x/L=1.42\). The choice of the location of the Inlet allows for the distribution of the pressure and momentum terms (\(M\)) to be constant along this boundary as shown in Figs. 10(b) and 10(c) (grey diamonds). However, along the Outlet, the momentum distribution (\(\rho(u^{2}+\overline{u^{\prime}u^{\prime}})\)) follows the form of velocity deficits expected from a wake (Fig. 10(b)). As a result, at the trailing edge of the sample, the pressure is also lower and raises as one moves further away from the trailing edge. This pressure distribution is visible very close to the trailing edge and as one moves away and the wake diffuses away, the pressure reaches to near constant at \(x/L\approx 1.3\) but does not fully recover to \(p_{\infty}\) (the inviscid velocity also stays larger than \(U_{\infty}\) and does not fully recover within the region of study). On the Front and Back the distributions of the momentum (\(\rho(uv+\overline{u^{\prime}v^{\prime}})\), see Fig. 10(d)) follow very similar trends (just with opposite signs) with Front(Back) experiencing an increase(decrease) due to flow being pushed away from the sample at the leading edge area and then a decrease(increase) past the trailing edge due to the flow being pulled toward the center-line. However, the small angle of attack results in the two momentum distributions being symmetric about a non-zero value where in the leading edge area the momentum term is positive on the \(y/L=0.12\) line while it is zero on the \(y/L=-0.12\). Within a control volume, in addition to momentum, mass also needs to be conserved. However a quick survey of the flow rates in and out of the 4 boundaries of the control volume shown in Fig. 10(a) shows that the total mass flow in and out of the control volume is not strictly zero and there is potential leakage in the up/down ward directions due to the three-dimensional nature of the problem. Thus, to account for this, we assume that the control volume is 3D where the fifth and sixth boundaries are at \(z=0\) (Bottom) and \(z=b\) (Top) locations. Even with access to 2D velocity distribution, we can use the continuity equation to find the total mass flux across both the Top and Bottom boundaries as \[\dot{m}=-\sum_{i}\int_{S_{i}}\mathbf{\tilde{u}.n}dA \tag{18}\] where \(i\in[\text{Inlet},\text{ Outlet},\text{ Front},\text{ Back}]\). Then one can estimate the momentum flux through these two surfaces as \[M_{\text{\tiny Top+Bottom}}=\dot{m}\overline{u} \tag{3.19}\] with \(\overline{u}\) as an average of the \(u\) along all other four boundaries \[\overline{u}=\frac{1}{4}\left(\sum_{i}\frac{\int ud\ell}{\int d\ell}\right) \tag{3.20}\] Therefore Eq. 
Therefore Eq. (14) is updated to \[-D_{\text{CV}}\ +\overbrace{\int_{S_{\text{\tiny Inlet}}}pdA-\int_{S_{\text{\tiny Outlet}}}pdA}^{\text{Pressure Force on Boundaries}}=\overbrace{\sum_{i}\int_{S_{i}}\underbrace{\rho\tilde{u}(\tilde{\mathbf{u}}\cdot\mathbf{n})}_{M}dA_{i}+M_{\text{\tiny Top+Bottom}}}^{\text{Momentum}} \tag{21}\] Putting all the terms together, we can find the total of the pressure forces, the total of the momentum contributions, and the \(D_{\rm CV}\) experienced by the sample as a function of the Outlet position (\(x_{\rm Outlet}\)), as presented in Fig. 10(e). As can be seen, very close to the trailing edge (\(1<x/L<1.1\)) the negative of the total pressure force and the total integral of the momentum terms are not constant; both go through a variation with a reduction in magnitude, but their sum results in a constant force experienced by any control volume with an Outlet plane located within \(1<x_{\rm Outlet}/L<1.12\). Past this point, the variations in the (negative of the) pressure force and in the integral of the momentum contribution subside. However, the scatter in the pressure term increases due to the increase in numerical errors in the calculation of the derivatives of \(v\) in the wake area, where the magnitude of the velocity decreases as \(x\) is increased. In addition, both the momentum and pressure terms appear to oscillate about a constant mean force (dashed black line in Fig. 10(e)). Also, as was shown in Fig. 4, from \(x/L\approx 1.05\) the fluctuation terms start to gain strength and the flow slowly becomes more turbulent, with the location of the largest turbulent kinetic energy and Reynolds shear stress being around \(x/L\approx 1.15\). The appearance and enhancement of the turbulence statistics coincide with the region of this oscillatory behavior in the force calculation and require further investigation as to its nature. In addition, \(x/L\approx 1.1-1.12\) is also the ending point of the separation bubble behind the sample, and the vortex shedding behind this point could possibly affect the results beyond \(x/L\approx 1.12\) (Chopra & Mittal, 2019), which needs to be further investigated using higher-frequency or fully time-resolved measurements. Lastly, the total drag force is also affected by the three-dimensional nature of the sample and the flow, which is not fully captured with a 2D-2C PIV measurement. This total load can be found from the load-cell measurements that we conducted simultaneously with the PIV measurements, and a summary of all the forces is presented in terms of drag coefficients, \(C_{D}=D/(\tfrac{1}{2}\rho U_{\infty}^{2}(2Lb))\), in Fig. 11(a). As shown in the figure, each level of the analysis presented here allows us to capture the contributions of the different phenomena to the drag force, from the viscous and pressure parts to the effects of the finite 3D nature of the sample. As expected, \(D_{\text{viscous}}<D_{\text{CV}}<D_{\text{total}}\). On average, the viscous part of the drag is about \(40-45\%\) and the form drag is about \(30-38\%\) of the total drag, leaving about \(25-30\%\) to the 3D effects of the sample. Overall, due to the slender nature of the sample, the viscous drag takes the largest portion of the total drag; however, the pressure drag is not negligible (as is usually the assumption when dealing with flow past a flat plate) and its contribution rises as the Reynolds number is increased.
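With the pressure profiles, the momentum-flux integrals, and the Top+Bottom correction in hand, Eq. (21) reduces to simple bookkeeping. The sketch below gathers these pieces per unit span and reports the result as a drag coefficient consistent with the normalization used in Fig. 11; the input names and property values are placeholders for illustration only.

```python
import numpy as np

def drag_from_control_volume(y, p_inlet, p_outlet, momentum_bounds, m_top_bottom,
                             rho=998.0, U_inf=0.4, L=0.3):
    """Assemble Eq. (21) per unit span: D_CV equals the net pressure force on the
    Inlet/Outlet minus the total momentum flux (including the Top+Bottom term).

    `momentum_bounds` is a list of (s, M) pairs with M = rho*u*(u.n) (mean plus
    Reynolds-stress contributions) evaluated along each of the four boundaries.
    """
    pressure_force = np.trapz(p_inlet, y) - np.trapz(p_outlet, y)
    momentum_flux = sum(np.trapz(M, s) for s, M in momentum_bounds) + m_top_bottom
    D_cv = pressure_force - momentum_flux                 # rearranged Eq. (21)
    C_D = D_cv / (0.5 * rho * U_inf**2 * 2.0 * L)         # same as D/(0.5*rho*U^2*2*L*b)
    return D_cv, C_D
```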
While the drag coefficient due to viscous effects follows a \(\mathrm{Re}_{L}^{-0.5}\) trend, the drag coefficient from the control volume follows a lower rate of change, close to a \(\mathrm{Re}_{L}^{-0.2}\) trend. As the Reynolds number is increased, the \(C_{D}\) due to form drag decreases more slowly than the \(C_{D}\) due to viscous drag. Ultimately, the total drag coefficient also follows a slower rate of decrease than the \(\mathrm{Re}_{L}^{-0.5}\) trend from boundary layer theory. In addition, the total viscous drag turns out to be slightly larger than the total viscous drag from the Blasius solution; this can be attributed to the shear stress distribution along the length of the sample being first higher, then lower, and then higher again than that of the Blasius solution, so that the differences largely cancel each other in the integral. We can use a similar process to also calculate the lift forces that the sample experiences as the result of the small angle of attack and the asymmetry in the flow. For this, the lift is decomposed into the lift due to the pressure difference between the Front and Back boundaries of the control volume (\(L_{\mathrm{pressure}}\)), the lift due to the pressure and momentum contributions together (\(L_{\mathrm{CV}}\)), and ultimately the total lift from the load cell; these are presented in Fig. 11(b). (Note that the lift is normalized by the area of one side of the sample, while drag is normalized by the entire wetted surface area \(2Lb\).) Clearly, the slight asymmetry in the flow is able to produce detectable lift values for all the cases, with \(L_{\mathrm{CV}}/D_{\mathrm{CV}}=0.3\), \(0.24\), and \(0.49\), and \(L_{\mathrm{Total}}/D_{\mathrm{Total}}=0.26\), \(0.19\), and \(0.65\), respectively, for the three cases investigated. However, with the angle of attack being less than \(1^{\circ}\), the assumption of \(D\approx F_{x}\) and \(L\approx F_{y}\) is valid, as the contributions from \(F_{y}\) to the drag or \(F_{x}\) to the lift would only account for about \(0.1\%\) of the total values.

Figure 11: (a) Drag coefficient and (b) lift coefficient for the sample at three different Reynolds numbers, decomposed in terms of various effects. Drag is decomposed in terms of the viscous drag, the drag calculated using the control volume, and the total drag measured using the load cell. Lift is decomposed in terms of the pressure difference between the Front and Back, the lift found from the control volume analysis, and the lift measured with the load cell. Error bars represent the 95% confidence intervals of the load cell measurements; they decrease as the magnitude of the measured load increases.

## 4 Conclusion

Here we present a double-light sheet, consecutive-overlapping imaging strategy to perform particle image velocimetry experiments with opaque samples that are larger than the field of view of the imaging. We present steps to perform such experiments using only one laser light source and one camera, which is more cost-effective than increasing the number of light sources and/or cameras. Using this method, one can gather data both in the near-wall region and in the far field of flow past arbitrary objects, and the results can be used effectively for understanding the characteristics of the kinematics and dynamics of different types of external flow problems. To demonstrate the capabilities of this technique, we present the results of experiments performed with a slender flat plate sample that is streamlined at the leading edge at three Reynolds numbers.
We present the full view of the velocity distribution and the resulting turbulent kinetic energy and Reynolds shear stress distributions in the wake of the sample. Access to the high-resolution data in the far field of the sample, combined with a potential flow model, is used to find the angle of attack of the sample, which is less than one degree and was not visible to the eye during the experimental setup. With access to the high-resolution details of the flow in the boundary layers, we explore the characteristics of the velocity profiles as a function of the local Reynolds number. Employing the family of Falkner-Skan boundary layer solutions and the parameter \(m\) of this theory, we fit this model locally to the velocity profiles. The local distribution of \(m\) as a function of the local Reynolds number shows that, for a slim plate with finite thickness and finite length, the behavior of the boundary layer does not follow the Blasius solution. Near the leading edge the profiles are more attached to the wall (\(m>0\)); \(m\) then decreases and the profiles become more detached (\(m<0\)), with the lowest \(m\) occurring slightly after the end of the elliptic leading edge (\(x/L\approx 0.34\)). Moving toward the trailing edge, \(m\) increases and eventually returns to \(m>0\), so that the profiles are again more attached than the Blasius solution over the remaining length of the plate. This behavior is similar on both sides of the plate; however, due to the slight angle of attack, the profiles are more attached on the Back of the sample than on the Front. From this, we calculate the local shear stress distribution from the velocity measurements, which closely follows the distribution of \(m\). Close to the leading edge the plate experiences shear stress levels higher than those captured by the Blasius solution, slightly after the end of the elliptical leading edge the shear stress falls below the Blasius solution, and toward the trailing edge it increases again and rises above the Blasius solution. Again, we see that due to the limited length of the plate, the flow cannot be fully captured by first-order boundary layer theory. In addition, the velocity and shear stress distributions can be used effectively to calculate the total forces exerted on the experimental sample and to decompose these forces into the various phenomena at work. The integral of the shear stress offers insight into the total viscous drag force experienced by the sample, while a control volume analysis provides a cumulative measure of both viscous and form drag. A careful assessment of all momentum contributions and pressure forces is required to ensure that the control volume analysis captures the drag force on the sample irrespective of the boundaries chosen. Ultimately, using the load cell measurements, we see that the 3D nature of the sample clearly has some effect on the total forces exerted on the sample compared to what can be captured from the 2D-2C PIV analysis. Overall, this experimental platform can be effectively used to study and analyze the near- and far-field flow past objects with complex geometries.
Even without idealized flow scenarios and samples, access to the entire flow field allows us to explore various aspects of the flow ranging from extracting a more accurate measure of the angle of the attack of the flow, to better characterization of the boundary layers and the local shear stress distributions and ultimately finding the forces exerted on the sample both using the PIV data and via the load-cell. The ability to collect high-resolution data of such flows will allow us to develop better models as well as more detailed explorations of flows past complex geometries such as but not limited to textured surfaces and roughness elements. Thus, introducing a few additional optical elements such as beam splitters, and a fully computer-controlled position adjustment of the camera, offer a cost-effective way of expanding on the capabilities of a 2D-2C PIV technique without the need for additional light sources and cameras, and with a similar approach, this procedure can be expanded to PIV experiments with multi-light sheet strategies of illumination for access to hard-to-reach spaces of more complex geometric samples, and/or 2D camera sweeps for consecutive-overlapping captures of flow past larger objects. ## Acknowledgement This work is supported by the Rowland Fellows program at Harvard University. The authors thank Dr. Yeonsu Jung for his assistance and discussions in the early stage of this research. In addition, the authors would like to express gratitude to Richard Christopher Stokes for his support with the electronics, undergraduate researchers Lars Caspersen and Mayesha Soshi for their help, Dr. Prasoon Suchandra for discussions regarding the pressure calculations, and Prof. Leah Mendelson for helpful troubleshooting suggestions. ## Declaration of interests The authors report no conflict of interest.
2305.13155
Multi-scale lattice relaxation in general twisted trilayer graphenes
We present comprehensive theoretical studies on the lattice relaxation and the electronic structures in general non-symmetric twisted trilayer graphenes. By using an effective continuum model, we show that the relaxed lattice structure forms a patchwork of moir\'e-of-moir\'e domains, where a moir\'e pattern given by layer 1 and 2 and another pattern given by layer 2 and 3 become locally commensurate. The atomic configuration inside the domain exhibits a distinct contrast between chiral and alternating stacks, which are determined by the relative signs of the two twist angles. In the chiral case, the electronic band calculation reveals a wide energy window ($>$ 50 meV) with low density of states, featuring sparsely distributed highly one-dimensional electron bands. These one-dimensional states exhibit a sharp localization at the boundaries between super-moir\'e domains, and they are identified as a topological boundary state between distinct Chern insulators. The alternating trilayer exhibits a coexistence of the flat bands and a monolayer-like Dirac cone, and it is attributed to the formation of moir\'e-of-moir\'e domains equivalent to the mirror-symmetric twisted trilayer graphene.
Naoto Nakatsuji, Takuto Kawakami, Mikito Koshino
2023-05-22T15:42:24Z
http://arxiv.org/abs/2305.13155v2
# Multi-scale lattice relaxation in general twisted trilayer graphenes ###### Abstract We present comprehensive theoretical studies on the lattice relaxation and the electronic structures in general non-symemtric twisted trilayer graphenes. By using an effective continuum model, we show that the relaxed lattice structure forms a patchwork of moire-of-moire domains, where a moire pattern given by layer 1 and 2 and another pattern given by layer 2 and 3 become locally commensurate. The atomic configuration inside the domain exhibits a distinct contrast between chiral and alternating stacks, which are determined by the relative signs of the two twist angles. In the chiral case, the electronic band calculation reveals a wide energy window (\(>\) 50 meV) with low density of states, featuring sparsely distributed highly one-dimensional electron bands. These one-dimensional states exhibit a sharp localization at the boundaries between super-moire domains, and they are identified as a topological boundary state between distinct Chern insulators. The alternating trilayer exhibits a coexistence of the flat bands and a monolayer-like Dirac cone, and it is attributed to the formation of moire-of-moire domains equivalent to the mirror-symmetric twisted trilayer graphene. ## I Introduction Two-dimensional moire materials have been the focus of extensive research in recent years. These systems exhibit a long-range moire pattern resulting from lattice mismatch, which profoundly influences their electronic properties. Twisted bilayer graphene (TBG), as the most prominent example of a moire system, exhibits the generation of flat bands due to the moire superlattice effect, leading to a variety of correlated quantum phases [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In addition to the extensive study of twisted bilayers in the past decade, the scope of investigation has extended to encompass multilayer systems including three or more layers. Particular attention has recently been directed towards twisted trilayer graphene (TTG), which consists of three graphene layers arranged in a specific rotational configuration [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. The system is characterized by twist angles \(\theta^{12}\) and \(\theta^{23}\) [Fig. 2], which represent the relative rotation of layer 2 to 1, and 3 to 2, respectively. The special case of \(\theta^{12}=-\theta^{23}\) is called the mirror symmetric TTG [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], where layer 1 and layer 3 are aligned precisely, resulting in a single moire periodicity. Recent transport measurements observed correlated insulator phases and robust superconductivity in mirror-symmeric TIGs at a certain magic angle [32; 33; 34; 35; 36]. Beyond the symmetric case, TTG offers a vast parameter space that remains largely unexplored. In general TIGs with \(\theta^{12}\neq-\theta^{23}\), the system has two different moire patterns originating from the interference of layer 1 and 2 and that of layer 2 and 3 [20; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. These two periodicities are generally incommensurate, giving rise to a quasi-crystalline nature in the system [44; 49; 50]. When the two moire periods are close but slightly different, in particular, an interference of competing moire structures generate a super-long range moire-of-moire pattern [37; 38; 39; 40]. 
Similar situation occurs also in composite multilayer systems consisting of graphene and hexagonal boron nitride [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. Previous researches investigated the electronic properties of general TIGs with various angle pairs by using several theoretical approaches [20; 40; 41; 42; 43; 44; 45; 46; 47; 48]. Recent experimental study also reported superconductivity in some asymmetric TTGs [44]. Generally, twisted moire systems are under a strong influence of lattice relaxation in the moire scale, which also significantly modifies the electronic properties. In TBG, for instance, an in-plane lattice distortion forms commensurate AB(Bernal)-stacking domains [63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77], and it opens energy gaps in the electronic spectrum to isolate low-energy flat bands [72; 73; 77]. The lattice relaxation occurs also in trilayer moire systems, where the moire-of-moire period superstructure was observed [78; 79; 80]. Such a large-scale relaxation was also theoretically simulated for various trilayer systems [37; 47; 48; 60; 61; 37]. In this paper, we study the lattice relaxation and the electronic band structure in non-symmetric TTGs. TTG is classified into two groups depending on the relative direction of rotation angles; the cases of \(\theta^{12}\cdot\theta^{23}>0\) and \(<0\), which are refereed to as chiral and alternating TTGs, respectively [42; 43; 44]. Here we consider chiral and alternating TTGs having various combinations of twist angles (\(\theta^{12},\theta^{23}\)). We obtain the optimized lattice structure using the effective continuum approach used for TBG [72; 81; 82], and compute the electronic structure by a continuum band calculation method including the lattice relaxation [77]. We find that there are two distinct length-scale relaxations in the moire-of-moire and moire scales, which give rise to a formation of a patchwork of super-moire domains as schematically shown in Fig. 1. In theses domains, the first moire pattern given by layer 1 and 2 (moire 12) and the second pattern by layer 2 and 3 (moire 23) are deformed to become commensurate. The atomic configuration inside the domain exhibits a distinct contrast between chiral and alternating TTGs: In the chiral case, the two moire patterns are arranged such that the AA spots of moire 12 and those of moire 23 repel to each other, leading to shifted configurations [Fig. 1(a)]. In the alternating case, in contrast, the AA spots attract each other, resulting in a fully overlapped structure equivalent to the mirror-symmetric TTG [Fig. 1(b)]. The energetic stability of these super-moire domain formations can be explained by considering a competition of lattice relaxation in the two moire patterns. In the band calculation, we find that the spectrum of the chiral TTG has an energy window more than 50 meV wide with low density of state, where highly one-dimensinoal electron bands are sparsely distributed. The wave function of the one-dimensional bands is sharply localized at the boundary between the super-moire domains. By calculating the Chern number of the local band structure of the commensurate domain, the one-dimensional state is shown to be a topological boundary state between distinct Chern insulators. On the other hand, the alternating TTG exhibits a coexistence of the flat bands and a monolayer-like Dirac cone, resembling the energy spectrum of the mirror-symmetric TTG [32; 33; 34; 35; 36]. 
Here the moire-of-moire relaxation significantly reduces the hybridization of the Dirac cone with other states, restoring its highly-dispersive feature. The paper is organized as follows. In Sec. II, we define the lattice structure of TTG and introduce the continuum method to calculate of the lattice relaxation and the electronic band structure. In Sec. III, we investigate the chiral TTGs. We obtain the relaxed lattice structure and demonstrate the formation of the moire-of-moire domain pattern in Sec. III.1. We calculate the band structure including the lattice relaxation in Sec. III.2, where we show the emergence of the one-dimensional boundary states on the domain walls. In Sec. IV, we conduct similar analyses for the alternating TTGs. ## II Model ### Geometry of TTG We define a TTG by stacking three graphene layers labeled by \(l=1,2\) and \(3\), with relative twist angles \(\theta^{12}\) (layer 1 to 2) and \(\theta^{23}\) (layer 2 to 3). The configuration is schematically depicted in Fig. 2(a) and (b), for the chiral case (\(\theta^{12}\cdot\theta^{23}>0\)) and the alternating case (\(\theta^{12}\cdot\theta^{23}<0\)), respectively. The primitive lattice vectors of layer \(l\) are defined by \(\mathbf{a}_{i}^{(l)}=R(\theta^{(l)})\mathbf{a}_{i}\) where \(\mathbf{a}_{1}=a(1,0)\) and \(\mathbf{a}_{2}=a(1/2,\sqrt{3}/2)\) are the lattice vectors of unrotated monolayer graphene, \(a=0.246\) nm is the graphene's lattice constant. \(R\) is the rotation matrix, and \(\theta^{(l)}\) is the absolute twist angle of layer \(l\) given by \(\theta^{(1)}=-\theta^{12}\), \(\theta^{(2)}=0\) and \(\theta^{(3)}=\theta^{23}\). Accordingly, the primitive reciprocal lattice vectors become \(\mathbf{b}_{i}^{(l)}=R(\theta^{(l)})\mathbf{b}_{i}\) where \(\mathbf{b}_{1}=(2\pi/a)(1,-1/\sqrt{3})\) and \(\mathbf{b}_{2}=(2\pi/a)(0,2/\sqrt{3})\) are the reciprocal lattice vectors without rotation. The Dirac points of graphene layer \(l\) are intrinsically located at the corners of Brillouin zone (BZ), \(K_{\xi}^{(l)}=-\xi\left(2\mathbf{b}_{1}^{(l)}+\mathbf{b}_{2}^{(l)}\right)/3\) where \(\xi=\pm 1\) is the valley index. In this paper, we consider TTGs with small twist angles (\(|\theta^{12}|,|\theta^{23}|\leqslant 10^{\circ}\)). Then the system is governed by two competing moire patterns, one from the layer 1 and 2 and the other from layer 2 and 3. The reciprocal lattice vectors for these moire patterns are given by \(\mathbf{G}_{i}^{ll^{\prime}}=\mathbf{b}_{i}^{(l)}-\mathbf{b}_{i}^{(r)}\) where \((l,l^{\prime})=(1,2)\) or \((2,3)\). The moire lattice vectors can be obtained from \(\mathbf{G}_{i}^{ll^{\prime}}\cdot\mathbf{L}_{j}^{ll^{\prime}}=2\pi\delta_{ij}\), and explicitly written as \[\mathbf{L}_{1}^{12} =\frac{a}{2\sin\left(\theta^{12}/2\right)}R(-\theta^{12}/2) \begin{pmatrix}0\\ -1\end{pmatrix}\] \[\mathbf{L}_{1}^{23} =\frac{a}{2\sin\left(\theta^{23}/2\right)}R(+\theta^{23}/2) \begin{pmatrix}0\\ -1\end{pmatrix}, \tag{1}\] and \(\mathbf{L}_{i}^{ll^{\prime}}=R(60^{\circ})\mathbf{L}_{i}^{ll^{\prime}}\). The moire lattice constant is given by \(L^{ll^{\prime}}=|\mathbf{L}_{1}^{ll^{\prime}}|=|\mathbf{L}_{2}^{ll^{\prime}}|=a/|2\sin \left(\theta^{ll^{\prime}}/2\right)|\). When absolute twist angles are close (\(|\theta^{12}|\approx|\theta^{23}|\)), an interference between the two moire patterns gives rise to a higher order structure called a moire-of-moire pattern as shown in Fig. 2. 
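As a quick numerical check of Eq. (1), the two moire periods and a rough beat scale of their interference can be evaluated for any angle pair. The short sketch below is illustrative only: it tracks just the lattice constants and neglects the relative rotation of the two moire lattices, and the printed angle pair is an example used later in the paper.

```python
import numpy as np

def moire_periods(theta12_deg, theta23_deg, a=0.246):
    """Moire lattice constants L^12 and L^23 (in nm) from Eq. (1), plus a crude
    one-dimensional beat length of the two periods as a feel for the
    moire-of-moire scale (the relative moire rotation is neglected here)."""
    L12 = a / abs(2.0 * np.sin(np.radians(theta12_deg) / 2.0))
    L23 = a / abs(2.0 * np.sin(np.radians(theta23_deg) / 2.0))
    beat = np.inf if np.isclose(L12, L23) else L12 * L23 / abs(L12 - L23)
    return L12, L23, beat

print(moire_periods(1.79, 1.58))   # about (7.9 nm, 8.9 nm, 67 nm)
```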
Here the upper and lower rows Figure 1: Schematic illustration of the moiré-of-moiré domain structures in (a) chiral TTG and (b) alternating TTG with close twist angles. Right figures represents relative arrangements of two moiré patterns within the domains, where blue and red dots indicate AA stacking of moiré 12 (between layer 1 and 2) and of moiré 23 (between layer 2 and 3), respectively (See also Fig. 2). Figure 2: (a) Schematics of moiré-of-moiré pattern of chiral TTG, where blue and red dots represent AA stacking points of moiré 12 (between layer 1 and 2) and of moiré 23 (between layer 2 and 3), respectively. The insert panel illustrates the stacking structure of a chiral TTG, where green, black and orange represent the layer 1, 2 and 3 respectively. (b) Local structures of moiré-of-moiré pattern in (a), where circles, filled triangles, and empty triangles indicate AA, AB, and BA stacking of individual moiré patterns. (c) Local atomic structures at specific points in (b), where \(A_{I}\) and \(B_{I}\) are the graphene’s sublattice in layer \(I\). The lower panels [(d), (e) and (f)] are the corresponding figures for the alternate TTG. correspond to the chiral and alternating structures, respectively. For the chiral twist, the left panel [Fig. 2(a)] illustrates the overlapped moire patterns where blue and red dots represent the AA spots of moire 12 and 23, respectively. The local structure can be viewed as a pair of non-twisted moire superlattices with a relative translation, as illustrated in Fig. 2(b). Here shaded and empty triangles represent AB, and BA stacking regions of individual moire patterns, respectively. By defining AB and BA points (the centers of triangles) by \(\alpha\) and \(\beta\), respectively, the local stacking configuration of the two moire patterns is labeled by \(\alpha\alpha\), \(\alpha\beta\) and \(\beta\alpha\). Figure 2(c) depicts the local structure in the atomic scale. Here \(A_{I}\) and \(B_{I}\) represent the graphene's sublattice in layer \(I\). We define the sublattice \(C_{I}\) as the center of the hexagon in the honeycomb lattice. For instance, BAC-stacking represents \(B_{1}\), \(A_{2}\) and \(C_{3}\) are vertically aligned. The lower panels [Figs. 2(d), (e) and (f)] are the corresponding figures for the alternate twist. The key difference from the chiral case lies in the \(180^{\circ}\) rotation of the moire 23 (red lattice) due to the opposing sign of \(\theta^{23}\). This results in the flipping of the positions of AB and BA. Consequently, the local atomic structure (shown in the rightmost panels) differs between the chiral and alternating structures, even though the relative arrangement of AA spots is identical. We define AB and BA points in the inverted moire 23 pattern by \(\beta^{\prime}\) and \(\alpha^{\prime}\), respectively, and label the local structure in the alternating TTG by \(\alpha\alpha^{\prime}\), \(\alpha\beta^{\prime}\) and \(\beta\alpha^{\prime}\), as in Fig. 2(e). ### Commensurate TTGs Generally the two moire patterns in a TTG are not commensurate, and the spatial period of moire-of-moire pattern is infinite. However, there are special angle sets (\(\theta^{12},\theta^{23}\)) where the two patterns happen to have a finite common period. 
In such a case, we can express the moire-of-moire primitive lattice vectors \(\mathbf{L}_{1}\) and \(\mathbf{L}_{2}\) in terms of integers \(n,m,n^{\prime}\) and \(m^{\prime}\) as \[\mathbf{L}_{1} = n\mathbf{L}_{1}^{12}+m\mathbf{L}_{2}^{12}=n^{\prime}\mathbf{L}_{1}^{23}+m^{ \prime}\mathbf{L}_{2}^{23},\] \[\mathbf{L}_{2} = R(60^{\circ})\mathbf{L}_{1}. \tag{2}\] The moire-of-moire reciprocal lattice vectors are given by the condition \(\mathbf{G}_{l}\cdot\mathbf{L}_{j}=2\pi\delta_{ij}\). The corresponding twist angles are obtained by solving Eqs. (1) and (2) for Figure 3: (a) Two-dimensional map of (\(\theta^{12},\theta^{23}\)) of TTGs considered in this paper. The color code represents the ratio of the two moiré periods, \(\min\left(L^{12}/L^{23},L^{23}/L^{12}\right)\). Diagonal dashed lines indicate \(\theta^{12}=\pm\theta^{23}\), and a horizontal dashed line represents twisted monolayer-bilayer graphene (tMBG). (Right) Moiré-of-moiré patterns without lattice relaxation of (b) chiral TTGs (C1, C2 and C3) and (c) alternating TTGs (A1, A2 and A3). Blue and red dots indicate the AA spot of moiré 12 (between layer 1 and 2) and moiré 23 (between layer 2 and 3) respectively, and gray area represents the moiré-of-moiré unit cell. All scale bars indicate 20 nm. variables \(\theta^{12}\) and \(\theta^{23}\), as \[\theta^{12}=\theta(n,m,n^{\prime},m^{\prime}),\quad\theta^{23}=-\theta(n^{\prime},m^{\prime},n,m), \tag{3}\] where \[\theta(n,m,n^{\prime},m^{\prime})=\] \[2\tan^{-1}\frac{\sqrt{3}\left\{m\left(2n^{\prime}+m^{\prime} \right)-\left(2n+m\right)m^{\prime}\right\}}{\left(2n+m\right)\left(2n^{ \prime}+m^{\prime}\right)+3mm^{\prime}+\left(2n^{\prime}+m^{\prime}\right)^{2 }+3m^{\prime\prime}}. \tag{4}\] The spatial period of the super-moire pattern is given by \(L=L^{12}\sqrt{n^{2}+m^{2}+nm}=L^{23}\sqrt{n^{\prime 2}+m^{\prime 2}+n^{\prime}m^{ \prime}}\). In alternating TTGs with \(\theta^{12}\approx-\theta^{23}\), the relative angle between two moire lattice vectors nearly vanishes, resulting in an extremely large commensurate moire-of-moire unit cell. To treat such cases, we neglect the tiny misorientation of the moire lattice vectors \(\mathbf{L}_{j}^{12}\) and \(\mathbf{L}_{j}^{23}\), while retaining their norms. In this approximation, the moire-of-moire commensurate period is expressed as \[\mathbf{L}_{1}=n\mathbf{L}_{1}^{12}=n^{\prime}\mathbf{L}_{1}^{23},\quad\mathbf{L}_{2}=R(60^{ \circ})\mathbf{L}_{1}, \tag{5}\] instead of Eq.(2). Note that Eq. (3) does not apply to this approximate commensurate structure. In this paper, we consider commensurate chiral TTGs, C1, C2 and C3, and commensurate alternating TTGs, A1, A2 and A3, defined in Table 1. We employ the exact commensurate formulas Eqs. (2) and (3) for C1, C2, C3, and A3, while we utilize the approximate formula, Eq. (5) for A1 and A2. Figure 3(a) maps \((\theta^{12},\theta^{23})\) of these systems in two-dimensional space, where the color code represents the ratio of the two moire periods, \(\min\left(L^{12}/L^{23},L^{23}/L^{12}\right)\). The moire-of-moire structures of these TTGs without lattice relaxation are illustrated in Fig. 3(b) and (c), respectively. We show the schematics of Brillouin zone (BZ) of a chiral TTG in Fig. 4. Here green, black and orange hexagons represent the first BZ of layer 1, 2, and 3, respectively. Blue and red hexagons represent the BZ for the first moire patterns given by \(l=1,2\) and the second pattern given by \(l=2,3\), respectively. 
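The commensuration condition of Eqs. (2)-(4) is easy to verify numerically for a given integer set \((n,m,n^{\prime},m^{\prime})\). The sketch below assumes that the last term in the denominator of Eq. (4) is \(3m^{\prime 2}\) (the printed exponent appears garbled); with that reading, the C1 integers of Table 1 reproduce the quoted angle pair.

```python
import numpy as np

def theta(n, m, np_, mp):
    """Eq. (4) in degrees, assuming its final denominator term is 3*m'**2."""
    num = np.sqrt(3.0) * (m * (2 * np_ + mp) - (2 * n + m) * mp)
    den = ((2 * n + m) * (2 * np_ + mp) + 3 * m * mp
           + (2 * np_ + mp) ** 2 + 3 * mp ** 2)
    return np.degrees(2.0 * np.arctan(num / den))

def commensurate_pair(n, m, np_, mp):
    """Eq. (3): (theta12, theta23) for the integer set (n, m, n', m')."""
    return theta(n, m, np_, mp), -theta(np_, mp, n, m)

print(commensurate_pair(2, 7, 2, 6))   # C1: approximately (1.79, 1.58) degrees
```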
Finally, the gray hexagon is the BZ of the moire-of-moire pattern, where we label the corner points by \(\kappa\) and \(\kappa^{\prime}\), the midpoint of a side by \(\mu\) and the center by \(\gamma\). ### Continuum method for multi-scale lattice relaxation We adopt a continuum approximation [81; 72; 82] to describe the lattice relaxation on TTG. Let \(\mathbf{s}^{(l)}(\mathbf{R}_{\mathcal{X}})\) be the displacement vector of sublattice \(X=A\), or \(B\) at a two-dimensional position \(\mathbf{R}_{\mathcal{X}}\) of layer \(l=1,2,3\). Here we consider a long-rage lattice relaxation which has much longer scales than graphene's lattice constant. The displacement vectors can then be expressed by continuous functions in real space as \(\mathbf{s}^{(l)}(\mathbf{R}_{A})=\mathbf{s}^{(l)}(\mathbf{R}_{B})=\mathbf{s}^{(l)}\left(\mathbf{r}\right)\). We ignore the out-of-plane component of the displacement vector in this model, as it does not much contribute to the commensurate domain formation. The optimized lattice structure can be obtained by minimizing the total energy \(U=U_{E}+U_{B}^{12}+U_{B}^{23}\), where \(U_{E}\) is the elastic energy and \(U_{B}^{ll}\) is the interlayer binding energy between layers \(l\) and \(l^{\prime}\). We assume that \(U_{B}^{12}\) and \(U_{B}^{23}\) are given by the interlayer interaction energy of the twisted bilayer graphene [72], and neglect a remote interaction between layer 1 and 3. The \(U_{E}\) and \(U_{B}^{ll^{\prime}}\) can be expressed as functionals of the displacement field \(\mathbf{s}^{(l)}\left(\mathbf{r}\right)\). We solve the Euler-Lagrange equation to obtain the optimized \(\mathbf{s}^{(l)}\left(\mathbf{r}\right)\) self-consistently. The elastic energy of strained TTG is written in a stan Figure 4: Brillouin zone of chiral TTG. Green, black and orange hexagons represent the first Brillouin zone of graphene layer 1, 2, and 3, respectively. Blue and red hexagons represent the BZ for the moire patterns given by \(l=1,2\) and that by \(l=2,3\), respectively. Gray hexagon is the BZ of the moiré-of-moiré pattern. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \((\theta^{12},\theta^{23})\) & \((n,m,n^{\prime},m^{\prime})\) & \(L^{12}/L^{23}\) \\ \hline C1 & \((1.79^{\circ},1.58^{\circ})\) & \((2,7,2,6)\) & \(0.88\) \\ C2 & \((2.64^{\circ},2.45^{\circ})\) & \((7,7,7,6)\) & \(0.93\) \\ C3 & \((1.54^{\circ},0.64^{\circ})\) & \((7,5,3,2)\) & \(0.42\) \\ \hline A1 & \((1.48^{\circ},-1.18^{\circ})\) & \((5,0,-4,0)^{\ast}\) & \(0.80\) \\ A2 & \((1.42^{\circ},-1.22^{\circ})\) & \((7,0,-6,0)^{\ast}\) & \(0.86\) \\ A3 & \((1.47^{\circ},-0.62^{\circ})\) & \((7,12,-3,-5)\) & \(0.42\) \\ \hline \end{tabular} \end{table} Table 1: Definition of commensurate chiral TTGs (C1, C2, C3) and commensurate alternating TTGs (A1, A2, A3) considered in this paper. The asterisk (*) symbol for A1 and A2 indicates the use of the approximation of Eq. (5) to obtain the commensurate structures. dard form [83; 84] as \[U_{E}=\sum_{l=1}^{3}\frac{1}{2}\int\left[\left(\mu+\lambda\right) \left(s_{xx}^{(l)}+s_{yy}^{(l)}\right)^{2}\right.\] \[\left.+\mu\left\{\left(s_{xx}^{(l)}-s_{yy}^{(l)}\right)^{2}+4 \left(s_{xy}^{(l)}\right)^{2}\right)\right\}\mathrm{d}^{2}\mathbf{r}, \tag{6}\] where \(\lambda=3.25\) eV/\(\AA^{2}\) and \(\mu=9.57\) eV/\(\AA^{2}\) are graphene's Lame factors[70; 85], and \(s_{ij}^{(l)}=(\partial_{i}s_{j}^{(l)}+\partial_{j}s_{i}^{(l)})/2\) is the strain tensor. 
The interlayer binding energy of adjacent layers \((l,l^{\prime})=(1,2),(2,3)\) is given by [72] \[U_{B}^{ll^{\prime}}=\int\mathrm{d}^{2}\mathbf{r}\sum_{j=1}^{3}2V_{0} \cos\left[\mathbf{G}_{j}^{ll^{\prime}}\cdot\mathbf{r}+\mathbf{b}_{j}\cdot\left(\mathbf{s}^{( l)}-\mathbf{s}^{(l)}\right)\right], \tag{7}\] where \(\mathbf{b}_{3}=-\mathbf{b}_{1}-\mathbf{b}_{2}\), \(\mathbf{G}_{3}^{ll^{\prime}}=-\mathbf{G}_{1}^{ll^{\prime}}-\mathbf{G}_{2}^{ll^{\prime}}\). We take \(V_{0}=0.160\) eV/nm\({}^{2}\)[86; 87]. We introduce \[\mathbf{w} =\mathbf{s}^{(1)}+\mathbf{s}^{(2)}+\mathbf{s}^{(3)}\] \[\mathbf{u} =\mathbf{s}^{(1)}-2\mathbf{s}^{(2)}+\mathbf{s}^{(3)}\] \[\mathbf{v} =\mathbf{s}^{(1)}-\mathbf{s}^{(3)}, \tag{8}\] and rewrite \(U\) as a functional of \(\mathbf{w},\mathbf{u}\) and \(\mathbf{v}\). Here \(\mathbf{w}\) represents an overall translation of three layers, while \(\mathbf{u}\) and \(\mathbf{v}\) are relative slidings which are mirror-even and odd, respectively, with respect to the middle layer. In the subsequent analysis, we fix \(\mathbf{w}\) to zero and focus solely on \(\mathbf{u}\) and \(\mathbf{v}\), as \(\mathbf{w}\) does not alter the interlayer registration and therefore does not impact the formation of moire domains. The Euler-Lagrange equation is written as \[\hat{\mathbf{K}}\mathbf{u}+6V_{0}\sum_{j=1}^{3}\left\{\sin\left[\mathbf{G}_{ j}^{12}\cdot\mathbf{r}-\mathbf{b}_{j}\cdot\left(\mathbf{u}+\mathbf{v}\right)/2\right]\right.\] \[\left.+\sin\left[\mathbf{G}_{2}^{23}\cdot\mathbf{r}+\mathbf{b}_{j}\cdot\left( \mathbf{u}-\mathbf{v}\right)/2\right]\right\}\mathbf{b}_{j}=0 \tag{9}\] \[\hat{\mathbf{K}}\mathbf{v}+2V_{0}\sum_{j=1}^{3}\left\{\sin\left[\mathbf{G}_{ j}^{12}\cdot\mathbf{r}-\mathbf{b}_{j}\cdot\left(\mathbf{u}+\mathbf{v}\right)/2\right]\right.\] \[\left.-\sin\left[\mathbf{G}_{j}^{23}\cdot\mathbf{r}+\mathbf{b}_{j}\cdot\left( \mathbf{u}-\mathbf{v}\right)/2\right]\right\}\mathbf{b}_{j}=0, \tag{10}\] where \[\hat{\mathbf{K}}=\left(\begin{array}{cc}\left(\lambda+2\mu\right) \partial_{x}^{2}+\mu\partial_{y}^{2}&\left(\lambda+\mu\right)\partial_{x} \partial_{y}\\ \left(\lambda+\mu\right)\partial_{x}\partial_{y}&\left(\lambda+2\mu\right) \partial_{y}^{2}+\mu\partial_{x}^{2}\end{array}\right). \tag{11}\] We assume \(\mathbf{s}^{(l)}\)'s (so \(\mathbf{u}\) and \(\mathbf{v}\)) are periodic in the original moire-of-moire period, and define the Fourier components as \[\mathbf{u}\left(\mathbf{r}\right)=\sum_{\mathbf{G}}\mathbf{u}_{\mathbf{G}}\mathrm{e}^{\mathrm{i} \mathbf{G}\cdot\mathbf{r}},\quad\mathbf{v}\left(\mathbf{r}\right)=\sum_{\mathbf{G}}\mathbf{v}_{\mathbf{G }}\mathrm{e}^{\mathrm{i}\mathbf{G}\cdot\mathbf{r}}, \tag{12}\] where \(\mathbf{G}=m_{1}\mathbf{G}_{1}+m_{2}\mathbf{G}_{2}\) are the moire-of-moire reciprocal lattice vectors. We also introduce \(f_{\mathbf{G},j}^{ll^{\prime}}\) by \[\sin\left[\mathbf{G}_{j}^{12}\cdot\mathbf{r}-\mathbf{b}_{j}\cdot\left(\mathbf{u }+\mathbf{v}\right)/2\right] =\sum_{\mathbf{G}}f_{\mathbf{G},j}^{12}\mathrm{e}^{\mathrm{i}\mathbf{G}\cdot \mathbf{r}},\] \[\sin\left[\mathbf{G}_{j}^{23}\cdot\mathbf{r}+\mathbf{b}_{j}\cdot\left(\mathbf{u}- \mathbf{v}\right)/2\right] =\sum_{\mathbf{G}}f_{\mathbf{G},j}^{23}\mathrm{e}^{\mathrm{i}\mathbf{G}\cdot \mathbf{r}}. \tag{13}\] Eq. 
(9) is then written as \[\mathbf{u}_{\mathbf{G}} =-6V_{0}\sum_{j=1}^{3}\left(f_{\mathbf{G},j}^{12}+f_{\mathbf{G},j}^{23} \right)\hat{\mathbf{K}}_{\mathbf{G}}^{-1}\mathbf{b}_{j},\] \[\mathbf{v}_{\mathbf{G}} =-2V_{0}\sum_{j=1}^{3}\left(f_{\mathbf{G},j}^{12}-f_{\mathbf{G},j}^{23} \right)\hat{\mathbf{K}}_{\mathbf{G}}^{-1}\mathbf{b}_{j}, \tag{14}\] where \[\hat{\mathbf{K}}_{\mathbf{G}}=\left(\begin{array}{cc}\left(\lambda+2\mu\right)G_{x}^ {2}+\mu G_{y}^{2}&\left(\lambda+\mu\right)G_{x}G_{y}\\ \left(\lambda+\mu\right)G_{x}G_{y}&\left(\lambda+2\mu\right)G_{y}^{2}+\mu G_{x}^ {2}\end{array}\right). \tag{15}\] We obtain the optimized \(\mathbf{u}_{\mathbf{G}}\) and \(\mathbf{v}_{\mathbf{G}}\) by solving Eqs. (13) and (14) in an iterative manner. In the calculation, we only consider a finite number of the Fourier components in \(|\mathbf{G}|<3\max\left(|\mathbf{n}|,|\mathbf{m}|,|\mathbf{n}^{\prime}|,|\mathbf{m}^{\prime}| \right)\), which are sufficient to describe the lattice relaxation in the systems considered. It should be noted that the components of \(\mathbf{G}=0\) cannot be determined by this scheme, since \(\hat{\mathbf{K}}_{\mathbf{G}}\) becomes \(0\) in Eq. (14). Here we treat \(\mathbf{s}_{\mathbf{G}=0}^{(l)}\) as parameters, and perform the above iteration for different parameter choices. We finally choose the solution having the lowest total energy. The dependence on \(\mathbf{G}=0\) component arises because the moire-of-moire structure depends on a relative translation of the two moire patterns, and hence it cannot be eliminated by a shift of the origin unlike twisted bilayer graphene. Practically, it is sufficient to consider only the lateral sliding of layer \(3\) with other two layers fixed. ### Continuum Hamiltonian with lattice relaxation We compute the band structure of the TTGs by using an electronic continuum model [88; 89; 90; 91; 92; 93] that incorporates lattice relaxation [77]. The effective Hamiltonian for valley \(\xi\) is written as \[H^{(\xi)}=\left(\begin{array}{cc}H_{1}\left(\mathbf{k}\right)&U_{21}^{\dagger}\\ U_{21}&H_{2}\left(\mathbf{k}\right)&U_{32}^{\dagger}\\ U_{32}&H_{3}\left(\mathbf{k}\right),\end{array}\right). \tag{16}\] The matrix works on a six-component wave function \((\mathbf{\psi}_{A}^{(1)},\mathbf{\psi}_{B}^{(1)},\mathbf{\psi}_{A}^{(2)},\mathbf{\psi}_{B}^{(2)}, \mathbf{\psi}_{A}^{(3)},\mathbf{\psi}_{B}^{(3)})\), where \(\mathbf{\psi}_{A}^{(l)}\) represents the envelope function of sublattice \(X(=A,B)\) on layer \(l(=1,2,3)\). The \(H_{l}(\mathbf{k})\) is the \(2\times 2\) Hamiltonian of monolayer graphene and \(U_{ll^{\prime}}\) is the interlayer coupling matrix, in the presence of the lattice distortion. The \(H_{l}(\mathbf{k})\) is given by \[H_{l}(\mathbf{k})=-\hbar v\left[R\left(\theta^{(l)}\right)^{-1}\left(\mathbf{k}-\mathbf{K}_{ \xi}^{(l)}+\frac{e}{\hbar}\mathbf{A}^{(l)}\right)\right]\cdot\mathbf{\sigma}, \tag{17}\] where \(v\) is the graphene's band velocity, \(\mathbf{\sigma}=\left(\xi\sigma_{x},\sigma_{y}\right)\) and \(\sigma_{x}\), \(\sigma_{y}\) are the Pauli matrices in the sublattice space \((\mathbf{A},\mathbf{B})\). We take \(\hbar v/a=2.14\) eV [94]. The \(\mathbf{A}^{(l)}\) is the strain-induced vector potential that is given by [95, 96, 83] \[\mathbf{A}^{(l)} = \xi\frac{3}{4}\frac{\beta\gamma_{0}}{ev}\begin{pmatrix}s_{xx}^{( l)}-s_{yy}^{(l)}\\ -2s_{xy}^{(l)}\end{pmatrix}, \tag{18}\] where \(\gamma_{0}=2.7\) eV is the nearest neighbor transfer energy of intrinsic graphene and \(\beta\approx 3.14\). 
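For a concrete feel of Eq. (18), the strain-induced gauge field of a single layer can be evaluated directly from its strain-tensor components. The snippet below returns the combination \(ev\mathbf{A}^{(l)}\) (which carries units of energy) using the parameter values quoted above; the function name and the example strain are illustrative assumptions only.

```python
import numpy as np

BETA, GAMMA0 = 3.14, 2.7   # dimensionless beta and gamma_0 in eV, as quoted above

def strain_gauge_field(s_xx, s_yy, s_xy, xi=+1):
    """Eq. (18) written as e*v*A^(l) (in eV) for valley xi = +/-1, given the
    in-plane strain-tensor components of one layer."""
    return xi * 0.75 * BETA * GAMMA0 * np.array([s_xx - s_yy, -2.0 * s_xy])

# e.g. a 0.5% uniaxial stretch along x, K+ valley: about (0.032 eV, 0)
print(strain_gauge_field(0.005, 0.0, 0.0, xi=+1))
```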
The interlayer coupling matrix \(U_{21}\) and \(U_{32}\) are given by \[U_{lI}=\sum_{j=1}^{3}U_{j}\mathrm{e}^{\mathrm{i}\xi\delta\mathbf{k}_{j}^{(l^{ \prime})}\cdot\mathbf{r}+\mathrm{i}\mathbf{Q}_{j}\cdot\left(\mathbf{s}^{(l^{\prime})}-\mathbf{ s}^{(l)}\right)} \tag{19}\] where we defined \[\delta\mathbf{k}_{1}^{ll^{\prime}}=\mathbf{0},\quad\delta\mathbf{k}_{2}^{ll^{ \prime}}=\xi\mathbf{G}_{1}^{ll^{\prime}},\quad\delta\mathbf{k}_{3}^{ll^{\prime}}=\xi \left(\mathbf{G}_{1}^{ll^{\prime}}+\mathbf{G}_{2}^{ll^{\prime}}\right), \tag{20}\] \[\mathbf{Q}_{1}=\mathbf{K}_{\xi},\quad\mathbf{Q}_{2}=\mathbf{K}_{\xi}+\xi\mathbf{b}_{1 },\quad\mathbf{Q}_{3}=\mathbf{K}_{\xi}+\xi\left(\mathbf{b}_{1}+\mathbf{b}_{2}\right), \tag{21}\] and \[U_{1}=\left(\begin{array}{cc}u&u^{\prime}\\ u^{\prime}&u\end{array}\right),\quad U_{2}=\left(\begin{array}{cc}u&u^{ \prime}\omega^{-\xi}\\ u^{\prime}\omega^{+\xi}&u\end{array}\right),\] \[U_{3}=\left(\begin{array}{cc}u&u^{\prime}\omega^{+\xi}\\ u^{\prime}\omega^{-\xi}&u\end{array}\right). \tag{22}\] The parameters \(u=79.7\) meV and \(u^{\prime}=95.7\) meV are interlayer coupling strength between AA/BB and AB/BA stack region, respectively[94, 97]. In the band calculation, we take Fourier components within the radius of \(|\mathbf{G}|\leq 2\max\left(|\mathbf{n}|,|\mathbf{m}|,|\mathbf{n}^{\prime}|,|\mathbf{m}^{\prime}|\right)\) as the basis of Hamiltonian. We neglect remote interlayer hoppings between layer 1 and 3. ## III Chiral TTGS ### Multi-scale lattice relaxation We study the lattice relaxation in the TTGs of C1(\(1.79^{\circ},1.58^{\circ}\)), C2(\(2.64^{\circ},2.45^{\circ}\)) and C3(\(1.54^{\circ},0.64^{\circ}\)) by using the method described in Sec. II.3. Figure 5 summarizes the optimized moire structures for the three systems. In each row, the left panel shows the moire pattern 12 (given by layer 1 and 2), and the middle panel shows moire pattern 23 (by layer 2 and 3) after the relaxation. Here the color represents the local interlayer binding energy \(U_{B}^{ll^{\prime}}\), where bright and dark regions correspond to the AA stack and AB/BA stack respectively. Tiny magenta dots indicate the original AA stack points without lattice relaxation for reference. In the right-most panel, we overlap the two moire structures in a single diagram, where blue and red points represent the AA stack of the moire 12 and 23 respectively. A rhombus in each panel represents the moire-of-moire unit cell, and all scale bars indicate 20 nm. We first consider C1 and C2 which have relatively close twist angles (\(\theta^{12},\theta^{23}\)). In the rightmost panels of Fig. 5 (a) and (b), we see that locally-commensurate \(\alpha\beta\) and \(\beta\alpha\) domains (indicated by triangles) are formed. In these domains, the lattice relaxation equalizes the two moire periods which were initially different, to achieve a commensurate structure. At the same time, we also have the lattice relaxation in a smaller scale as in twisted bilayer graphene, which shrinks AA regions and expands AB/BA regions in each of two moire patterns. Therefore we have the relaxations in the moire-of-moire scale (\(\alpha\beta/\beta\alpha\) domains) and in moire scale (AB/BA domains) at the same time. The following questions naturally arise: (i) What distribution of displacement vectors leads to the multi-scale lattice relaxation? and (ii) Why does such a structure exhibit energetic preference? These questions can be answered by examining the obtained lattice displacement as follows. 
Figure 6(a) shows the distribution of the displacement vector \(\mathbf{s}^{(l)}(\mathbf{r})\) on layer 1, 2 and 3 for the case of C1. The middle row, Fig. 6(b), plots a coarse-grained component \(\bar{\mathbf{s}}^{(l)}(\mathbf{r})\), which is calculated by averaging \(\mathbf{s}^{(l)}(\mathbf{r})\) over a scale of moire unit cell around the point \(\mathbf{r}\). The bottom row [Fig.6(c)] displays magnified plots of \(\mathbf{s}^{(l)}(\mathbf{r})-\bar{\mathbf{s}}^{(l)}(\mathbf{r})\) (i.e., the local component with the coarse-grained part subtracted) within the region enclosed by a dashed square in Fig.6(a). In Fig. 6(b), we clearly see that \(\bar{\mathbf{s}}^{(1)}\) and \(\bar{\mathbf{s}}^{(3)}\) rotate counter-clockwise around the center of the \(\alpha\beta\) and \(\beta\alpha\) domains, while \(\bar{\mathbf{s}}^{(2)}\) rotates in the clockwise direction. This behavior is closely linked to \(\alpha\beta/\beta\alpha\) domain formation, and it can be comprehended by examining the problem in the \(k\)-space. Figure 7 depicts the relocation of BZ corners of layer 1, 2 and 3 in the C1 system under the lattice relaxation. The panel (a) is for the original non-distorted configuration. We define \(\mathbf{q}_{1}^{12}=\mathbf{K}_{+}^{(2)}-\mathbf{K}_{+}^{(1)}\) and \(\mathbf{q}_{1}^{23}=\mathbf{K}_{+}^{(3)}-\mathbf{K}_{+}^{(2)}\), where \(\mathbf{K}_{+}^{(l)}\) is the BZ corner of layer \(l\) near \(\xi=+\) valley. The vectors \(\mathbf{q}_{1}^{12}\) and \(\mathbf{q}_{1}^{23}\) are associated with the periods of the moire pattern 12 and that of 23, respectively. When these vectors are equal, two moire periods completely match. The lattice displacement in Fig. 6(b) works precisely to align the two vectors. In the case of C1, the angle between layer 1 and 2 is larger than the angle between layer 2 and 3 (\(\theta^{12}>\theta^{23}\)), so the layer 2 rotates clockwise, and the layer 1 and layer 3 rotate counter-clockwise to achieve \(\theta^{12}=\theta^{23}\) [Fig. 7(b)]. There is still a tiny angle difference between \(\mathbf{q}_{1}^{12}\) and \(\mathbf{q}_{1}^{23}\). This can be eliminated by slightly expanding BZs layer 1 and 3, and shrinking BZ of layer 2, to finally obtain the perfect matching [Fig. 7(c)]. In the real space, this corresponds to a shrink of layer 1 and 3 and an expansion of layer 2. These changes are actually observed in Fig. 6(a), where the vector fields rotate around the center of the \(\sigma\beta/\beta\alpha\) domain. To understand the energetic stability of \(\alpha\beta/\beta\alpha\) domains, we examine the local moire-scale lattice relaxation. Let us first consider the twisted bilayer graphene, which has only a single moire pattern. There the lattice Figure 5: Relaxed moiré patterns in chiral TTGs, (a) C1 (\(\theta^{12},\theta^{23}\)) = \((1.79^{\circ},1.58^{\circ})\), (b) C2 (\(2.64^{\circ},2.45^{\circ}\)) and (c) C3 (\(1.54^{\circ},0.64^{\circ}\)). In the each row, the left and middle panels are the moiré 12 (between layer 1 and 2) and moiré 23 (between layer 2 and 3) patterns after the relaxation. The color corresponds the local interlayer binding energy \(U_{\mathbf{B}}^{ll^{\prime}}\), where bright and dark regions correspond to the AA stack and AB/BA stack respectively. Small magenta dots indicate the AA stack points without lattice relaxation for the reference. The right panel combines the two moiré patterns in a single plot, where blue and red points indicate the AA stack of the moiré 12 and 23 respectively. 
Black triangles represent \(\alpha\beta/\beta\alpha\) domains. A rhombus in each panel shows the moiré-of-moiré unit cell and all scale bars incidate 20 nm. relaxation takes place such that AB/BA stack region expands and AA stack region shrinks [72]. This is realized by a local interlayer rotation around AA and AB/BA stack points. Around AB/BA, specifically, the layer 1 and 2 oppositely rotate to reduce the local twist angle. The AB/BA region is then enlarged, because the length scale of the moire pattern is enlarged in decreasing the twist angle. In AA spots, on the contrary, the layer 1 Figure 6: Distribution of the displacement vector in each layer of C1:\((\theta^{12},\theta^{23})=(1.79^{\circ},1.58^{\circ})\). (a) Original non-averaged distribution \(\mathbf{s}^{(l)}(\mathbf{r})(l=1,2,3)\). (b) Coarse-grained component \(\mathbf{s}^{(l)}(\mathbf{r})\). (c) Moiré-scale component \(\mathbf{s}^{(l)}(\mathbf{r})-\mathbf{s}^{(l)}(\mathbf{r})\) in a region indicated by the white square in the top panel. Black arrows represent the displacement vector, and color indicates its norm. Red arc arrows schematically show the direction of rotation in moiré-of-moiré scale. In (a) and (b), the white rhombus represents a moiré-of-moiré unit cell, while in (c) the blue rhombus represents a moiré unit cell. and 2 rotate to increase the local twist angle to shrink the AA region. The same deformation occurs also in TTG, where all three layers undergo relaxation to expand AB/BA domain in each of the two moire patterns. However, as the middle layer \(l=2\) is shared by the two interference patterns, there can be a frustration such that, for instance, a local movement of the layer 2 leads to the expansion of the AB region in one moire pattern while causing its contraction in the other. Therefore, the relative displacement of the two moire superlattices should be determined in such a way that the middle-layer distortion can lower the total energies of the two moire patterns at the same time. Figure 8(a) is the schematic figure to illustrate the favorable local rotation of the middle layer, for the moire 12 (between \(l=1,2\)) and moire 23 (between \(l=2,3\)). The orange and green arc arrows correspond to clockwise and counterclockwise directions, respectively. Here we notice that the direction of rotation is opposite for moire 12 and moire 23, since layer 1 and layer 3 are originally twisted in opposite directions with respect to layer 2. When AA stack points of moire 12 and moire 23 are aligned (\(\alpha\alpha\) stacking), the rotation direction of layer 2 is completely frustrated as shown in Fig. 8(b), and therefore \(\alpha\alpha\) stacking is energetically unfavorable. The optimized structure is \(\alpha\beta\) stacking [Fig. 8(c)], where the rotation angles coincide in two out of three regions. When the two angles \(\theta^{12}\) and \(\theta^{23}\) are not close to each other, \(\alpha\beta/\beta\alpha\) domains do not appear any more, but still a locally-commensurate moire-of-moire structure emerges. Figure 5(c) shows the relaxed structure for the C3 TTG. Since the unit areas of the two moire patterns differ by nearly 3, we have commensurate domains where a single red triangle includes three blue triangles. We also see red AA points always come to the center of blue triangles. This can also be understood in terms of the alignment of the favorable rotation angles explained above. ### Electronic properties Using the electronic continuum model introduced in Sec. 
II.4, we calculate the band structure of TTGs in the presence of the lattice relaxation. Figure 9(a) and (b) show the energy bands (near \(\mathcal{K}_{+}\) valley) and the corresponding density of states (DOS) calculated for the case C1 and C2, respectively. The labels \(\kappa,\gamma,\mu,\kappa^{\prime}\) are symmetric points of the moire-of-moire BZ defined in Fig. 4. We immediately notice that the spectrum exhibits distinct energy windows characterized by relatively low DOS, which span in the enegy range of \(20~{}\mathrm{meV}<|E|<90~{}\mathrm{meV}\) for C1, and in \(90~{}\mathrm{meV}<|E|<180~{}\mathrm{meV}\) for C2. The windows are sparsely filled with energy bands. Figure 9(c) shows the Fermi surface at \(E_{F}=117~{}\mathrm{meV}\) in the C2, which is indicated by horizontal red line in Fig. 9(b). We see that the Fermi surface is composed of three intersecting lines arranged with a trigonal symmetry, indicating the dispersion is nearly one-dimensional. The band velocities of these one-dimensional bands (normal to the Fermi surface) are oriented to the moire-of-moire lattice vectors \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) and \(\mathbf{L}_{3}(=-\mathbf{L}_{1}+\mathbf{L}_{2})\). Figure 9(d) plots Figure 7: **Relocation of BZ corners in the C1 system under the lattice relaxation. The panels depict: (a) the original non-distorted configuration, (b) the configuration with rotation included, and (c) with expansion and shrinkage taken into account. Green, black and orange line are the BZ of layer 1, 2 and 3, and gray arrows indicate the direction of rotation and expansion/shrink.** Figure 8: (a) Schematic figure of the preferred direction of the middle layer (\(l=2\)), for the moiré 12 (between \(l=1,2\)) and moire 23 (between \(l=2,3\)). Orange and green arc arrows correspond to clockwise and counterclockwise directions, respectively. Bottom row: Overlapped figures for (b) \(\alpha\alpha\) stack and (c) \(\alpha\beta\) stack. the distribution of the squared wave amplitudes of an eigenstate marked by a red point in Fig. 9(c). The wave function actually takes a highly one-dimensional form, and it is sharply localized within the domain walls dividing \(\alpha\beta\) and \(\beta\alpha\) regions. Each of the three Fermi surfaces corresponds to one-dimensional states running along the domain walls in the corresponding directions. The states with different directions are barely hybridized. We also have a low-DOS region near \(E=0\) in the C2, while this is remnant of the graphene's Dirac cone and the energy bands are not one-dimensional. The existence of one-dimensional channels on the domain walls indicates that the \(\alpha\beta\) and \(\beta\alpha\) regions are locally gapped with different topological numbers, and associated topological boundary modes emerge between the domains, as shown in Fig. 1. To verify this, we calculate the bands structures and the Chern numbers of _uniform_ TTG having \(\alpha\beta/\beta\alpha\) stacking. The Hamiltonian of such a uniform system can be obtained by assuming the BZ-corner arrangement in Fig. 7(c), where \(\mathbf{q}_{1}^{12}=\mathbf{q}_{1}^{23}=\mathbf{q}\). This corresponds to a TTG where \(\theta^{12}=\theta^{23}\) and the layer 2 is slightly expanded in relative to layer 1 and 3. The two moire periods then become identical, and we have \(\mathbf{G}_{j}^{12}=\mathbf{G}_{j}^{23}\equiv\mathbf{G}_{j}^{\rm M}\) and \(\mathbf{q}=(2\mathbf{G}_{1}^{\rm M}+\mathbf{G}_{2}^{\rm M})/3\). The Hamiltonian for this system is obtained from Eq. 
(16) as \[H^{(\xi)}=\left(\begin{array}{ccc}H(\mathbf{k}+\xi\mathbf{q})&U_{21}^{\dagger}&\\ U_{21}&H(\mathbf{k})&U_{32}^{\dagger}\\ &U_{32}&H(\mathbf{k}-\xi\mathbf{q}),\end{array}\right). \tag{23}\] Figure 9: (a,b) Electronic band structures and the density of states of \(K_{+}\)-valley calculated for (a) C1 and (b) C2 with the lattice relaxation incorporated. The \(k\)-space path \((\mathbf{\kappa}-\mathbf{\gamma}-\mathbf{\mu}-\mathbf{\kappa}^{\prime})\) is defined in Fig. 4. (c) Fermi surface of the C2 at \(E_{F}=117\) meV (indicated by a red dotted horizontal line in (b)). Three red arrows represent the directions of band velocities, which are parallel to the moire lattice vectors \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\), and \(\mathbf{L}_{3}(=-\mathbf{L}_{1}+\mathbf{L}_{2})\). (d) Distribution of the squared wave amplitude of an eigenstate state, indicated by a red point in (c). Red rhombus represents a moiré-of-moiré unit cell. where \[H(\mathbf{k})=-\hbar v\mathbf{k}\cdot\mathbf{\sigma}, \tag{24}\] \[U_{21}=\sum_{j=1}^{3}U_{j}\mathrm{e}^{\mathrm{i}\xi\mathbf{\delta k}_{j }\cdot\mathbf{r}},\quad U_{32}=\sum_{j=1}^{3}U_{j}\mathrm{e}^{\mathrm{i}\xi\mathbf{ \delta k}_{j}\cdot(\mathbf{r}-\mathbf{r}_{0})}\] \[\delta\mathbf{k}_{1}=\mathbf{0},\ \ \delta\mathbf{k}_{2}=\xi\mathbf{G}_{1}^{ \mathrm{M}},\ \ \delta\mathbf{k}_{3}=\xi\left(\mathbf{G}_{1}^{\mathrm{M}}+\mathbf{G}_{2}^{ \mathrm{M}}\right), \tag{25}\] and we neglect the strain-induced vector potentials which does not affect the topological nature argued here. Here \(U_{21}\) and \(U_{32}\) differ by the parameter \(\mathbf{r}_{0}\), which specifies the relative displacement between the two moire patterns. The \(\alpha\beta\) and \(\beta\alpha\) stackings correspond to \(\mathbf{r}_{0}=\left(\mathbf{L}_{1}^{\mathbf{M}}+\mathbf{L}_{2}^{\mathbf{M}}\right)/3\) and \(2\left(\mathbf{L}_{1}^{\mathbf{M}}+\mathbf{L}_{2}^{\mathbf{M}}\right)/3\) respectively, where \(\mathbf{L}_{j}^{\mathbf{M}}\) is the common moire lattice vector given by \(\mathbf{G}_{i}^{\mathbf{M}}\cdot\mathbf{L}_{j}^{\mathbf{M}}=2\pi\delta_{ij}\). Here we consider uniform \(\alpha\beta\) and \(\beta\alpha\) TTGs with \(\theta^{12}=\theta^{23}=2.54^{\circ}\), which approximate the local structures of \(\alpha\beta\) and \(\beta\alpha\) domains in the C2. Figure 10 plots the energy bands in \(\xi=+\) valley calculated by Eq. (23). We observe energy gaps in the electron and hole sides in the region \(50\,\mathrm{meV}<|E|<180\,\mathrm{meV}\), which approximately coincides with the energy window of the C2 [Fig. 9(b)]. Between the gaps in the electron and hole sides, we have two bands touching at the charge neutrality point. The total Chern number for the two-band cluster is found to be \(\mp 1\) for \(\alpha\beta\) and \(\beta\alpha\), respectively. The absolute Chern number in the upper gap can also be calculated, and it turns out to be \(\mp 1/2\) for \(\alpha\beta\) and \(\beta\alpha\), respectively. This is obtained by opening mass gap (adding asymmetric energies to A and B sublattices in all the graphene layers) to lift the band touching at the Dirac point. Since the difference of the Chern number of the upper gap between the \(\alpha\beta\) and \(\beta\alpha\) regions is 1, we have a single edge mode (per a single valley) at the domain boundary. This coincides with the number of the one-dimensional modes per a single direction in the moire-of-moire superlattice band Fig. 9. 
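The valley Chern numbers quoted here can be evaluated with the standard lattice (Fukui-Hatsugai) discretization of the Berry curvature. The sketch below is a generic implementation of that algorithm, not the code used in this work: `h_of_k` stands for any Bloch Hamiltonian, for example Eq. (23) truncated to a finite set of moire plane waves with a small sublattice mass added to lift the Dirac touching.

```python
import numpy as np

def chern_number(h_of_k, b1, b2, n_occ, nk=24):
    """Fukui-Hatsugai Chern number of the lowest n_occ bands of a Bloch
    Hamiltonian h_of_k(k), sampled on an nk x nk grid of the BZ spanned by the
    reciprocal vectors b1 and b2 (length-2 arrays)."""
    occ = np.empty((nk, nk), dtype=object)
    for i in range(nk):
        for j in range(nk):
            k = (i / nk) * np.asarray(b1) + (j / nk) * np.asarray(b2)
            _, v = np.linalg.eigh(h_of_k(k))
            occ[i, j] = v[:, :n_occ]                       # occupied eigenvectors

    def link(a, b):                                        # U(1) link between neighbors
        d = np.linalg.det(a.conj().T @ b)
        return d / abs(d)

    c = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            plaq = (link(occ[i, j], occ[ip, j]) * link(occ[ip, j], occ[ip, jp])
                    * link(occ[ip, jp], occ[i, jp]) * link(occ[i, jp], occ[i, j]))
            c += np.angle(plaq)                            # Berry flux of one plaquette
    return c / (2.0 * np.pi)
```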
The Chern number of the valley \(\xi=-1\) is negative of \(\xi=+1\) valley due to the time reversal symmetry. Therefore the TTG is a quantized valley Hall insulator when the Fermi energy is in the energy window. The energy windows and one-dimensional domain-wall states also appear in the C1 case [Fig. 9(a)], which has a smaller moire-of-moire period. The degree of one-dimensionality is not as pronounced as in the C2 configuration, as evidenced by the appearance of small gaps at the intersections of bands. The hybridization tends to be greater when the moire-of-moire period is smaller. Finally, the band structure in Fig. 9 closely resembles the marginally-stacked twisted bilayer graphene in a strong perpendicular electric field [98; 99; 100; 101; 102; 103; 104]. There the topological one dimensional edge states arise since the AB and BA regions in the moire pattern have opposite valley Chern numbers in the electric field. The chiral TTG realizes a similar situation in the moire-of-moire scale, without the need for an applied electric field. This can be achieved in any chiral TTGs where \(\theta^{12}\) and \(\theta^{23}\) are close to each other, such that the two moire periods are comparable. ## IV Alternating TTGs ### Multi-scale lattice relaxation Alternating TTGs display distinct relaxed structures that differ entirely from the chiral cases. Figure 11 shows optimized moire structures calculated for alternating TTGs (a) A1 \((\theta^{12},\theta^{23})=(1.48^{\circ},-1.18^{\circ})\), (b) A2 \((1.42^{\circ},-1.22^{\circ})\) and (c) A3 \((1.47^{\circ},-0.62^{\circ})\), corresponding to Fig. 5 for chiral TTGs. In the A1 and A2, we observe a formation of commensurate \(\alpha\alpha^{\prime}\) domains, where AA spots of the two moire patterns completely overlaps [See Fig. 2(e)]. This is in a sharp contrast to the chiral TTGs, where AA spots are repelled to each other, giving rise to \(\alpha\beta/\beta\alpha\) domains. The atomic structure of \(\alpha\alpha^{\prime}\) domain corresponds precisely to the mirror-symmetric TTG with \(\theta^{12}=-\theta^{23}\). In A3 case [Fig. 11(c)], where the two moire periods are not comparable, we observe a different type of commensurate domain with the ratio of the lattice periods fixed at 2, reflecting the original moire-period ratio \(L^{23}/L^{12}\approx 2.3\). Here the AA stacking points of the red and blue moire lattices are vertically aligned as in \(\alpha\alpha^{\prime}\) domains observed in A1 and A2. The formation of the commensurate domains can be attributed to a specific type of lattice distortion that differs from the chiral case. Figure 12 shows the distribution of the coarse-grained displacement vector \(\bar{\mathbf{s}}^{(l)}(\mathbf{r})\) in the A1 case (corresponding to Fig. 6(b) for the chiral case). We observe that the layer 1 and layer 3 rotate anticlockwise and clock-wise directions, respectively, around \(\alpha\alpha^{\prime}\) domain center. In \(k\)-space, accordingly, the Brillouin zone corners of layer 1 and 3 move to overlap as shown in Fig. 13. This corresponds to the symmetric TTG (\(\theta^{12}=-\theta^{23}\)) where the layer 1 and layer 3 are perfectly aligned. The stability of \(\alpha\alpha^{\prime}\)-domain is also explained by considering moire-scale lattice relaxation. As discussed in Sec. III.1, the graphene layers in TTG undergo spontaneous distortion to expand the AB/BA regions for the moire patterns 12 and 23, giving a competitive environment for the shared layer 2. 
Figure 14(a) depicts the preferred orientation of layer 2 for the two moire patterns in alternating TTG. In contrast to the chiral stack [Fig. 8], the rotation direction is identical for both moire patterns, since layer 1 and layer 3 are rotated in the same direction relative to layer 2. Consequently, there is no frustration when the moire lattices are arranged in an \(\alpha\alpha^{\prime}\) stack as shown in Fig. 14(b). In this structure, the motion of the shared layer 2 allows for the simultaneous relaxation of the moire patterns 12 and 23, resulting in an energy advantage compared to partially frustrated configurations like the \(\alpha\beta^{\prime}\) stack [Fig. 14(c)]. The stability of the \(\alpha\alpha^{\prime}\) stack in nearly-symmetric TTGs was pointed out in previous theoretical works [26; 22; 47; 48], and it was observed in recent experiments [78; 80].

Figure 12: Distribution of the coarse-grained displacement vector \(\bar{\mathbf{s}}^{(l)}(\mathbf{r})\) in A1: \((\theta^{12},\theta^{23})=(1.48^{\circ},-1.18^{\circ})\), corresponding to Fig. 6(b) for the C1.

Figure 13: Relocation of BZ corners in the A1: \((\theta^{12},\theta^{23})=(1.48^{\circ},-1.18^{\circ})\) under the lattice relaxation. The panels depict (a) the original non-distorted configuration and (b) the relaxed configuration.

Figure 14: (a) Schematic figure of the preferred distorting direction of the middle layer (\(l=2\)) in an alternating TTG, corresponding to Fig. 8 for a chiral TTG.

### Electronic properties

We calculate the band structure for the alternating TTGs A1 \((1.48^{\circ},-1.18^{\circ})\) and A2 \((1.42^{\circ},-1.22^{\circ})\) using the method described in Sec. II. The energy bands and DOS for A1 and A2 are displayed in Figs. 15(a) and (b), respectively. In each figure, the right and left panels correspond to the TTGs with and without the lattice relaxation, respectively. Black curves represent the energy bands, and blue straight lines indicate the intrinsic Dirac bands of layer 1 and layer 3 without the interlayer coupling. Red dots indicate the amplitude projected onto the mirror-odd plane wave states, as defined by \[w^{(\text{odd})}_{n\mathbf{k}}=\sum_{X=A,B}|\langle\psi_{n\mathbf{k}}|\mathbf{k},X,\text{odd}\rangle|^{2},\] \[|\mathbf{k},X,\text{odd}\rangle=\frac{1}{\sqrt{2}}\left(|\mathbf{k},X,1\rangle-|\mathbf{k},X,3\rangle\right), \tag{26}\] where \(\psi_{n\mathbf{k}}\) is the eigenstate, and \(|\mathbf{k},X,l\rangle\) is the plane wave at sublattice \(X(=A,B)\) on layer \(l\). We take the path \(\mathbf{K}_{+}^{(1)}\rightarrow\mathbf{K}_{+}^{(3)}\rightarrow\mathbf{K}_{+}^{(2)}\) on a straight line in the extended \(k\)-space, as shown in the insets of Fig. 15. In the band structures with the lattice relaxation, we observe numerous flat bands concentrated around zero energy, and these bands are surrounded by a region where dispersive energy bands are sparsely distributed. These features coincide with the mirror-symmetric TTG (\(\theta^{12}=-\theta^{23}\)), where the low-energy spectrum is composed of a flat band with even parity, and a Dirac cone with odd parity against the mirror inversion [21; 22]. We see that the red dots roughly form a conical dispersion, and it is regarded as a remnant of the symmetric TTG's Dirac cone having odd parity. In the non-relaxed calculations, we notice that the flat bands and Dirac cones are strongly hybridized, and the conical dispersion of the red dots is not clearly resolved.
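As a small illustration of the projection weight in Eq. (26), the sketch below evaluates the mirror-odd amplitude for a single six-component spinor ordered as (layer 1, 2, 3) × (sublattice A, B). In the actual calculation the weight is accumulated over all plane-wave components of \(\psi_{n\mathbf{k}}\); restricting to a single component, and the function name, are assumptions made purely for illustration.

```python
import numpy as np

def mirror_odd_weight(psi):
    """Mirror-odd weight of Eq. (26) for one plane-wave component.

    psi is ordered as (A1, B1, A2, B2, A3, B3); the odd combination mixes
    layers 1 and 3 with a relative minus sign.
    """
    w_odd = 0.0
    for X in (0, 1):                              # sublattice A, B
        odd_amp = (psi[X] - psi[4 + X]) / np.sqrt(2)
        w_odd += abs(odd_amp) ** 2
    return w_odd

# A state living entirely on the antisymmetric combination of layers 1 and 3
psi = np.array([1.0, 0.0, 0.0, 0.0, -1.0, 0.0]) / np.sqrt(2)
print(mirror_odd_weight(psi))   # -> 1.0
```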
These results suggest that the formation of \(\alpha\alpha^{\prime}\) domains (equivalent to the mirror-symmetric TTG) supports the spectral separation of the flat bands and the Dirac-cone-like bands. Therefore, we expect that asymmetric TTGs slightly away from the symmetric condition \(\theta^{12}=-\theta^{23}\) acquire similar electronic properties to the symmetric TTG, through the moire-of-moire lattice relaxation. The electronic properties of TTG can be tuned by applying a perpendicular electric field. We can introduce the field effect to our model as \(H+V\), where \(H\) is the original Hamiltonian of Eq. (16), and \(V\) is the on-site potential term induced by the perpendicular electric field, \[V=\left(\begin{array}{cc}-\Delta\hat{I}_{2}&&\\ &0&\\ &&\Delta\hat{I}_{2}\end{array}\right). \tag{27}\] Here \(\Delta\) is the on-site energy difference and \(\hat{I}_{2}\) is a \(2\times 2\) unit matrix, and we simply assume the perpendicular electric field is constant between the top and bottom layers. Figure 16 shows the energy bands of the A2 with lattice relaxation under perpendicular electric fields of \(\Delta=50\) meV and \(100\) meV. When the electric field is applied, we observe that the Dirac band moves along the energy axis, and eventually the Dirac point emerges out of the flat-band cluster. We also see that the electric field broadens the energy width of the flat-band region, and enhances the hybridization between the flat bands and the dispersive bands.

Figure 15: Energy bands and DOS for alternating TTGs, (a) A1: \((1.48^{\circ},-1.18^{\circ})\) and (b) A2: \((1.42^{\circ},-1.22^{\circ})\). The left and right panels in each figure show the results without and with the lattice relaxation, respectively. Black curves represent the energy bands, and blue straight lines indicate the intrinsic Dirac bands of layer 1 and layer 3 without the interlayer coupling. Red dots indicate the amplitude projected onto the mirror-odd plane wave states (see the text). The path is taken as \(\mathbf{K}_{+}^{(1)}\rightarrow\mathbf{K}_{+}^{(3)}\rightarrow\mathbf{K}_{+}^{(2)}\) in the extended \(k\)-space shown in the inset.

Figure 16: Plots similar to Fig. 15 for A2 with the perpendicular electric field of (a) \(\Delta=50\) meV and (b) \(100\) meV.

## V Conclusion

We have presented a systematic investigation of the lattice relaxation and electronic properties of general non-symmetric TTGs. For various chiral and alternating TTGs with different twist angle combinations, we employ an effective continuum approach similar to that for twisted bilayer graphene to obtain the optimized lattice structure. We also computed the electronic band structure by using a continuum band calculation method incorporating lattice relaxation effects. In the calculation of the lattice relaxation, we found that there are two distinct length-scale relaxations, on the moire-of-moire and moire scales, which lead to the formation of a patchwork of super-moire domains. In these domains, the two moire patterns become locally commensurate with a specific relative arrangement. The chiral TTGs prefer a shifted stacking where the overlap of AA spots in the individual moire patterns is avoided. In contrast, the alternating TTGs exhibit a completely opposite behavior where the AA spots perfectly overlap. In the band calculations, the chiral TTG exhibits an energy window where highly one-dimensional electron bands are sparsely distributed.
By calculating the Chern number of the local band structure within the commensurate domains, we identify one-dimensional domain boundary states as topological boundary states between distinct Chern insulators. The alternating TTG exhibits a clear separation of the flat bands and a monolayer-like Dirac cone, as a consequence of the formation of commensurate domains equivalent to the symmetric TTG. _Note added_: During the finalization of this paper, we became aware of related preprints which partially overlap with the present work [105; 106]. ###### Acknowledgements. This work was supported in part by JSPS KAKENHI Grant Number JP20H01840, JP20H00127, JP21H05236, JP21H05232 and by JST CREST Grant Number JP-MJCR20T3, Japan.
2304.04700
Achieving Long-term Fairness in Submodular Maximization through Randomization
Submodular function optimization has numerous applications in machine learning and data analysis, including data summarization which aims to identify a concise and diverse set of data points from a large dataset. It is important to implement fairness-aware algorithms when dealing with data items that may contain sensitive attributes like race or gender, to prevent biases that could lead to unequal representation of different groups. With this in mind, we investigate the problem of maximizing a monotone submodular function while meeting group fairness constraints. Unlike previous studies in this area, we allow for randomized solutions, with the objective being to calculate a distribution over feasible sets such that the expected number of items selected from each group is subject to constraints in the form of upper and lower thresholds, ensuring that the representation of each group remains balanced in the long term. Here a set is considered feasible if its size does not exceed a constant value of $b$. Our research includes the development of a series of approximation algorithms for this problem.
Shaojie Tang, Jing Yuan, Twumasi Mensah-Boateng
2023-04-10T16:39:19Z
http://arxiv.org/abs/2304.04700v1
# Achieving Long-term Fairness in Submodular Maximization through Randomization ###### Abstract Submodular function optimization has numerous applications in machine learning and data analysis, including data summarization which aims to identify a concise and diverse set of data points from a large dataset. It is important to implement fairness-aware algorithms when dealing with data items that may contain sensitive attributes like race or gender, to prevent biases that could lead to unequal representation of different groups. With this in mind, we investigate the problem of maximizing a monotone submodular function while meeting group fairness constraints. Unlike previous studies in this area, we allow for randomized solutions, with the objective being to calculate a distribution over feasible sets such that the expected number of items selected from each group is subject to constraints in the form of upper and lower thresholds, ensuring that the representation of each group remains balanced in the long term. Here a set is considered feasible if its size does not exceed a constant value of \(b\). Our research includes the development of a series of approximation algorithms for this problem. ## 1 Introduction A set function is referred to as submodular if it follows the principle of diminishing returns, where adding an item to a larger set yields a smaller benefit. This concept is applied in various real-world scenarios such as feature selection [10], where the goal is to select the most relevant features from a large pool of potential features to use in a machine learning model; active learning [15], where the goal is to choose a set of instances for a machine learning model to learn from; exemplar-based clustering [11], where the goal is to choose a set of exemplars to represent a set of data points; influence maximization in social networks [22], where the goal is to choose a set of individuals to target in order to maximize the spread of information or influence in a network; as well as recommender system [13] and diverse data summarization [20]. The goal of submodular optimization is to choose a set of items that optimizes a submodular function while satisfying constraints such as size limitations, matroid requirements, or knapsack restrictions. In practice, items or individuals are often grouped based on attributes such as gender, race, age, religion, or other factors. However, if not properly monitored, existing algorithms may display bias and result in an over- or under-representation of certain groups in the final selected set. To address this issue, we propose the study of long-term fair submodular maximization problem. The aim is to randomly choose a subset of items that optimizes a submodular function, such that the expected number of selected items from each group falls within the desired range. This approach ensures that the final selection of items is not only optimized, but also equitable, providing a fair representation of all groups in the long term. Formally, we consider a set \(V\) of items, which are divided into \(m\) (not necessarily disjoint) groups: \(V_{1},V_{2},\cdots,V_{m}\) with items in each group sharing similar attributes (e.g., race). 
To ensure fairness, a randomized item selection algorithm must satisfy the following criteria for all groups \(t\in[m]\) where \([m]=\{1,2,\cdots,m\}\): the _expected_ number of selected items from group \(V_{t}\) must be within the range of \([\alpha_{t},\beta_{t}]\), where \(\alpha_{t}\) and \(\beta_{t}\) are arbitrary parameters that may differ across groups; moreover, the number of chosen items must _always_ stay below a cardinality constraint of \(b\). To put it simply, a fair randomized solution must meet two important requirements [3]: (a) restricted dominance, which means the proportion of items from each group must be within a certain limit, and (b) minority protection, which means the proportion of items from each group must not fall below a certain limit. Our fairness notation has gained significant recognition in the academic world and it has been adopted in various studies, including multi-winner voting systems [8], fair recommendation systems [14], and matroid-constrained optimization problems [9]. In fact, this notation is capable of capturing other fairness definitions such as statistical parity [12], the 80%-rule [4], and proportional representation [18]. Different from the majority of prior studies that concentrate on finding a _fixed_ set of items that comply with fairness restrictions, our approach accommodates randomized solutions and therefore offers greater flexibility in fulfilling the fairness restrictions. Take fairness-aware product recommendations as an example. The objective is to suggest a set of products to online consumers while ensuring that each group of sellers, such as male and female sellers, is expected to have at least one of their products recommended. Due to limited display space, suppose we can only display one product to the consumer. In this scenario, it is not possible for any of the deterministic solutions to fulfill the fairness requirement, however, a randomized solution can be easily found to satisfy it. For instance, a product can be suggested from each group with the same likelihood of occurrence. ### Our Contributions * We are the first to investigate the long-term fair submodular maximization problem, which presents a substantial challenge due to its exponential number of variables. As a result, it is difficult to solve using traditional linear programming (LP) solvers. * We develop a \((1-1/e)^{2}\)-approximation algorithm that approximately satisfies the fairness constraints. Specifically, our algorithm ensures that the number of selected items from group \(V_{t}\) is within the range of \([\lfloor\alpha_{t}\rfloor,\lceil\beta_{t}\rceil]\). Notably, if both \(\alpha_{t}\) and \(\beta_{t}\) are integers, our solution strictly satisfies the fairness constraints in the original problem. * It is important to note that the previous algorithm requires optimizing a continuous approximation of the underlying submodular function, referred to as the multi-linear extension [6]. This is achieved by executing the continuous greedy algorithm, whose implementation is computationally expensive in practice. Our second contribution is the introduction of a fast greedy algorithm that achieves a degraded approximation ratio of \((1-1/e)^{2}/2\). * We present a \((1-1/e)\)-approximation randomized algorithm. Our approach involves utilizing the ellipsoid method and incorporating an approximate separation oracle for the dual LP of the original problem, which has a polynomial number of variables and an exponential number of constraints. 
Unlike the deterministic solutions, our randomized approach provides three key benefits. Firstly, our solution does not depend on the assumption of non-overlapping groups. Secondly, our approach strictly satisfies all fairness constraints. Thirdly, we achieve the optimal approximation ratio of \(1-1/e\). ### Additional Related Works The growing recognition of the importance of fair and objective decision-making systems has resulted in a surge of interest in developing fair algorithms in various fields such as influence maximization [24] and classification [26]. The development of fair algorithms has also been applied to voting systems[8], where the goal is to ensure that election outcomes are a fair representation of the preferences of the voters. Moreover, the field of bandit learning [17], which involves making sequential decisions based on uncertain information, has also seen a growing interest in the development of fair algorithms. Finally, the field of data summarization [7] has seen a growing focus on the development of fair algorithms, which aim to provide a balanced representation of the data. The specific context and type of bias being addressed influence the choice of fairness metric adopted in existing studies, leading to various optimization problems and fair algorithms tailored to the specific requirements of each application. Our definition of fairness is broad enough to encompass many existing notations, such as the 80%-rule [4], statistical parity [12], and proportional representation [18]. Unlike the majority of previous research on fairness-aware algorithm design [8, 14, 9, 23, 25], which aims to find a deterministic solution set, our goal is to compute a randomized solution that can meet the group fairness constraints on average. ## 2 Preliminaries and Problem Statement A set \(V\) of \(n\) items is considered and there is a non-negative submodular utility function \(f:2^{V}\rightarrow\mathbb{R}_{+}\). The marginal utility of an item \(e\in V\) on a set \(S\subseteq V\) is denoted as \(f(e\mid S)\), i.e., \(f(e\mid S)=f(\{e\}\cup S)-f(S)\). The function \(f\) is considered submodular if, for any sets \(X,Y\subseteq V\) with \(X\subseteq Y\) and any item \(e\in V\setminus Y\), the following inequality holds: \[f(e\mid Y)\leq f(e\mid X).\] It is considered monotone if, for any set \(X\subseteq V\) and any item \(e\in V\setminus X\), it holds that \[f(e\mid X)\geq 0.\] Assuming \(V\) is divided into \(m\) groups, \(V_{1},V_{2},\cdots,V_{m}\), there is a specified lower and upper bound on the _expected_ number of items from each group that must be included in a feasible solution. These bounds, referred to as \(\alpha\in\mathbb{R}_{\geq 0}^{m}\) and \(\beta\in\mathbb{R}_{\geq 0}^{m}\), represent group fairness constraints. In addition, there is a _hard_ constraint \(\tilde{b}\) on the number of selected items. Let \(\mathcal{F}=\{S\subseteq V\mid|S|\leq b\}\) denote the set of feasible selections. The goal of the fair submodular maximization problem (denoted as **P.0**) is to determine a distribution \(x\in[0,1]^{\mathcal{F}}\) over sets from \(\mathcal{F}\) that maximizes the expected utility, while ensuring that the expected number of items selected from each group meets the fairness constraints. 
I.e., \begin{tabular}{|l|} \hline **P.0**\(\max_{x\in[0,1]^{\mathcal{F}}}\sum_{S\in\mathcal{F}}x_{S}f(S)\) **subject to:** \\ \(\begin{cases}\alpha_{t}\leq\sum_{S\in\mathcal{F}}(x_{S}\cdot|S\cap V_{t}|)\leq \beta_{t},\forall t\in[m].\\ \sum_{S\in\mathcal{F}}x_{S}\leq 1.\end{cases}\) \\ \hline \end{tabular} Here each decision variable \(x_{S}\) represents the selection probability of \(S\in\mathcal{F}\). This LP has a total of \(2m+1\) constraints, excluding the obvious constraints that specify that \(x_{S}\geq 0\) for all \(S\in\mathcal{F}\). Despite this, the number of variables in the LP problem is equal to the number of elements in \(\mathcal{F}\), which can be exponential in \(n\). As a result, conventional LP solvers are unable to solve this LP problem efficiently. The next lemma asserts that **P.0** is a problem that is NP-hard. Lemma 1: _Problem **P.0** is NP-hard._ _Proof:_ We demonstrate this by reducing it to the classic cardinality constrained monotone submodular maximization problem, which we will describe below. Definition 1: The cardinality constrained monotone submodular maximization problem takes as input a collection of items \(V\), a monotone submodular function \(f:2^{V}\rightarrow\mathbb{R}_{+}\), and a cardinality constraint \(b\). The goal is to choose a subset of items \(S\subseteq V\) that maximizes \(f(S)\) while ensuring that \(|S|\leq b\). To show the reduction, we take an instance of the cardinality constrained monotone submodular maximization problem and create a corresponding instance of **P.0**. To do this, we consider only one group with no fairness constraints, meaning \(V=V_{1}\), with \(\alpha_{1}=0\) and \(\beta_{1}=|V|\). It can be easily verified that the optimal solution of this instance is a distribution over a set of solutions, each of which is an optimal solution to the instance of cardinality constrained monotone submodular maximization problem. Additionally, although **P.0** allows for randomized solutions, there exists at least one optimal solution that is a deterministic set. Specifically, every optimal solution of the cardinality constrained monotone submodular maximization problem must be an optimal solution to its corresponding instance of **P.0**. Hence, these two instances are equivalent. This concludes the proof of the reduction. \(\Box\) ## 3 Near Feasible Deterministic Algorithms In this section, we present a deterministic algorithm for **P.0**. Here we assume that \(m\) groups do not overlap with each other. To begin, we introduce the multilinear extension of a monotone submodular function \(f\). Given a vector \(y\in[0,1]^{n}\), let \(S_{y}\) be a random set where each item \(i\in V\) is independently added to \(S_{y}\) with probability \(y_{i}\). Then we let \[F(y)=\mathbb{E}[f(S_{y})]=\sum_{S\subseteq V}f(S)\prod_{i\in S}y_{i}\prod_{i \notin S}(1-y_{i}).\] We next introduce a new optimization problem **P.1**. The goal of **P.1** is to compute a vector \(y\in[0,1]^{n}\) that maximizes \(F(y)\) such that \(\alpha_{t}\leq\sum_{i\in V_{t}}y_{i}\leq\beta_{t},\forall t\in[m]\) and \(\sum_{t\in[m]}\sum_{i\in V_{t}}y_{i}\leq b\). \[\boxed{\textbf{P.1}\max_{y\in[0,1]^{n}}F(y)}\] **subject to:** \[\begin{cases}\alpha_{t}\leq\sum_{i\in V_{t}}y_{i}\leq\beta_{t},\forall t\in[m].\\ \sum_{t\in[m]}\sum_{i\in V_{t}}y_{i}\leq b.\end{cases}\] The following lemma establishes a connection between the optimal solution of problem **P.0** and that of problem **P.1**. 
This lemma serves as a crucial foundation for understanding the relationship between the two problems and allows for the development of a near optimal solution for **P.0** by solving **P.1**. Lemma 2: _Let \(x^{*}\) denote the optimal solution of **P.0** and \(y^{*}\) denote the optimal solution of **P.1**, it holds that_ \[(1-1/e)\sum_{S\in\mathcal{F}}x^{*}_{S}f(S)\leq F(y^{*}). \tag{1}\] _Proof:_ Let \(B\) be a polytope defined as the set of all vectors \(y\in[0,1]^{n}\) that meet the conditions in **P.1**, i.e., \[B=\{y\in[0,1]^{n}\mid\alpha_{t}\leq\sum_{i\in V_{t}}y_{i}\leq\beta_{t},\forall t \in[m];\sum_{t\in[m]}\sum_{i\in V_{t}}y_{i}\leq b;0\leq y_{i}\leq 1,\forall i \in V\}. \tag{2}\] Given the optimal solution \(x^{*}\) of **P.0**, we then introduce a vector \(\hat{y}\in[0,1]^{n}\) such that \(\hat{y}_{i}=\sum_{S\in\mathcal{F}}x^{*}_{S}\cdot\textbf{1}_{i\in S}\) where \(\textbf{1}_{i\in S}=1\) if \(i\in S\) and \(\textbf{1}_{i\in S}=0\) otherwise. It is easy to verify that the value of \(\hat{y}_{i}\) represents the probability of item \(i\) being selected according to the distribution defined by \(x^{*}\). We next show that to prove this lemma, it suffices to prove that \[\hat{y}\in B. \tag{3}\] As established in [2], if \(f\) is monotone and submodular and \(\hat{y}_{i}=\sum_{S\in\mathcal{F}}x_{S}^{*}\cdot\mathbf{1}_{i\in S}\), then \((1-1/e)\sum_{S\in\mathcal{F}}x_{S}^{*}f(S)\leq F(\hat{y})\). Here \(1-1/e\) is also known as _correlation gap_ of monotone submodular functions. Suppose (3) is true and \(y^{*}\) is the optimal solution of **P.1**, it holds that \(F(\hat{y})\leq F(y^{*})\). Therefore, this lemma is a direct consequence of the observation that \((1-1/e)\sum_{S\in\mathcal{F}}x_{S}^{*}f(S)\leq F(\hat{y})\leq F(y^{*})\). The rest of the proof is devoted to proving \(\hat{y}\in B\). First, because \(x^{*}\) is a feasible solution of **P.0**, it holds that \(\alpha_{t}\leq\sum_{S\in\mathcal{F}}(x_{S}^{*}\cdot|S\cap V_{t}|)\leq\beta_{t },\forall t\in[m]\). It follows that \(\alpha_{t}\leq\sum_{i\in V_{t}}\hat{y}_{i}\leq\beta_{t},\forall t\in[m]\), this is because \(\sum_{S\in\mathcal{F}}(x_{S}^{*}\cdot|S\cap V_{t}|)=\sum_{i\in V_{t}}\hat{y}_ {i}\) represents the expected number of items being selected from group \(V_{t}\) according to the distribution defined by \(x^{*}\). Second, because \(x^{*}\) is a feasible solution of **P.0**, the expected number of selected items according to the distribution defined by \(x^{*}\) is at most \(b\). Hence, \(\sum_{t\in[m]}\sum_{i\in V_{t}}\hat{y}_{i}\leq b\). Third, it is trivial to show that \(0\leq\hat{y}_{i}\leq 1,\forall i\in V\). This finishes the proof of \(\hat{y}\in B\). \(\Box\) ### Algorithm Design We next present our algorithm. Initially, we use a continuous greedy algorithm to compute a fractional solution for **P.1**, which we then round to obtain an integral solution. ``` 1:Set \(\delta=9n^{2},l=0,y^{0}=[0]^{n}\). 2:while\(l<\delta\)do 3: For each \(i\in V\), estimate \(F(i\mid y^{l})\) 4: Find an optimal solution \(z\in[0,1]^{n}\) to **P.A** 5: **P.A** Maximize\({}_{y}\)\(\sum_{i\in V}y_{i}F(i\mid y^{l})\) 6:subject to:\(y\in B\). 7:\(y^{l+1}=y^{l}+z\) 8: Increment \(l=l+1\) 9:\(y^{\prime}\gets y^{\delta}\) 10:return\(y^{\prime}\) ``` **Algorithm 1** Continuous Greedy Algorithm **Continuous greedy algorithm.** We first provide a detailed description of the continuous greedy algorithm (listed in Algorithm 1). 
The framework of this algorithm was first developed in [6] and we adapt it to find a fractional solution within polytope \(B\) (listed in (2)). Note that polytope \(B\) is not downward-closed, which presents unique challenges in our study. This algorithm maintains a fractional solution \(y^{l}\in[0,1]^{n}\), starting with \(y^{0}=(0,0,\cdots,0)\). In each round \(l\), it computes the marginal utility of each item \(i\in V\) on top of \(y^{l}\) with respect to \(F\) as follows, \[F(i\mid y^{l})=F(\mathbf{e}_{i}\lor y^{l})-F(y^{l}). \tag{4}\] where \(\mathbf{e}_{i}\in\{0,1\}^{n}\) is the vector with \(1\) in the \(i\)-th coordinate and \(0\) elsewhere; \(\mathbf{e}_{i}\lor y^{l}\) denotes the element-wise maximum of two vectors \(\mathbf{e}_{i}\) and \(y^{l}\). Then we solve the following linear programming problem **P.A** which assigns a weight \(F(i\mid y^{l})\) to each item \(i\) and seeks the maximum weighted vector in \(B\). **P.A**_Maximize\({}_{y}\)\(\sum_{i\in V}y_{i}F(i\mid y^{l})\)_**subject to:**\(y\in B\). After solving **P.A** at round \(l\) and obtaining an optimal solution \(z\in[0,1]^{n}\), we update the fractional solution as follows: \(y^{l+1}=y^{l}+z\). After \(\delta\) rounds where \(\delta=9n^{2}\), \(y^{\delta}\) is returned as the final solution \(y^{\prime}\). **Rounding.** We next employ pipage rounding [1], a simple deterministic procedure of rounding of linear relaxations, to round \(y^{\prime}\) to an integral solution. This algorithm is composed of three phases. * Phase 1: For each \(t\in[m]\), repeatedly perform the following until \(V_{t}\) has no more than one non-integral coordinate: Choose any two fractional coordinates \(i\), \(j\) such that \(i,j\in V_{t}\). Calculate \(\theta_{1}=\min\{1-y^{\prime}_{i},y^{\prime}_{j}\}\) and \(\theta_{2}=\min\{y^{\prime}_{i},1-y^{\prime}_{j}\}\). Create two vectors, \(y^{a}=y^{\prime}+\theta_{1}(\mathbf{e}_{i}-\mathbf{e}_{j})\) and \(y^{b}=y^{\prime}+\theta_{2}(\mathbf{e}_{j}-\mathbf{e}_{i})\). If \(F(y^{a})\geq F(y^{b})\), set \(y\gets y_{a}\), otherwise set \(y\gets y^{b}\). * Phase 2: Assume \(y_{1},\cdots,y_{k}\) are the remaining fractional coordinates. Repeat the same procedure as in the first phrase until \(y\) has at most one non-integral coordinate. * Phase 3: Let \(i\) denote the last non-integral coordinate, if any. Set \(y_{i}=1\). Output \(A\subseteq V\) whose coordinate in \(y\) is \(1\). Note that a similar framework has been utilized to tackle the fair submodular maximization problem in a deterministic setting [8]. This problem aims to identify a fixed set of items that optimize a submodular function while fulfilling group fairness constraints. Their approach shares similarities with ours in the rounding stage, but does not require the third phase. This is because in their setting, both \(\alpha_{t}\) and \(\beta_{t}\) are integers, which allows them to ensure that no non-integral coordinates exist after the first two rounding phases. ### Performance Analysis Recall that \(x^{*}\) denotes the optimal solution of **P.0**, let \(OPT=\sum_{S\in\mathcal{F}}x^{*}_{S}f(S)\) denote the utility of the optimal solution. The following theorem states that \(A\), the solution set returned from our algorithm, is a _near_ feasible solution of **P.0** and has a utility of at least \((1-1/e)^{2}OPT\). Theorem 3.1: _Let \(A\) be the set returned by our algorithm and \(OPT\) be the utility of the optimal solution of **P.0**. It follows that:_ \[f(A)\geq(1-1/e)^{2}OPT. 
\tag{5}\] _Moreover, \(A\) always satisfies the cardinality constraint and nearly satisfies the fairness constraints of **P.0**, i.e., \(|A|\leq b\) and \(\lfloor\alpha_{t}\rfloor\leq|A\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t \in[m]\)._ Proof: We first prove that \(|A|\leq b\) always holds. Observe that the fractional solution \(y^{\prime}\) found by the continuous greedy algorithm belongs to \(B\), hence, \(\sum_{i\in V}y^{\prime}_{i}\leq b\). Moreover, phases 1 and 2 in the rounding stage do not change this value, and phase 3 rounds the last non-integral coordinate, if any, to one. It follows that \(|A|\leq\lceil\sum_{i\in V}y^{\prime}_{i}\rceil\leq b\) where the second inequality is by the observations that \(\sum_{i\in V}y^{\prime}_{i}\leq b\) and \(b\) is an integer. We next prove that \(A\) nearly satisfies the fairness constraints of **P.0**, i.e., \(\lfloor\alpha_{t}\rfloor\leq|A\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t \in[m]\). Because \(y^{\prime}\in B\), it holds that \(\alpha_{t}\leq\sum_{i\in V_{t}}y^{\prime}_{i}\leq\beta_{t},\forall t\in[m]\). Observe that phase 1 does not change this value, phases 2 and 3 round at most one fractional coordinate from each group to a binary value. Hence, \(|\sum_{i\in V_{t}}y^{\prime}_{i}|\leq|A\cap V_{t}|\leq\lceil\sum_{i\in V_{t}} y^{\prime}_{i}\rceil,\forall t\in[m]\). This, together with \(\alpha_{t}\leq\sum_{i\in V_{t}}y^{\prime}_{i}\leq\beta_{t},\forall t\in[m]\), implies that \(\lfloor\alpha_{t}\rfloor\leq|A\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t \in[m]\). At last, we prove the approximation ratio of \(A\). Recall that \(y^{*}\) denotes the optimal solution of **P.1**, [6] has proved that if \(f\) is monotone and submodular, then the fractional solution \(y^{\prime}\) returned from the continuous greedy algorithm has a utility of at least \((1-1/e)F(y^{*})\), i.e., \(F(y^{\prime})\geq(1-1/e)F(y^{*})\). This, together with Lemma 2, implies that \(F(y^{\prime})\geq(1-1/e)^{2}\sum_{S\in\mathcal{F}}x^{*}_{S}f(S)=(1-1/e)^{2}OPT\). To prove \(f(A)\geq(1-1/e)^{2}OPT\), it suffices to show that \(f(A)\geq F(y^{\prime})\). We next prove this inequality. Observe that in phases 1 and 2 of the rounding stage, we perform pipage rounding to round \(y^{\prime}\) to a vector \(y\) that contains at most one non-integral coordinate. According to [6], pipage rounding does not decrease the expected utility of \(y^{\prime}\), that is, \(F(y)\geq F(y^{\prime})\). In phase 3, we round the last non-integral coordinate in \(y\), if any, to one. This operation does not decrease the expected utility of \(y\) by the assumption that \(f\) is monotone. Hence, \(F(y)\geq F(y^{\prime})\) still holds. Recall that \(y\) is the indicator vector of \(A\), hence, \(f(A)=F(y)\). Therefore, \(f(A)=F(y)\geq F(y^{\prime})\). \(\Box\) **Remark 1:** It follows immediately from the preceding theorem that if \(\alpha_{t}\) and \(\beta_{t}\) are both integers for all \(t\in[m]\), then our solution strictly satisfies all fairness constraints of problem **P.0**. ### A Fast Greedy Algorithm Our prior algorithm involves solving a multi-linear relaxation problem, which can be slow and computationally expensive, particularly for large scale problems. In this section, we introduce a simple greedy algorithm that offers a significant increase in speed but with a trade-off in the form of a decreased approximation ratio. 
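The expense referred to here comes chiefly from evaluating the multilinear extension: since \(F(y)=\mathbb{E}[f(S_{y})]\) is defined as an expectation, the quantities \(F(i\mid y^{l})\) in Algorithm 1 are in practice estimated by repeated sampling. A minimal sketch of such a sampling estimator is given below; the sample counts, helper names, and toy coverage function are illustrative assumptions, not part of the paper.

```python
import random

def multilinear_F(f, y, items, num_samples=200, rng=random):
    """Monte Carlo estimate of F(y) = E[f(S_y)], where S_y contains each
    item i independently with probability y[i] (definition in Section 3)."""
    total = 0.0
    for _ in range(num_samples):
        S = {i for i in items if rng.random() < y[i]}
        total += f(S)
    return total / num_samples

def marginal_F(f, i, y, items, num_samples=200):
    """Estimate F(i | y) = F(e_i v y) - F(y), cf. Eq. (4)."""
    y_up = dict(y)
    y_up[i] = 1.0
    return (multilinear_F(f, y_up, items, num_samples)
            - multilinear_F(f, y, items, num_samples))

# Toy example: coverage-style submodular function on 4 items
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}, 3: {"d"}}
f = lambda S: len(set().union(*[cover[i] for i in S])) if S else 0
items = list(cover)
y = {0: 0.5, 1: 0.5, 2: 0.25, 3: 0.25}
print(round(multilinear_F(f, y, items, 5000), 2))
print(round(marginal_F(f, 3, y, items, 5000), 2))
```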
Even though **P.0** permits the use of randomized solutions, Theorem 3.1 shows that a deterministic solution is sufficient for obtaining a constant-factor approximation for **P.0**. We next present a simple greedy algorithm that effectively finds a near optimal deterministic solution, which in turn results in a constant-factor approximation for the problem **P.0**. To this end we introduce a new optimization problem **P.2**, a deterministic version of **P.0** (with relaxed fairness constraints). \begin{tabular}{|l|} \hline **P.2**\(\max_{S\in\mathcal{F}}f(S)\) \\ **subject to:** \\ \(\lfloor\alpha_{t}\rfloor\leq|S\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t\in [m]\). \\ \hline \end{tabular} Note that in **P.2** we use \(\lfloor\alpha_{t}\rfloor\) and \(\lceil\beta_{t}\rceil\) as lower and upper bounds, hence a feasible solution of **P.2** is a near feasible solution of the original problem **P.0**. The following lemma states that the optimal solution of **P.2** attains a \((1-1/e)^{2}\) approximation of the problem **P.0**. Lemma 3: _Let \(A^{P2}\) denote the optimal solution of **P.2**, it holds that \(f(A^{P2})\geq(1-1/e)^{2}OPT\) where \(OPT\) is the optimal solution of **P.0**._ _Proof:_ Recall that in Theorem 1, we show that \(f(A)\geq(1-1/e)^{2}OPT\) where \(A\) satisfies all constraints in **P.2**. Because \(A^{P2}\) is the optimal solution of **P.2**, we have \(f(A^{P2})\geq f(A)\geq(1-1/e)^{2}OPT\). \(\Box\) We next present a simple greedy algorithm to attain a \(1/2\) approximation of **P.2**. First, we present **P.3**, a relaxed problem of **P.2**. \begin{tabular}{|l|} \hline **P.3**\(\max_{S\subseteq V}f(S)\) \\ **subject to:** \\ \(\begin{cases}|S\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t\in[m].\\ \sum_{t\in[m]}\max\{\lfloor\alpha_{t}\rfloor,|S\cap V_{t}|\}\leq b\end{cases}\). \\ \hline \end{tabular} It is easy to verify that any feasible solution of **P.2** must be a feasible solution of **P.3**. Hence, \(f(A^{P2})\leq f(A^{P3})\) where \(A^{P3}\) is the optimal solution of **P.3**. It has been shown that the constraints listed in **P.3** constitute a matroid [14]. Hence, **P.3** is a classic submodular maximization problem subject to a matroid constraint. A simple greedy algorithm guarantees a \(1/2\) approximation of **P.3**[19]. This algorithm works by iteratively adding items to the solution set such that at each step, the marginal increase in the objective value is maximized, and the matroid constraint is satisfied, and it terminates when the current solution set can not be expanded. Let \(A^{g}\) denote the solution returned from the greedy algorithm, it holds that \[f(A^{g})\geq(1/2)f(A^{P3})\geq(1/2)f(A^{P2}). \tag{6}\] Moreover, it is easy to verify that \(A^{g}\) must be a feasible solution of **P.2**. Theorem 2: _Let \(A^{g}\) denote the solution of returned from the greedy algorithm, it holds that \(f(A^{g})\geq\frac{(1-1/e)^{2}}{2}\cdot OPT\). Moreover, \(A^{g}\) always satisfies the cardinality constraint and nearly satisfies the fairness constraints of **P.0**, i.e., \(|A^{g}|\leq b\) and \(|\alpha_{t}\rfloor\leq|A^{g}\cap V_{t}|\leq\lceil\beta_{t}\rceil,\forall t\in [m]\)._ _Proof:_ The proof of the first part of this theorem stems from inequality (6) and Lemma 3. The second part of this theorem is because \(A^{g}\) is a feasible solution to problem **P.2**. \(\square\) ## 4 A Feasible \((1-1/e)\)-approximation Randomized Algorithm We now present a randomized algorithm for **P.0**. 
In contrast to the results presented in the previous section, our randomized solution offers three advantages: (1) our solution does not rely on the assumption of non-overlapping groups, (2) our solution satisfies all fairness constraints in a _strict_ sense, and (3) we achieve the optimal approximation ratio of \(1-1/e\). As previously stated, **P.0** has a number of variables equal to the number of elements in \(\mathcal{F}\), which can become extremely large when \(n\) is significant. This means that standard LP solvers are unable to handle this LP problem effectively. To tackle this issue, we resort to its corresponding dual problem (**Dual of P.0**) and utilize the ellipsoid algorithm [16] to solve it. In the dual problem, we assign two "weights" \(z_{t}\in\mathbb{R}_{\geq 0}\) and \(u_{t}\in\mathbb{R}_{\geq 0}\) to each group \(V_{t}\) and there is an additional variable \(w\in\mathbb{R}_{\geq 0}\). **Dual of P.0**\(\min_{z\in\mathbb{R}_{\geq 0}^{m},u\in\mathbb{R}_{\geq 0}^{m},w\in\mathbb{R}_{\geq 0} \sum_{t\in[m]}(\beta_{t}u_{t}-\alpha_{t}z_{t})+w}\) **subject to:**\(w\geq f(\tilde{S})+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t}),\forall S \in\mathcal{F}\). At a high level, the ellipsoid method is a powerful tool used to determine the emptiness of a given non-degenerate convex set \(C\). For our problem, \(C\) represents the feasible region of **Dual of P.0**. This method begins by defining an ellipsoid that is guaranteed to contain \(C\). In each iteration, the algorithm checks if the center of the current ellipsoid is located within \(C\). If it is, this means that \(C\) is nonempty and that the current solution is feasible. In this case, the method tries a smaller objective. On the other hand, if the center of the current ellipsoid is not located within \(C\), the method uses an (approximate) separation oracle to identify a violated constraint and finds a smaller ellipsoid with a center that satisfies that constraint. This process continues until a feasible solution is found or the volume of the bounding ellipsoid is so small that \(C\) is considered empty, meaning that no feasible solution with a smaller objective can be found. It is important to note that this method operates in a geometric fashion and does not require an explicit description of the LP. The only requirement is a polynomial-time (approximate) separation oracle, which can determine whether a point lies within \(C\) and, if not, return a separating hyperplane. This method has a polynomial number of iterations for solving linear problems. In the context of our problem, we approximately solve the SubMax problem to check the feasibility of the current solution and act as the separation oracle. Definition 2 (SubMax): Given a utility function \(f\), a cardinality constraint \(b\), and two vectors \(z\in\mathbb{R}_{\geq 0}^{m}\) and \(u\in\mathbb{R}_{\geq 0}^{m}\), \(\textsc{SubMax}(z,u,b)\) aims to \[\max_{S\in\mathcal{F}}(f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t})). \tag{7}\] \(\textsc{SubMax}(z,u,b)\) asks for a set \(S\) of size at most \(b\) such that \(f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t})\) is maximized. Observe that \(f\) is non-negative monotone and submodular; and \(\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t})\) is a modular function in terms of \(S\), hence, \(\textsc{SubMax}(z,u,b)\) is a classic problem of maximizing the summation of a non-negative monotone submodular and a modular function under cardinality constraints. 
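To illustrate the interface of the \(\textsc{SubMax}(z,u,b)\) oracle on a toy instance, the sketch below uses a plain greedy heuristic: it repeatedly adds the item with the largest marginal gain of \(f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t})\). This stand-in is an assumption made for illustration only; it does not achieve the guarantee of Eq. (8) below, which requires the algorithm of [21].

```python
def submax_greedy(f, groups, z, u, b):
    """Illustrative greedy stand-in for the SubMax(z, u, b) oracle.

    Heuristically maximizes f(S) + sum_t |S ∩ V_t| * (z_t - u_t) over |S| <= b.
    NOTE: this plain greedy does not carry the guarantee of Eq. (8); that
    requires the randomized algorithm of [21].
    """
    # Per-item modular weight: sum of z_t - u_t over the groups containing the item.
    weight = {}
    for t, members in groups.items():
        for i in members:
            weight[i] = weight.get(i, 0.0) + (z[t] - u[t])
    V = list(weight)
    S = set()
    for _ in range(b):
        best, best_gain = None, 0.0
        for i in V:
            if i in S:
                continue
            gain = f(S | {i}) - f(S) + weight[i]
            if best is None or gain > best_gain:
                best, best_gain = i, gain
        if best is None or best_gain <= 0:
            break
        S.add(best)
    return S

# Toy usage with a coverage-style f and two groups V_1, V_2
cover = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*[cover[i] for i in S])) if S else 0
groups = {0: [0, 1], 1: [2, 3]}
print(submax_greedy(f, groups, z={0: 0.0, 1: 0.5}, u={0: 0.2, 1: 0.0}, b=2))
```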
[21] presented a randomized polynomial time algorithm that produces a set \(A\) such that for every \(S\in\mathcal{F}\), it holds that \[f(A)+\sum_{t\in[m]}|A\cap V_{t}|\cdot(z_{t}-u_{t})\geq(1-1/e)f(S)+\sum_{t\in[ m]}|S\cap V_{t}|\cdot(z_{t}-u_{t}). \tag{8}\] Now we are ready to present the main theorem of this section. Theorem 3.1: _There exists a polynomial time \((1-1/e)\)-approximation algorithm (with additive error \(\epsilon\)) for **P.0**._ The rest of this section is devoted to proving Theorem 3.1, that is, we present a polynomial \((1-1/e)\)-approximation algorithm for **P.0**. Let \(C(L)\) denote the set of \((z\in\mathbb{R}_{\geq 0}^{m},u\in\mathbb{R}_{\geq 0}^{m},w\in\mathbb{R}_{\geq 0})\) satisfying that \[\sum_{t\in[m]}(\beta_{t}u_{t}-\alpha_{t}z_{t})+w\leq L,\] \[w\geq f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t}),\forall S\in \mathcal{F}.\] We use binary search to determine the smallest value of \(L\) for which \(C(L)\) is not empty. For a given \(L\), we first evaluate the inequality \(\sum_{t\in[m]}(\beta_{t}u_{t}-\alpha_{t}z_{t})+w\leq L\). Then, we run algorithm from [21] (labeled as \(\mathcal{A}\)) to solve SubMax\((z,u,b)\). Let \(A\) be the solution set returned from \(\mathcal{A}\). * If \(f(A)+\sum_{t\in[m]}|A\cap V_{t}|\cdot(z_{t}-u_{t})\leq w\), then \(C(L)\) is marked as non-empty. In this case, we try a smaller \(L\). * If \(f(A)+\sum_{t\in[m]}|A\cap V_{t}|\cdot(z_{t}-u_{t})>w\), then \((z,w)\notin C(L)\) and \(A\) is a separating hyperplane. We find a smaller ellipsoid with a center that satisfies that constraint. This process continues until a feasible solution in \(C(L)\) is found (in this case, we try a smaller \(L\)) or the volume of the bounding ellipsoid is so small that \(C(L)\) is considered empty (in this case, we conclude that the current objective is not achievable and will therefore try a larger \(L\)). To learn about the specific steps involved in running ellipsoid using separation oracles and achieving (multiplicative and additive) approximate guarantees, we suggest referring to Chapter 2 of [5]. Assume \(L^{*}\) is the minimum value of \(L\) for which the algorithm determines that \(C(L)\) is non-empty. Hence, there exists a \((z^{*},u^{*},w^{*})\) such that \[\sum_{t\in[m]}(\beta_{t}u_{t}^{*}-\alpha_{t}z_{t}^{*})+w^{*}\leq L^{*} \tag{9}\] and \[f(A)+\sum_{t\in[m]}|A\cap V_{t}|\cdot(z_{t}^{*}-u_{t}^{*})\leq w^{*} \tag{10}\] where \(A\) is the output obtained from algorithm \(\mathcal{A}\) after solving SubMax\((z^{*},u^{*},b)\). Let \(\mu=1-1/e\), it follows that \(\forall S\in\mathcal{F}\), \[f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(u_{t}^{*}-z_{t}^{*})/\mu \leq (f(A)+\sum_{t\in[m]}|A\cap V_{t}|\cdot(u_{t}^{*}-z_{t}^{*}))/\mu \tag{11}\] \[\leq w^{*}/\mu\] where the first inequality is by (8) and the second inequality is by inequality (10). In addition, inequality (9) implies that \[\sum_{t\in[m]}(\beta_{t}u_{t}^{*}-\alpha_{t}z_{t}^{*})/\mu+w^{*}/\mu\leq L^{*} /\mu. \tag{12}\] Inequality (11) implies that \((z^{*}/\mu,u^{*}/\mu,w^{*}/\mu)\) is a feasible solution of **Dual of P.0**. This, together with inequality (12), implies that the optimal solution of **Dual of P.0** and thus the optimal solution of **P.0** (by strong duality) is upper bounded by \(\frac{1}{\mu}\cdot L^{*}\). By finding a solution to **P.0** with a value of \(L^{*}\), we attain a \(\mu\)-approximation solution for the original problem **P.0**. 
Here, we explain how to compute such a solution using only feasible solution sets corresponding to the separating hyperplanes found by the separation oracle. Assume \(L^{*}-\epsilon\) is the largest value of \(L\) for which the algorithm determines that \(C(L)\) is empty, where \(\epsilon\) is decided by the precision of our algorithm. Let \(\mathcal{F}^{\prime}\) denote the subset of \(\mathcal{F}\) for which the dual constraint is violated in the implementation of the ellipsoid algorithm on \(C(L^{*}-\epsilon)\). Then, \(|\mathcal{F}^{\prime}|\) is polynomial. We consider the following polynomial sized **Dual of P.0** (labeled as **Poly-sized Dual of P.0**), using separating hyperplanes from \(\mathcal{F}^{\prime}\). **Poly-sized Dual of P.0**\(\min_{z\in\mathbb{R}_{\geq 0}^{m},u\in\mathbb{R}_{\geq 0}^{m},w\in\mathbb{R}_{\geq 0 }}\sum_{t\in[m]}(\beta_{t}u_{t}-\alpha_{t}z_{t})+w\) **subject to:**\(w\geq f(S)+\sum_{t\in[m]}|S\cap V_{t}|\cdot(z_{t}-u_{t}),\forall S\in \mathcal{F}^{\prime}\). Because \(C(L^{*}-\epsilon)\) is empty, the value of the optimal solution to **Poly-sized Dual of P.0** is at least \(L^{*}-\epsilon\). Hence, the value of the dual of **Poly-sized Dual of P.0**, which is listed in **Poly-sized P.0**, is at least \(L^{*}-\epsilon\). Note that **Poly-sized P.0** is of polynomial size. Recall that the optimal solution of **P.0** is upper bounded by \(\frac{1}{\mu}\cdot L^{*}\), obtaining an optimal solution from **Poly-sized P.0** provides a \(\mu\)-approximation for **P.0** (with additive error \(\epsilon\)), where \(\mu=1-1/e\). ## 5 Conclusion We introduce a set of fairness criteria, including restricted dominance and minority protection, to ensure that the proportion of selected items from each group falls within the desired range. We present a \((1-1/e)^{2}\)-approximation deterministic algorithm and a fast greedy deterministic algorithm that approximately satisfy the fairness constraints. Additionally, we propose a randomized algorithm that strictly satisfies all fairness constraints and achieves the optimal approximation ratio of \(1-1/e\).
2307.12432
The Anti-Self-Dual Deformation Complex and a conjecture of Singer
Let $(M^4,g)$ be a smooth, closed, oriented anti-self-dual (ASD) four-manifold. $(M^4,g)$ is said to be unobstructed if the cokernel of the linearization of the self-dual Weyl tensor is trivial. This condition can also be characterized as the vanishing of the second cohomology group of the ASD deformation complex, and is central to understanding the local structure of the moduli space of ASD conformal structures. It also arises in construction of ASD manifolds by twistor and gluing methods. In this article we give conformally invariant conditions which imply an ASD manifold of positive Yamabe type is unobstructed.
A. Rod Gover, Matthew J. Gursky
2023-07-23T21:05:11Z
http://arxiv.org/abs/2307.12432v1
# The anti-self-dual deformation complex and a conjecture of Singer

###### Abstract

Let \((M^{4},g)\) be a smooth, closed, oriented anti-self-dual (ASD) four-manifold. \((M^{4},g)\) is said to be _unobstructed_ if the cokernel of the linearization of the self-dual Weyl tensor is trivial. This condition can also be characterized as the vanishing of the second cohomology group of the ASD deformation complex, and is central to understanding the local structure of the moduli space of ASD conformal structures. It also arises in the construction of ASD manifolds by twistor and gluing methods. In this article we give conformally invariant conditions which imply an ASD manifold of positive Yamabe type is unobstructed.

2020 Mathematics Subject Classification: 53C21, 53C07

## 1. Introduction

Let \(M^{4}\) be a smooth, closed, oriented four-manifold. Given a Riemannian metric \(g\) on \(M^{4}\), the bundle of two-forms \(\Lambda^{2}=\Lambda^{2}(M^{4})\) splits into the sub-bundles of self-dual and anti-self-dual two-forms under the action of the Hodge \(\star\)-operator: \[\Lambda^{2}=\Lambda^{2}_{+}\oplus\Lambda^{2}_{-}.\] By a result of Singer-Thorpe [33], the curvature operator \(Rm:\Lambda^{2}\to\Lambda^{2}\) has a canonical block decomposition of the form \[Rm=\left(\begin{array}{cc}A^{+}&B\\ B^{t}&A^{-}\end{array}\right),\] where \(A^{\pm}:\Lambda^{2}_{\pm}\to\Lambda^{2}_{\pm}\) and \(B:\Lambda^{2}_{+}\to\Lambda^{2}_{-}\). If \(W\) denotes the Weyl tensor, then \(W^{\pm}:\Lambda^{2}_{\pm}\to\Lambda^{2}_{\pm}\), and \[A^{\pm}=W^{\pm}+\frac{1}{12}R\,I,\] where \(I\) is the identity and \(R\) is the scalar curvature.

**Definition 1.1**.: We say that \((M^{4},g)\) is _anti-self-dual_ (ASD) if \(W^{+}_{g}\equiv 0\).

The notion of (anti-)self-duality is conformally invariant: if \(W^{+}_{g}=0\) for a metric \(g\) and \(\tilde{g}=e^{f}g\), then \(W^{+}_{\tilde{g}}=0\). This property will be crucial for the proof of our main result below. There are topological obstructions to the existence of ASD metrics. By the Hirzebruch signature formula, \[48\pi^{2}\tau(M^{4})=\int\left(|W^{+}_{g}|^{2}-|W^{-}_{g}|^{2}\right)\,dv_{g}, \tag{1.1}\] where \(\tau(M^{4})\) is the signature of the intersection form on \(H^{2}_{dR}(M^{4})\). In particular, we see that if \((M^{4},g)\) is ASD then \(\tau(M^{4})\leq 0\), with equality if and only if \(g\) is LCF. If \((M^{4},g)\) is ASD with positive scalar curvature, then the intersection form is actually definite (see Proposition 1 of [28]). To see this, we first observe that the splitting of \(\Lambda^{2}(M^{4})\) induces a splitting of the space of harmonic two-forms into self-dual and anti-self-dual parts.
Since \(W^{+}_{g}\equiv 0\), the Weitzenbock formula for self-dual two-forms shows that a self-dual harmonic two-form must vanish whenever the scalar curvature is positive; hence \(b^{+}_{2}(M^{4})=0\) and the intersection form is negative definite.

Now let \(\mathcal{M}(M^{4})\) denote the space of Riemannian metrics on \(M^{4}\), and consider the map which assigns to a metric its self-dual Weyl tensor. Given an ASD metric \(g\), we let \(\mathcal{D}=\mathcal{D}_{g}\) denote the linearization of this map at \(g\); it takes values in a bundle \(\mathcal{W}^{+}\) of algebraic self-dual Weyl tensors, which we view as a subbundle of \(S^{2}_{0}(\Lambda^{2}_{+})\), the symmetric, trace-free endomorphisms of \(\Lambda^{2}_{+}\). Note that \[\mathcal{D}_{g}:\Gamma(S^{2}(T^{*}M^{4}))\to\Gamma(\mathcal{W}^{+})\subset\Gamma(S^{2}_{0}(\Lambda^{2}_{+})). \tag{1.4}\] In fact, since \(g\) is ASD, by conformal invariance \(\mathcal{D}_{g}(fg)=0\) for any \(f\in C^{\infty}(M^{4})\), hence \[\mathcal{D}_{g}:\Gamma(S^{2}_{0}(T^{*}M^{4}))\to\Gamma(\mathcal{W}^{+}). \tag{1.5}\] We also let \[\mathcal{D}^{*}_{g}:\Gamma(\mathcal{W}^{+})\to\Gamma(S^{2}_{0}(T^{*}M^{4})) \tag{1.6}\] denote the \(L^{2}\)-formal adjoint of \(\mathcal{D}_{g}\). Although the formula for \(\mathcal{D}_{g}\) is somewhat involved, the formula for \(\mathcal{D}^{*}\) is much more compact (see Proposition A.4 of [23]): \[\mathcal{D}^{*}U_{ij}=2\left(\nabla^{k}\nabla^{\ell}U_{ikj\ell}+P^{k\ell}U_{ikj\ell}\right), \tag{1.7}\] where \(P\) denotes the Schouten tensor (see Section 2). Let \(\mathcal{K}_{g}:\Gamma(T^{*}M^{4})\to S^{2}_{0}(T^{*}M^{4})\) denote the Killing operator: \[\mathcal{K}_{g}(\omega)_{ij}=\nabla_{i}\omega_{j}+\nabla_{j}\omega_{i}-\frac{1}{2}(\delta_{g}\omega)\,g_{ij},\] where \(\delta_{g}\omega=\nabla^{k}\omega_{k}\) is the divergence of \(\omega\). The kernel of \(\mathcal{K}\) consists of those one-forms whose dual vector fields are conformal Killing. Moreover, by diffeomorphism and conformal invariance, \[\operatorname{Im}\mathcal{K}\subset\ker\mathcal{D}.\] The _ASD deformation complex_ is given by \[\Gamma(T^{*}M^{4})\xrightarrow{\mathcal{K}}\Gamma(S^{2}_{0}(T^{*}M^{4}))\xrightarrow{\mathcal{D}}\Gamma(S^{2}_{0}(\Lambda^{2}_{+})). \tag{1.8}\] This complex is elliptic; see [26], Section 2.
The associated cohomology groups are given by \[H^{0}_{ASD}(M^{4},g) =\ker\mathcal{K}_{g},\] \[H^{1}_{ASD}(M^{4},g) =\{h\in\Gamma(S^{2}_{0}(T^{*}M^{4}))\,:\,\delta_{g}h=0,\ \mathcal{D}_{g}h=0\},\] \[H^{2}_{ASD}(M^{4},g) =\ker\mathcal{D}^{*}_{g}, \tag{1.9}\] where for \(h\in\Gamma(S^{2}(T^{*}M^{4}))\), \((\delta_{g}h)_{j}=g^{ik}\nabla_{k}h_{ij}\) is the divergence. The Atiyah-Singer index theorem can be used to calculate the index of (1.8): \[\operatorname{Ind}_{ASD}(M^{4},g)=\frac{1}{2}\left(15\chi\left(M^{4}\right)+29\tau\left(M^{4}\right)\right).\] Footnote 1: This calculation was done by Singer in unpublished notes, but it can be found in the literature, for example in [26]. Vanishing of the cohomology groups also provides information on the local structure of the moduli space of ASD conformal structures:

**Proposition 1.2**.: _(See [23], [36]) Suppose \((M^{4},g)\) is ASD with_ \[H^{0}_{ASD}(M^{4},g)=\{0\},\ \ H^{2}_{ASD}(M^{4},g)=\{0\}.\] _Then the moduli space of anti-self-dual conformal structures near \(g\) is a smooth, finite-dimensional manifold of dimension \(\dim H^{1}_{ASD}(M^{4},g)\)._

This leads to the following definition:

**Definition 1.3**.: Let \((M^{4},g)\) be ASD. We say that \((M^{4},g)\) is _unobstructed_ if \(H^{2}_{ASD}(M^{4},g)=\{0\}\).

By the work of Floer [16] and Donaldson-Friedman [13], if \((M_{1},g_{1})\) and \((M_{2},g_{2})\) are unobstructed ASD manifolds, then the connected sum \(M_{1}\#M_{2}\) admits an ASD metric. Thus, we are led to the question: under what condition is an ASD manifold unobstructed? The following conjecture is often attributed to Singer:

**Conjecture 1.4**.: _Let \((M^{4},g)\) be ASD. If the Yamabe invariant of \((M^{4},g)\) is positive, then \((M^{4},g)\) is unobstructed._

For ASD Einstein manifolds the operators \(\mathcal{DD}^{*}\) and \(\mathcal{D}^{*}\mathcal{D}\) were explicitly computed in [24] and [27]. It follows from these calculations (and can also be seen by more or less direct calculation) that \((S^{4},g_{0})\) and \((\mathbb{CP}^{2},g_{FS})\) are unobstructed. We remark that for non-Einstein ASD manifolds, the formulas for these operators are fairly intractable. The vanishing of \(H^{2}_{ASD}(M^{4},[g])\) can sometimes be verified when the twistor space is explicitly known. For example, the LCF metrics on \(k\#(S^{3}\times S^{1})\) ([30], Theorem 8.2; [14]), and the ASD metrics on \(m\#\left(-\mathbb{CP}^{2}\right)\) constructed by LeBrun ([29]), are unobstructed. Our goal in this paper is to provide a criterion for the vanishing of \(H^{2}_{ASD}\) that only involves conformal invariants of the ASD manifold (and in particular does not depend on verifying any properties of the twistor space). Our main result is the following:

**Theorem 1.5**.: _Suppose \((M^{4},g)\) is ASD with Yamabe invariant \(Y(M^{4},[g])>0.\) If_ \[2\chi(M^{4})+3\tau(M^{4})\geq-\frac{1}{24\pi^{2}}Y(M^{4},[g])^{2}, \tag{1.10}\] _then \((M^{4},g)\) is unobstructed._

The proof of Theorem 1.5 relies on two key ideas.
The first is that any element \(U\in\ker\mathcal{D}^{*}\) can be associated to a self-dual harmonic two-form \(z=z(U)\in\Lambda^{2}_{+}\left(\mathcal{E}\right)\) taking its values in the _adjoint tractor bundle_ (see Section 2). Therefore, \(z\) satisfies a twisted version of the usual Weitzenbock formula for self-dual (real-valued) two-forms. This twisted version provides us with two identities for \(U\) (see Theorem 3.6). The second key idea is to make a judicious choice of conformal representative in order to show that the condition (1.10) implies the vanishing of \(U\). As explained in Section 4, we choose a conformal metric whose scalar curvature satisfies a differential inequality involving the Schouten tensor (see Theorem 4.1), and is adapted to the curvature terms appearing in the Weitzenbock formula(s) for \(U\). Curiously, to prove the existence of this metric we consider a modification of the functional determinant of a conformally covariant elliptic operator first computed by Branson and Orsted [3]. A robust theory for the existence of critical points of this functional was developed by Chang and Yang [10], and we are able to show that their ideas also give existence for our modified functional. We remark that the lower bound in (1.10) can likely be improved, and it is possible that the techniques of this paper can be used to ultimately remove any additional topological assumption. ### Organization In Section 2 we provide the necessary background on the tractor bundle and associated connection and metric. The main result (for our purposes) is Proposition 2.2, in which we associate to \(U\in\ker\mathcal{D}^{*}\) a twisted SD harmonic two-form \(z=z(U)\in\Lambda^{2}_{+}\left(\mathcal{E}\right)\). In Section 3 we compute two Weitzenbock formulas for \(U\) that follow from this correspondence. In Section 4 we give the proof of Theorem 1.5, and relegate the PDE aspects of our work to the Appendix. ### Acknowledgements The authors would like to express their sincere thanks to Claude LeBrun for initially suggesting this problem, and for being an invaluable resource during our work. The first author is supported by the Royal Society of New Zealand Marsden grant 19-UOA-008. The second author is supported by NSF grant DMS-2105460 and a Simons Foundation Fellowship in Mathematics, award 923208. This project began during a visit of the second author to the Department of Mathematics at the University of Auckland, in September 2022. The second author would like to thank the department for its support and hospitality. ## 2. Background and the interpretation via tractor calculus ### Some conventions for Riemannian geometry For index calculations we will use Penrose's abstract index notation, unless otherwise indicated. In this we write \(\mathcal{E}_{a}\) and \(\mathcal{E}^{a}\) as alternative notations for, respectively, the cotangent and tangent bundles, and the contraction \(\omega(v)\) of a 1-form \(\omega\) with a tangent vector \(v^{a}\) is written with a repeated index \(\omega_{a}v^{a}\) (mimicking the Einstein summation convention). Tensor bundles are denoted then by adorning the symbol \(\mathcal{E}\) with appropriate indices and sometimes also indicating symmetries. For example \(\mathcal{E}_{(ab)}\) is the notation for \(S^{2}T^{*}M\), the subbundle of symmetric tensors in \(T^{*}M\otimes T^{*}M\). 
Then our convention for the Riemann tensor \(R_{ab}{}^{c}{}_{d}\) is such that \[\begin{split}\left[\nabla_{a},\nabla_{b}\right]v^{c}&=R_{ab}{}^{c}{}_{d}v^{d},\\ \left[\nabla_{a},\nabla_{b}\right]\omega_{c}&=R_{abc}{}^{d}\omega_{d},\end{split} \tag{2.1}\] where \(\nabla_{a}\) is the Levi-Civita connection of a metric \(g_{ab}\), \(v\) any tangent vector field, and \(\omega\) is any one-form. Using the metric to raise and lower indices we have, for example \(R_{abcd}=g_{ce}R_{ab}{}^{e}{}_{d}\). This may be decomposed: \[R_{abcd}=W_{abcd}+2\left(g_{c[a}P_{b]d}+g_{d[b}P_{a]c}\right), \tag{2.2}\] where the completely trace-free part \(W_{abcd}\) is the _Weyl tensor_ and \(P_{ab}\) is the _Schouten tensor_. It follows that \[P_{ab}:=\frac{1}{n-2}\left(R_{ab}-\frac{R}{2\left(n-1\right)}g_{ab}\right) \tag{2.3}\] where \(R_{bc}=R_{ab}{}^{a}{}_{c}\) is the Ricci tensor and its metric trace \(R=g^{ab}R_{ab}\) is the scalar curvature. We will use \(J\) to denote the metric trace of the Schouten tensor, i.e. \(J:=g^{ab}P_{ab}\). Lastly, we have \[\begin{split} C_{abc}&:=2\nabla_{[b}P_{c]a},\\ B_{ab}&:=-\nabla^{c}C_{abc}+P^{dc}C_{dacb},\end{split} \tag{2.4}\] where \(C_{abc}\) and \(B_{ab}\) are the _Cotton_ and _Bach_ tensors, respectively. It should be noted that the Bianchi identities imply \[\left(n-3\right)C_{abc}=\nabla^{d}W_{dabc}. \tag{2.5}\] ### The tractor bundle and connection To treat and work with objects that are conformally invariant it is natural to work, at least partly, in the setting of conformal manifolds. Here by a conformal manifold \((M,\boldsymbol{c})\) we mean a smooth manifold of dimension \(n\geq 3\) equipped with an equivalence class \(\boldsymbol{c}\) of Riemannian metrics, where \(g_{ab}\), \(\widehat{g}_{ab}\in\boldsymbol{c}\) means that \(\widehat{g}_{ab}=\Omega^{2}g_{ab}\) for some smooth positive function \(\Omega\). On a general conformal manifold \((M,\boldsymbol{c})\), there is no distinguished connection on \(TM\). But there is an invariant and canonical connection on a closely related bundle, namely the conformal tractor connection on the standard tractor bundle, see [2, 7]. Here we review the basic conformal tractor calculus on Riemannian and conformal manifolds. See [2, 11, 18] for more details. Unless stated otherwise, calculations will be done with the use of a generic \(g\in\boldsymbol{c}\). On an \(n\)-manifold \(M\) the top exterior power of the tangent bundle \(\Lambda^{n}TM\) is a line bundle. Thus its square \(\mathcal{K}:=(\Lambda^{n}TM)^{\otimes 2}\) is canonically oriented and so one can take oriented roots of it: Given \(w\in\mathbb{R}\) we set \[\mathcal{E}[w]:=\mathcal{K}^{\frac{w}{2n}}, \tag{2.6}\] and refer to this as the bundle of conformal densities. For any vector bundle \(\mathcal{V}\), we write \(\mathcal{V}[w]\) to mean \(\mathcal{V}[w]:=\mathcal{V}\otimes\mathcal{E}[w]\). For example, \(\mathcal{E}_{(ab)}[w]\) denotes the symmetric second tensor power of the cotangent bundle tensored with \(\mathcal{E}[w]\), i.e. \(S^{2}T^{*}M\otimes\mathcal{E}[w]\) on \(M\). On a fixed Riemannian manifold \(\mathcal{K}\) is canonically trivialised by the square of the volume form, and so \(\mathcal{K}\) and its roots are not usually needed explicitly. However if we wish to change the metric conformally, or work on a conformal structure, then these objects become important. 
Since each metric in a conformal class determines a trivialisation of \(\mathcal{K}\), it follows easily that on a conformal structure there is a canonical section \(\boldsymbol{g}_{ab}\in\Gamma(\mathcal{E}_{(ab)})\)[2]. This has the property that for each positive section \(\sigma\in\Gamma(\mathcal{E}_{+}[1])\) (called a _scale_) \(g_{ab}:=\sigma^{-2}\boldsymbol{g}_{ab}\) is a metric in \(\boldsymbol{c}\). Moreover, the Levi-Civita connection of \(g_{ab}\) preserves \(\sigma\) and therefore \(\boldsymbol{g}_{ab}\). Thus it makes sense to use the conformal metric to raise and lower indices, even when we are chosing a particular metric \(g_{ab}\in\boldsymbol{c}\) and its Levi-Civita connection for calculations. It turns out that this simplifies many computations, and so in this section we will do that without further mention. (In the subsequent sections we will work with a fixed metric and use that to trivialise density bundles - so indices will be raised and lowered using the metric.) Considering Taylor series for sections of \(\mathcal{E}[1]\) one recovers the jet exact sequence at 2-jets, \[0\to\mathcal{E}_{(ab)}[1]\overset{\iota}{\to}J^{2}\mathcal{E}[1]\to J^{1} \mathcal{E}[1]\to 0. \tag{2.7}\] Note that \(J^{2}\mathcal{E}[1]\) and its sequence (2.7) are canonical objects on any smooth manifold. But with a conformal structure \(\boldsymbol{c}\) we have the orthogonal decomposition of \(\mathcal{E}_{ab}[1]\) into trace-free and trace parts \[\mathcal{E}_{ab}[1]=\mathcal{E}_{(ab)_{0}}[1]\oplus\boldsymbol{g}_{ab}\cdot \mathcal{E}[-1]. \tag{2.8}\] Thus we can canonically quotient \(J^{2}\mathcal{E}[1]\) by the image of \(\mathcal{E}_{(ab)_{0}}[1]\) under \(\iota\) (in (2.7)). The resulting quotient bundle is denoted \(\mathcal{T}^{*}\) (or \(\mathcal{E}_{A}\) in abstract indices), and called the conformal cotractor bundle. Observing that the jet exact sequence at 1-jets (of \(\mathcal{E}[1]\)), \[0\to\mathcal{E}_{b}[1]\overset{\iota}{\to}J^{1}\mathcal{E}[1]\to\mathcal{E}[1 ]\to 0,\] we see at once that \(\mathcal{T}^{*}\) has a composition series \[\mathcal{T}^{*}=\mathcal{E}[1]\,\dot{\mathbb{C}}\,\mathcal{E}_{a}[1]\,\dot{ \mathbb{C}}\,\mathcal{E}[-1], \tag{2.9}\] meaning that \(\mathcal{E}[-1]\) is a subbundle of \(\mathcal{T}^{*}\) and the quotient of \(\mathcal{T}^{*}\) by this (which is \(J^{1}\mathcal{E}[1]\)) has \(\mathcal{E}_{a}[1]\) as a subbundle, whereas there is a canonical projection \(X:\mathcal{T}^{*}\to\mathcal{E}[1]\). In abstract indices we write \(X^{A}\) for this map and call it the _canonical tractor_. Given a choice of metric \(g\in\boldsymbol{c}\), the formula \[\sigma\mapsto\frac{1}{n}[D_{A}\sigma]_{g}:=\begin{pmatrix}\sigma\\ \nabla_{a}\sigma\\ -\frac{1}{n}\left(\Delta+J\right)\sigma\end{pmatrix} \tag{2.10}\] (where \(\Delta\) is the Laplacian \(\nabla^{a}\nabla_{a}\)) gives a second-order differential operator on \(\mathcal{E}[1]\) which is a linear map \(J^{2}\mathcal{E}[1]\to\mathcal{E}[1]\oplus\mathcal{E}_{a}[1]\oplus\mathcal{E }[-1]\) that clearly factors through \(\mathcal{T}^{*}\) and so determines an isomorphism \[\mathcal{T}^{*}\overset{\sim}{\longrightarrow}[\mathcal{T}^{*}]_{g}=\mathcal{E }[1]\,\oplus\,\mathcal{E}_{a}[1]\,\oplus\,\mathcal{E}[-1]. \tag{2.11}\] In subsequent discussions, we will use (2.11) to split the tractor bundles without further comment. 
Thus, given \(g\in\boldsymbol{c}\), an element \(V_{A}\) of \(\mathcal{E}_{A}\) may be represented by a triple \((\sigma,\mu_{a},\rho)\), or equivalently by \[V_{A}=\sigma Y_{A}+\mu_{a}{Z_{A}}^{a}+\rho X_{A}. \tag{2.12}\] The last display defines the algebraic splitting operators \(Y:\mathcal{E}[1]\to\mathcal{T}^{*}\) and \(Z:T^{*}M[1]\to\mathcal{T}^{*}\) (determined by the choice \(g_{ab}\in\boldsymbol{c}\)) which may be viewed as sections \(Y_{A}\in\Gamma(\mathcal{E}_{A}[-1])\) and \({Z_{A}}^{a}\in\Gamma(\mathcal{E}_{A}{}^{a}[-1])\). We call these sections \(X_{A},Y_{A}\) and \({Z_{A}}^{a}\) _tractor projectors_. By construction the tractor bundle is conformally invariant, i.e. determined by \((M,\boldsymbol{c})\) and independent of any choice of \(g\in\boldsymbol{c}\). However the splitting (2.12) is not. Considering the transformation of the operator (2.10) determining the splitting we see that if \(\widehat{g}=\Omega^{2}g\) the components of an invariant section of \(\mathcal{T}^{*}\) should transform according to: \[[\mathcal{T}^{*}]_{\widehat{g}}\ni\left(\begin{array}{c}\widehat{\sigma}\\ \widehat{\mu}_{b}\\ \widehat{\rho}\end{array}\right)=\left(\begin{array}{ccc}1&0&0\\ \Upsilon_{b}&\delta_{b}^{c}&0\\ -\frac{1}{2}\Upsilon^{2}&-\Upsilon^{c}&1\end{array}\right)\left(\begin{array}{c}\sigma\\ \mu_{c}\\ \rho\end{array}\right)\sim\ \left(\begin{array}{c}\sigma\\ \mu_{b}\\ \rho\end{array}\right)\in[\mathcal{T}^{*}]_{g}, \tag{2.13}\] where \(\Upsilon_{a}=\Omega^{-1}\nabla_{a}\Omega\), and conversely this transformation of triples is the hallmark of an invariant tractor section. Equivalent to the last display is the rule for how the algebraic splitting operators transform \[\widehat{X}_{A}=X_{A},\quad\widehat{{Z}_{A}}^{b}={Z_{A}}^{b}+\Upsilon^{b}X_{A},\quad\widehat{Y}_{A}=Y_{A}-\Upsilon_{b}{Z_{A}}^{b}-\tfrac{1}{2}\Upsilon_{b}\Upsilon^{b}X_{A}\,. \tag{2.14}\] Given a metric \(g\in\boldsymbol{c}\), and the corresponding splittings, as above, the tractor connection is given by the formula \[\nabla_{a}^{\mathcal{T}}\begin{pmatrix}\sigma\\ \mu_{b}\\ \rho\end{pmatrix}:=\begin{pmatrix}\nabla_{a}\sigma-\mu_{a}\\ \nabla_{a}\mu_{b}+P_{ab}\sigma+\boldsymbol{g}_{ab}\rho\\ \nabla_{a}\rho-P_{ac}\mu^{c}\end{pmatrix}, \tag{2.15}\] where on the right hand side the \(\nabla\)s are the Levi-Civita connection of \(g\). Using the transformation of components, as in (2.13), and also the conformal transformation of the Schouten tensor, \[{P^{\widehat{g}}_{\,\,\,\,ab}}=P_{ab}-\nabla_{a}\Upsilon_{b}+\Upsilon_{a}\Upsilon_{b}-\frac{1}{2}g_{ab}\Upsilon_{c}\Upsilon^{c},\quad\widehat{g}=\Omega^{2}g, \tag{2.16}\] reveals that the triple on the right hand side transforms as a 1-form taking values in \(\mathcal{T}^{*}\) - i.e. again by (2.13) except twisted by \(\mathcal{E}_{a}\). Thus the right hand side of (2.15) is the splitting into slots of a conformally invariant connection \(\nabla^{\mathcal{T}}\) on (sections of) the bundle \(\mathcal{T}^{*}\). There is a nice conceptual origin for the connection (2.15). Using (2.16) and the transformation of the Levi-Civita connection it is straightforward to verify that the equation \[\nabla_{(a}\nabla_{b)_{0}}\sigma+P_{(ab)_{0}}\sigma=0 \tag{2.17}\] on conformal densities \(\sigma\in\Gamma(\mathcal{E}[1])\) is conformally invariant. As this is an overdetermined PDE, solutions in general do not exist. 
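To anticipate the discussion of prolongation in the next paragraph, we record here, as a sketch only, the closed system obtained from (2.17); the quantities \(\mu_{a}\) and \(\rho\) are auxiliary variables introduced for this purpose. If \(\sigma\) solves (2.17), set \(\mu_{a}:=\nabla_{a}\sigma\) and \(\rho:=-\frac{1}{n}(\Delta+J)\sigma\), so that \(\nabla_{a}\nabla_{b}\sigma+P_{ab}\sigma+\boldsymbol{g}_{ab}\rho=0\). Then \[\nabla_{a}\sigma-\mu_{a}=0,\qquad\nabla_{a}\mu_{b}+P_{ab}\sigma+\boldsymbol{g}_{ab}\rho=0,\qquad\nabla_{a}\rho-P_{ac}\mu^{c}=0,\] where the last equation follows by differentiating the second and using the contracted Bianchi identity. Comparing with (2.15), this says precisely that the tractor with components \((\sigma,\mu_{a},\rho)\) is parallel.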
Overdetermined linear PDEs are typically studied by prolongation and it is quickly verified that the tractor parallel transport given by (2.15) is exactly the closed system that arises from prolonging (2.17) (see [2, 11]). From this observation, and the formula (2.15), it follows that non-trivial solutions of (2.17) are non-vanishing on an open dense set (for \(M\) connected) on which the metric \(\sigma^{-2}\boldsymbol{g}_{ab}\) is Einstein - an observation that has a number of applications, see [11, 17] and references therein. The tractor bundle is also equipped with a conformally invariant signature \(\left(n+1,1\right)\) metric \(h_{AB}\in\Gamma\left(\mathcal{E}_{\left(AB\right)}\right)\), defined as a quadratic form by the mapping \[[V_{A}]_{g}=\begin{pmatrix}\sigma\\ \mu_{a}\\ \rho\end{pmatrix}\mapsto\mu_{a}\mu^{a}+2\sigma\rho=:h\left(V,V\right), \tag{2.18}\] together with the polarization identity. This is important not only by dint of its conformal invariance, but it is easily checked that this _tractor metric_ \(h\) is preserved by \(\nabla_{a}^{\mathcal{T}}\), i.e. \(\nabla_{a}^{\mathcal{T}}h_{AB}=0\). Thus it makes sense to use \(h_{AB}\) (and its inverse) to raise and lower tractor indices, and we do this henceforth without further comment. In particular \(X^{A}=h^{AB}X_{B}\) is the canonical tractor (and hence our use of the same kernel symbol). For computations the table of Figure 1 is useful. We see that \(h\) may be decomposed into a sum of projections \[h_{AB}=Z_{A}{}^{a}Z_{B}{}^{b}\boldsymbol{g}_{ab}+X_{A}Y_{B}+Y_{A}X_{B}\,.\] Finally for this section we note that, of course, the curvature of the tractor connection \(\kappa_{abCD}\) is determined by \[2\nabla_{[a}^{\mathcal{T}}\nabla_{b]}^{\mathcal{T}}V^{C}=\kappa_{ab}{}^{C}{}_{D}V^{D}\quad\text{for all}\quad V^{C}\in\Gamma\left(\mathcal{E}^{C}\right),\] and can be written in terms of tractor projectors as \[\kappa_{abCD}=W_{abcd}Z_{C}{}^{c}Z_{D}{}^{d}+C_{cab}Z_{C}{}^{c}X_{D}-C_{cab}X_{C}Z_{D}{}^{c}\,. \tag{2.19}\] The bundle \(\Lambda^{2}\mathcal{T}\) is often termed the _adjoint tractor bundle_, as it is a vector bundle modelled on the Lie algebra of the conformal group \(SO(n+1,1)\). So, as expected, the tractor curvature is a 2-form taking values in this bundle. ### The differential splitting operator In dimensions \(n\geqslant 4\) the tractor curvature is the image of a conformally invariant operator \(z\) acting on the Weyl curvature. The operator is as follows: **Lemma 2.1**.: _In any dimension \(n\geqslant 4\) there is a conformally invariant differential map_ \[z:\Gamma(\mathcal{W})\rightarrow\Gamma(\Lambda^{2}\otimes\Lambda^{2}\mathcal{T}), \tag{2.20}\] _given by_ \[\begin{split} U_{ij}{}^{k}{}_{\ell}\mapsto z_{ijAB}&=U_{ij}{}^{k\ell}Z_{kA}Z_{\ell B}-\frac{1}{n-3}(\delta U)^{k}{}_{ij}\left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)\\ &=\frac{1}{2}U_{ij}{}^{k\ell}\left(Z_{kA}Z_{\ell B}-Z_{\ell A}Z_{kB}\right)-\frac{1}{n-3}(\delta U)^{k}{}_{ij}\left(X_{A}Z_{kB}-Z_{kA}X_{B}\right),\end{split} \tag{2.21}\] _where_ \[(\delta U)_{kij}=\nabla^{m}U_{mkij}.\]

Figure 1. Tractor inner product

The conformal invariance is easily verified directly using (2.14) and the conformal transformation of the Levi-Civita connection. It can also be deduced from the conformal invariance of the tractor curvature \(\kappa_{abCD}\). 
In fact the map (2.20) is a standard "BGG-splitting operator", as in the theory [8, 5], and the conformal deformation sequence can de understood as arising from a twisting by \(\Lambda^{2}\mathcal{T}\) of the de Rham complex [6, 19]. We are, in particular, interested in the case of dimension \(n=4\). Then it is evident, from the formula (2.21), that if \(U\) is SD (or ASD) then so is \(z(U)\) as a tractor-twisted 2-form, as the Hodge-\(\star\) commutes with the Levi-Civita connection. We obtain more if also \(U\in\ker\mathcal{D}^{\ast}\). **Proposition 2.2**.: _Suppose \((M^{4},g)\) is ASD, and \(U\) is SD. Then \(U\in\ker\mathcal{D}^{\ast}\) iff \(z=z(U)\in\Lambda^{2}_{+}(\mathcal{E})\) is a harmonic, self-dual two form._ Proof.: First note that by standard BGG theory [8, 5, 15] this is true in the conformally flat setting. Now consider the curved case. Certainly \(\delta z\) is conformally invariant as it is a twisting of the usual divergence of 2-forms (in dimension 4) with the conformally invariant tractor connection. Thus, starting from the top (meaning from the left in the filtration on the adjoint tractor bundle that is induced from (2.9)), the first non-zero slot of \(\delta z\) must be conformally invariant and constructed from \(U\), its covariant derivatives and possibly curvature contracted with \(U\). From order considerations the last of these can only happen in the bottom slot. The very top slot is rank 2 and involves no derivatives of \(U\). Thus it is zero, as \(U\) is trace-free. At the next level we have one derivative of \(U\), but it is well known that in dimension 4 there is no such conformal invariant. Thus the image of \(\delta z\) lies in the bottom slot which contains a rank 2 tensor. Considering the knowledge of conformally flat case, it follows that this conformally invariant object must be a multiple of \(\mathcal{D}^{\ast}U\), plus possibly another conformally invariant rank 2 tensor constructed by contracting curvature into \(U\). It is easily checked that it is not possible to construct a rank 2 conformal invariant by contracting curvature into \(U\), as \(U\) is SD while the Weyl curvature is, by assumption, ASD. Thus the only possibility is that the bottom slot is a non-zero multiple of \(\mathcal{D}^{\ast}U\). We will give another proof of this result (by direct calculation) in the next section; see Corollary 3.3. **Corollary 2.3**.: _(See [4], Theorem 3.10; [22], Lemma 2.1) Suppose \((M^{4},g)\) is ASD and \(U\in\ker\mathcal{D}^{\ast}\). Let \(z\) be defined as in (2.21). Then_ \[\Delta z=\frac{1}{3}Rz=2Jz. \tag{2.22}\] Proof.: Since \(z\) satisfies a twisted version of the Weitzenbock formula (1.2), when \((M^{4},g)\) is ASD the only non-zero curvature term is given by \(\frac{1}{3}R\) in both versions. ## 3. Weitzenbock formula(s) In this section we use Proposition 2.2 and Corollary 2.3 to prove a Weitzenbock formula for \(z=z(U)\in\Lambda^{2}_{+}(\mathcal{E})\) when \(U\in\ker\mathcal{D}^{\ast}\). We begin with some more general calculations. **Proposition 3.1**.: _Let \(U\in\Gamma(\mathcal{W}^{+})\) and let \(z=z(U)\) be given by (2.21). 
Then the covariant derivative of \(z\) is given by_ \[\begin{split}\nabla_{m}z_{ijAB}&=\left(\delta U \right)_{mij}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)+\left(-\nabla_{m}\left(\delta U \right)^{\ell}_{\phantom{\ell}ij}-U_{ij}^{\phantom{\ell}k\ell}P_{mk}\right) \left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\\ &\quad+U_{ij}^{\phantom{ij}k}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B} \right)+\left(\frac{1}{2}\nabla_{m}U_{ij}^{\phantom{ij}k\ell}+\left(\delta U \right)^{k}_{\phantom{k}ij}\delta^{\ell}_{m}\right)\left(Z_{kA}Z_{\ell B}-Z_{ kB}Z_{\ell A}\right).\end{split} \tag{3.1}\] For the proof of this and the following proposition we will use the following formulas (see (6) from [18]): \[\nabla_{k}X_{A} =Z_{kA},\] \[\nabla_{k}Z_{A\ell} =-P_{k\ell}X_{A}-Y_{A}g_{k\ell},\] \[\nabla_{k}Y_{A} =P_{k\ell}Z_{A}^{\ell}. \tag{3.2}\] **Lemma 3.2**.: _The following formulas hold:_ \[\nabla_{m}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)=-\left(Y_{A}Z_{mB}-Z_{mA}Y_{B} \right)+\left(P_{m}^{q}X_{A}Z_{qA}-P_{m}^{q}Z_{qA}X_{B}\right), \tag{3.3}\] \[\nabla_{m}\left(X_{A}Z_{kB}-Z_{kA}Z_{B}\right)=-g_{km}\left(X_{A}Y_{B}-Y_{A}X_ {B}\right)-\left(Z_{kA}Z_{mB}-Z_{mA}Z_{kB}\right), \tag{3.4}\] \[\nabla_{m}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)=P_{mk}\left(X_{A}Y_{B}-Y_{A}X_ {B}\right)+P_{m}^{q}\left(Z_{qA}Z_{kB}-Z_{qB}Z_{kA}\right), \tag{3.5}\] \[\nabla_{m}\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right) =-P_{mk}\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)+P_{m\ell} \left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)\] \[\quad-g_{mk}\left(Y_{A}Z_{\ell B}-Y_{B}Z_{\ell A}\right)+g_{m\ell }\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right). \tag{3.6}\] Proof.: First, by the Leibniz rule, \[\nabla_{m}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)=\left(\nabla_{m}X_{A}\right)Y_{B} +X_{A}\left(\nabla_{m}Y_{B}\right)-\left(\nabla_{m}Y_{A}\right)X_{B}-Y_{A} \left(\nabla_{m}X_{B}\right). \tag{3.7}\] By (3.2) we find \[\nabla_{m}\left(X_{A}Y_{B}-Y_{A}X_{B}\right) =Z_{mA}Y_{B}+P_{mq}X_{A}Z_{qA}-P_{mq}Z_{qA}X_{B}-Y_{A}Z_{mB}\] \[=-\left(Y_{A}Z_{mB}-Z_{mA}Y_{B}\right)+\left(P_{mq}X_{A}Z_{qA}-P_ {mq}Z_{qA}X_{B}\right). 
\tag{3.8}\] Similarly, \[\nabla_{m} \left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)=\left(\nabla_{m}X_{A}\right) Z_{kB}+X_{A}\left(\nabla_{m}Z_{kB}\right)-\left(\nabla_{m}Z_{kA}\right)X_{B}-Z_{kA} \left(\nabla_{m}X_{B}\right)\] \[=Z_{mA}Z_{kB}+X_{A}\left(-P_{mk}X_{B}-Y_{B}g_{mk}\right)-\left(-P _{mk}X_{A}-Y_{A}g_{mk}\right)X_{B}-Z_{kA}Z_{mB}\] \[=-g_{km}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)-\left(Z_{kA}Z_{mB}-Z_{ mA}Z_{kB}\right), \tag{3.9}\] \[\nabla_{m} \left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)=\left(\nabla_{m}Y_{A}\right) Z_{kB}+Y_{A}\left(\nabla_{m}Z_{kB}\right)-\left(\nabla_{m}Z_{kA}\right)Y_{B}-Z_{kA} \left(\nabla_{m}Y_{B}\right)\] \[=P_{mq}Z_{qA}Z_{kB}+Y_{A}\left(-P_{mk}X_{B}-Y_{B}g_{mk}\right)- \left(-P_{mk}X_{A}-Y_{A}g_{mk}\right)Y_{B}-P_{mq}Z_{qB}Z_{kA}\] \[=P_{mk}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)+P_{mq}\left(Z_{qA}Z_{kB }-Z_{qB}Z_{kA}\right), \tag{3.10}\] \[\nabla_{m}\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right) =\left(\nabla_{m}Z_{kA}\right)Z_{\ell B}+Z_{kA}\left(\nabla_{m}Z _{\ell B}\right)-\left(\nabla_{m}Z_{kB}\right)Z_{\ell A}-Z_{kB}\left(\nabla_{ m}Z_{\ell A}\right)\] \[=\left(-P_{mk}X_{A}-Y_{A}g_{mk}\right)Z_{\ell B}+Z_{kA}\left(-P_ {m\ell}X_{B}-Y_{B}g_{m\ell}\right)\] \[\quad-\left(-P_{mk}X_{B}-Y_{B}\delta_{mk}\right)Z_{\ell A}-Z_{kB }\left(P_{m\ell}X_{A}-Y_{A}\delta_{m\ell}\right)\] \[=-P_{mk}\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)+P_{m\ell} \left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)\] \[\quad-g_{mk}\left(Y_{A}Z_{\ell B}-Y_{B}Z_{\ell A}\right)+g_{m\ell }\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right). \tag{3.11}\] Proof of Proposition 3.1.: By (2.21), \[\nabla_{m}z_{ijAB} =\nabla_{m}\Big{\{}\frac{1}{2}{U_{ij}}^{k\ell}\left(Z_{kA}Z_{\ell B }-Z_{\ell A}Z_{kB}\right)-\left(\delta U\right)^{k}_{ij}\left(X_{A}Z_{kB}-Z_{kA} X_{B}\right)\Big{\}}\] \[=\frac{1}{2}\nabla_{m}{U_{ij}}^{k\ell}\left(Z_{kA}Z_{\ell B}-Z_{ \ell A}Z_{kB}\right)+\frac{1}{2}{U_{ij}}^{k\ell}\nabla_{m}\left(Z_{kA}Z_{\ell B }-Z_{\ell A}Z_{kB}\right)\] \[\quad-\nabla_{m}\left(\delta U\right)^{k}_{ij}\left(X_{A}Z_{kB}-Z _{kA}X_{B}\right)-\left(\delta U\right)^{k}_{ij}\nabla_{m}\left(X_{A}Z_{kB}-Z_ {kA}X_{B}\right)\] \[=\frac{1}{2}\nabla_{m}{U_{ij}}^{k\ell}\left(Z_{kA}Z_{\ell B}-Z_{ \ell A}Z_{kB}\right)+\frac{1}{2}{U_{ij}}^{k\ell}\Big{\{}-P_{mk}\left(X_{A}Z_{ \ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+P_{m\ell}\left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)-g_{mk}\left(Y _{A}Z_{\ell B}-Y_{B}Z_{\ell A}\right)+g_{m\ell}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B} \right)\Big{\}}\] \[\quad-\nabla_{m}\left(\delta U\right)^{k}_{ij}\left(X_{A}Z_{kB}-Z _{kA}X_{B}\right)-\left(\delta U\right)^{k}_{ij}\Big{\{}-g_{km}\left(X_{A}Y_{B }-Y_{A}X_{B}\right)\] \[\quad\quad-\left(Z_{kA}Z_{mB}-Z_{mA}Z_{kB}\right)\Big{\}}\] \[=\left(\delta U\right)_{mij}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)+ \left(-\nabla_{m}\left(\delta U\right)^{\ell}_{ij}-{U_{ij}}^{k\ell}P_{mk} \right)\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+{U_{ij}}^{k}_{m}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)+\left( \frac{1}{2}\nabla_{m}{U_{ij}}^{k\ell}+\left(\delta U\right)^{k}_{ij}\delta^{ \ell}_{m}\right)\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right). \tag{3.12}\] We can now give another proof of Proposition 2.2: **Corollary 3.3**.: _Suppose \((M^{4},g)\) is ASD, and \(U\) is SD. Then \(U\in\ker\mathcal{D}^{*}\) iff \(z=z(U)\in\Lambda_{+}^{2}(\mathcal{E})\) is a harmonic, self-dual two form._ Proof.: Since \(z\) is self-dual, it suffices to show that \(\delta z=0\). 
By (3.1), \[\left(\delta z\right)_{jAB} =g^{im}\nabla_{m}z_{ijAB}\] \[=g^{im}\left(\delta U\right)_{mij}\left(X_{A}Y_{B}-Y_{A}X_{B} \right)+g^{im}\left(-\nabla_{m}\left(\delta U\right)^{\ell}_{ij}-{U_{ij}}^{k \ell}P_{mk}\right)\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+g^{im}{U_{ij}}^{k}_{m}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)+g ^{im}\left(\frac{1}{2}\nabla_{m}{U_{ij}}^{k\ell}+\left(\delta U\right)^{k}_{ ij}\delta^{\ell}_{m}\right)\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A} \right). \tag{3.13}\] Since \(U\) is a curvature-type tensor for which all contractions vanish, it follows that \[\left(\delta z\right)_{jAB}=-\frac{1}{2}\left(\mathcal{D}^{*}U\right)_{j\ell} \left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right). \tag{3.14}\] **Proposition 3.4**.: _Same assumptions as Proposition 3.1. Then the rough laplacian of \(z\), i.e., \(\Delta z_{ijAB}=g^{k\ell}\nabla_{k}\nabla_{\ell}z_{ijAB}\), is given by_ \[\Delta z_{ijAB}=\Big{\{}\left(W^{+}\right)_{pqi}^{k}{U^{qp}}_{kj} +\left(W^{+}\right)_{pqj}^{k}{U^{pq}}_{ik}\Big{\}}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)\] \[\quad+\Big{\{}-\Delta\left(\delta U\right)^{\ell}_{ij}-2\nabla^{m }{U_{ij}}^{k\ell}P_{mk}-{U_{ij}}^{k\ell}\nabla_{k}J+J\left(\delta U\right)^{ \ell}_{ij}\Big{\}}\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+\Big{\{}\frac{1}{2}\Delta{U_{ij}}^{k\ell}-\nabla^{k}\left( \delta U\right)^{\ell}_{ij}+\nabla^{\ell}\left(\delta U\right)^{k}_{ij}-{U_{ ij}}^{km}P_{m}^{\ell}+{U_{ij}}^{\ell m}P_{m}^{k}\Big{\}}\times\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{ \ell A}\right). \tag{3.15}\] Proof.: By (3.1), \[\Delta z_{ijAB} =\nabla^{m}\left(\nabla_{m}z_{ijAB}\right)\] \[=\nabla^{m}\Big{\{}\left(\delta U\right)_{mij}\left(X_{A}Y_{B}-Y_{A }X_{B}\right)+\left(-\nabla_{m}\left(\delta U\right)^{\ell}_{ij}-U_{ij}^{\ \ k\ell}P_{mk}\right)\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+U_{ij}^{\ \ k}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)+\left( \frac{1}{2}\nabla_{m}U_{ij}^{\ \ k\ell}+\left(\delta U\right)^{k}_{ij}\,\delta^{\ell}_{m} \right)\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right)\Big{\}}\] \[=\left(\nabla^{m}\left(\delta U\right)_{mij}\right)\left(X_{A}Y_{B }-Y_{A}X_{B}\right)+\left(-\Delta\left(\delta U\right)^{\ell}_{ij}-\nabla^{m}U _{ij}^{\ \ k\ell}P_{mk}-U_{ij}^{\ \ k\ell}\nabla^{m}P_{mk}\right)\] \[\times\ \left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)+\left(\nabla^{m }U_{ij}^{\ \ k}\right)\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)+\left(\frac{1}{2}\Delta U_{ ij}^{\ \ k\ell}+\nabla^{\ell}\left(\delta U\right)^{k}_{ij}\right)\] \[\times\ \left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right)+\left( \delta U\right)_{mij}\nabla^{m}\left(X_{A}Y_{B}-Y_{A}X_{B}\right)\] \[\quad+\left(-\nabla_{m}\left(\delta U\right)^{\ell}_{ij}-U_{ij}^{ \ \ k\ell}P_{mk}\right)\nabla^{m}\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)\] \[\quad+U_{ij}^{\ \ k}\,_{m}\nabla^{m}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B} \right)+\left(\frac{1}{2}\nabla_{m}U_{ij}^{\ \ k\ell}+\left(\delta U\right)^{k}_{ij}\,\delta^{\ell}_{m} \right)\nabla^{m}\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right). 
\tag{3.16}\] Then applying the formulas from Lemma 3.2 and using the Bianchi identity to write \(\nabla^{m}P_{mk}=\nabla_{k}J\), we get \[\Delta z_{ijAB}=\left(\nabla^{m}\left(\delta U\right)_{mij}\right) \left(X_{A}Y_{B}-Y_{A}X_{B}\right)+\left(-\Delta\left(\delta U\right)^{\ell}_ {ij}-\nabla^{m}U_{ij}^{\ \ k\ell}P_{mk}-U_{ij}^{\ \ k\ell}\nabla_{k}J\right)\] \[\quad\times\left(X_{A}Z_{\ell B}-Z_{\ell A}X_{B}\right)+\left( \nabla^{m}U_{ij}^{\ \ k}\right)\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)+\left(\frac{1}{2}\Delta U_{ ij}^{\ \ k\ell}+\nabla^{\ell}\left(\delta U\right)^{k}_{ij}\right)\] \[\quad\times\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right)+\left( \delta U\right)_{mij}\Big{\{}-\left(Y_{A}Z_{B}^{m}-Z_{A}^{m}Y_{B}\right)+ \left(P_{q}^{m}X_{A}Z_{A}^{q}-P_{q}^{m}Z_{A}^{q}X_{B}\right)\Big{\}}\] \[\quad+\left(-\nabla_{m}\left(\delta U\right)^{\ell}_{ij}-U_{ij}^{ \ \ k\ell}P_{mk}\right)\Big{\{}-\delta^{m}_{\ell}\left(X_{A}Y_{B}-Y_{A}X_{B} \right)-\left(Z_{\ell A}Z_{B}^{m}-Z_{A}^{m}Z_{\ell B}\right)\Big{\}}\] \[\quad+U_{ij}^{\ \ k}\Big{\{}P_{k}^{m}\left(X_{A}Y_{B}-Y_{A}X_{B} \right)+P_{q}^{m}\left(Z_{qA}Z_{kB}-Z_{qB}Z_{kA}\right)\Big{\}}\] \[\quad+\left(\frac{1}{2}\nabla_{m}U_{ij}^{\ \ k\ell}+\left(\delta U \right)^{k}_{ij}\,\delta^{\ell}_{m}\right)\Big{\{}-P_{k}^{m}\left(X_{A}Z_{\ell B }-Z_{\ell A}X_{B}\right)+P_{\ell}^{m}\left(X_{A}Z_{kB}-Z_{kA}X_{B}\right)\] \[\quad-\delta^{m}_{k}\left(Y_{A}Z_{\ell B}-Y_{B}Z_{\ell A}\right)+ \delta^{m}_{\ell}\left(Y_{A}Z_{kB}-Z_{kA}Y_{B}\right)\Big{\}}\] \[=2\left(\nabla^{m}\left(\delta U\right)_{mij}\right)\left(X_{A}Y_{B }-Y_{A}X_{B}\right)+\Big{\{}-\Delta\left(\delta U\right)^{\ell}_{ij}-2\nabla^{m}U _{ij}^{\ \ k\ell}P_{mk}-U_{ij}^{\ \ k\ell}\nabla_{k}J\] \[\quad+J\left(\delta U\right)^{\ell}_{ij}\Big{\}}\left(X_{A}Z_{\ell B }-Z_{\ell A}X_{B}\right)+\Big{\{}\frac{1}{2}\Delta U_{ij}^{\ \ k\ell}+2\nabla^{\ell}\left(\delta U\right)^{k}_{ij}+U_{ij}^{\ \ \ mk}P_{m}^{\ell}+U_{ij}^{\ \ \ell m}P_{m}^{k}\Big{\}}\] \[\quad\times\left(Z_{kA}Z_{\ell B}-Z_{kB}Z_{\ell A}\right). \tag{3.17}\] Since the very last term is skew-symmetric in \(k,\ell\), we can rewrite this as \[\Delta z_{ijAB}=2\left(\nabla^{m}\left(\delta U\right)_{mij} \right)\left(X_{A}Y_{B}-Y_{A}X_{B}\right)+\Big{\{}-\Delta\left(\delta U\right)^ {\ell}_{ij}-2\nabla^{m}U_{ij}^{\ \ k\ell}P_{mk}-U_{ij}^{\ \ k\ell}\nabla_{k}J\] \[\quad+J\left(\delta U\right)^{\ell}_{ij}\Big{\}}\left(X_{A}Z_{ \ell B}-Z_{\ell A}X_{B}\right)+\Big{\{}\frac{1}{2}\Delta U_{ij}^{\ \ k\ell}-\nabla^{k}\left(\delta U\right)^{\ell}_{ij}+\nabla^{\ell}\left(\delta U \right)^{k}_{ij}\] \[\quad-U_{ij}^{\ \ km}P_{m}^{\ell}+U_{ij}^{\ \ \ell m}P_{m}^{k}\Big{\}}\times\left(Z_{kA}Z_{\ell B }-Z_{kB}Z_{\ell A}\right). \tag{3.18}\] The proposition will follow, once we prove **Lemma 3.5**.: (3.19) \[\nabla^{m}\left(\delta U\right)_{mij}=\frac{1}{2}\left(W^{+}\right)_{pqi}^{\ \ \ k}U^{qp}_{\ \ kj}-\frac{1}{2}\left(W^{+}\right)_{pqj}^{\ \ \ k}U^{qp}_{\ \ ki}.\] _In particular, if \(\left(M^{4},g\right)\) is ASD, then_ \[\nabla^{m}\left(\delta U\right)_{mij}=0. 
\tag{3.20}\] Proof.: By definition of the divergence and the fact that \(U_{\ell mij}=-U_{m\ell ij}\), \[\nabla^{m}\left(\delta U\right)_{mij} =\nabla^{m}\nabla^{\ell}U_{\ell mij}\] \[=g^{mp}g^{\ell q}\nabla_{p}\nabla_{q}U_{\ell mij}\] \[=\frac{1}{2}g^{mp}g^{\ell q}\left(\nabla_{p}\nabla_{q}-\nabla_{q }\nabla_{p}\right)U_{\ell mij}\] \[=\frac{1}{2}g^{mp}g^{\ell q}\left(R_{pq\ell}^{\ \ \ k}U_{kmij}+R_{pqm}^{\ \ \ k}U_{\ell kij}+R_{pqi}^{\ \ \ k}U_{\ell mkkj}+R_{pqj}^{\ \ \ k}U_{\ell mkk}\right)\] \[=\frac{1}{2}\left(-R^{mk}U_{kmij}+R^{\ell k}U_{\ell kij}\right)+ \frac{1}{2}g^{mp}g^{\ell q}\left(R_{pqi}^{\ \ \ k}U_{\ell mkj}+R_{pqj}^{\ \ \ k}U_{\ell mkik}\right)\] \[=\frac{1}{2}g^{mp}g^{\ell q}\left(R_{pqi}^{\ \ \ k}U_{\ell mkj}+R_{pqj}^{\ \ \ k }U_{\ell miik}\right).\] Using the decomposition of the curvature tensor as in (2.2), it follows that \[\nabla^{m}\left(\delta U\right)_{mij} =\frac{1}{2}g^{mp}g^{\ell q}\left(W_{pqi}^{\ \ \ k}+g_{ip}P_{q}^{k}-\delta_{p}^{k}P_{iq}-g_{iq}P_{p}^{k}+\delta_{q}^{k}P_{ip} \right)U_{\ell mkj}\] \[\quad+\frac{1}{2}g^{mp}g^{\ell q}\left(W_{pqj}^{\ \ \ k}+g_{jp}P_{q}^{k}-\delta_{p}^{k}P_{jq}-g_{jq}P_{p}^{k}+\delta_{q}^{k}P_{jp} \right)U_{\ell mk}\] \[=\frac{1}{2}W_{pqi}^{\ \ \ k}U^{qp}_{\ \ kj}+\frac{1}{2}\left( \delta_{i}^{m}P^{k\ell}-g^{km}P_{i}^{\ell}-\delta_{i}^{\ell}P^{km}+g^{k\ell}P _{i}^{m}\right)U_{\ell mkj}\] \[\quad+\frac{1}{2}W_{pqj}^{\ \ \ k}U^{qp}_{\ \ ik}+\frac{1}{2}\left( \delta_{j}^{m}P^{k\ell}-g^{km}P_{j}^{\ell}-\delta_{j}^{\ell}P^{km}+g^{k\ell}P _{j}^{m}\right)U_{\ell mik}.\] Using the fact that all contractions of \(U\) vanish, the above simplifies to \[\nabla^{m}\left(\delta U\right)_{mij} =\frac{1}{2}W_{pqi}^{\ \ \ k}U^{qp}_{\ \ kj}+\frac{1}{2}W_{pqj}^{\ \ \ k}U^{qp}_{\ \ ik}+\frac{1}{2}\left(P^{k\ell}U_{\ell ikj}-P^{km}U_{ imkj}+P^{k\ell}U_{\ell jik}-P^{km}U_{jmik}\right)\] \[=\frac{1}{2}W_{pqi}^{\ \ \ k}U^{qp}_{\ \ kj}+\frac{1}{2}W_{pqj}^{\ \ \ k}U^{ qp}_{\ \ ik}\] \[=\frac{1}{2}W_{pqi}^{\ \ \ k}U^{qp}_{\ \ kj}-\frac{1}{2}W_{pqj}^{\ \ \ k}U^{ qp}_{\ \ ki}\] (note that all terms involving the Schouten tensor can be seen to cancel after re-indexing). Finally, since \(U\in\Gamma(\mathcal{W}^{+})\), (3.19) follows. **Theorem 3.6**.: _If \(\left(M^{4},g\right)\) is ASD and \(U\in\ker\mathcal{D}^{*}\), then_ \[\Delta\left(\delta U\right)_{\ell ij}=-2\nabla^{m}U_{ijk\ell}P_{m}^{k}-U_{ijk \ell}\nabla^{k}J+3J\left(\delta U\right)_{\ell ij}, \tag{3.21}\] \[\frac{1}{2}\Delta U_{ijk\ell}=\nabla_{k}\left(\delta U\right)_{\ell ij}-\nabla _{\ell}\left(\delta U\right)_{kij}+U_{ijkm}P_{\ell}^{m}-U_{ij\ell m}P_{k}^{m} +JU_{ijk\ell}. \tag{3.22}\] Proof.: If \(U\in\ker\mathcal{D}^{*}\) then \(z\in\Lambda_{+}^{2}(\mathcal{E})\) is harmonic. Therefore, (3.21) and (3.22) follow from Proposition 3.4 and Corollary 2.3. **Remark 3.7**.: Although we will not provide a proof, it is not difficult to show that (3.22) holds for any section \(U\in\Gamma\left(\mathcal{W}^{+}\right)\); i.e., the condition \(U\in\ker\mathcal{D}^{*}\) is not necessary. However, this is not the case for (3.21). ## 4. Proof of Theorem 1.5 Let \(Q_{g}\) denote the \(Q\)-curvature of \((M^{4},g)\): \[\begin{split} Q_{g}&=\frac{1}{12}\left(-\Delta_{g}R +R_{g}^{2}-3|Ric_{g}|^{2}\right)\\ &=-\frac{1}{2}\Delta_{g}J+J^{2}-|P|^{2}.\end{split} \tag{4.1}\] The total \(Q\)-curvature is a conformal invariant, and we can rewrite (1.10) as a condition on the total \(Q\)-curvature and the Yamabe invariant as follows. 
By the Chern-Gauss-Bonnet formula, \[\int Q_{g}\,dv_{g}=4\pi^{2}\chi(M^{4})-\frac{1}{8}\int|W_{g}|^{2}\,dv_{g}. \tag{4.2}\] Since \(g\) is ASD, (4.2) becomes \[\int Q_{g}\,dv_{g}=4\pi^{2}\chi(M^{4})-\frac{1}{8}\int|W_{g}^{-}|^{2}\,dv_{g}. \tag{4.3}\] By the Hirzebruch signature formula, \[48\pi^{2}\tau(M^{4})=\int\left(|W_{g}^{+}|^{2}-|W_{g}^{-}|^{2}\right)\,dv_{g}= -\int|W_{g}^{-}|^{2}\,dv_{g}. \tag{4.4}\] Combining this with (4.3) we see that \[\int Q_{g}\,dv_{g}=2\pi^{2}\left(2\chi(M^{4})+3\tau(M^{4})\right). \tag{4.5}\] Therefore, the assumption (1.10) is equivalent to \[\int Q_{g}\,dv_{g}\geq-\frac{1}{12}Y(M^{4},[g])^{2}. \tag{4.6}\] #### The proof of Theorem 1.5 Since our assumptions are conformally invariant, it suffices to prove the result when \(g\) is replaced by any metric in the same conformal class. The metric we will use is a solution of a variational problem related to the regularized determinant of an elliptic operator. The precise formulation is contained in Theorem 5.2 in the Appendix, while the following consequence will suffice for our purposes: **Theorem 4.1**.: _Let \((M^{4},g_{0})\) be a closed Riemannian four-manifold with_ \((i)\)__\(Y(M^{4},[g_{0}])>0\)_,_ \((ii)\)__\(\int_{M^{4}}Q_{g_{0}}\,dv_{g_{0}}\geq-\frac{1}{12}Y(M^{4},[g_{0}])^{2}\)_._ _Then there is a smooth, unit volume conformal metric \(g\in[g_{0}]\) with \(J_{g}=\operatorname{tr}P_{g}>0\) satisfying_ \[\Delta J_{g}\leq-|\mathring{P}_{g}|^{2}+\frac{15}{4}J_{g}^{2}, \tag{4.7}\] _where \(\mathring{P}_{g}=P-\frac{1}{4}Jg\) is the trace-free Schouten tensor._ The proof of this result is somewhat involved, and will also be given in the Appendix. In the following we will show how Theorem 1.5 follows from Theroem 4.1 and the Weitzenbock formulas of the preceding section. For the rest of the proof the metric \(g\) is assumed to satisfy the conclusions of Theorem 4.1, and we will usually suppress the subscript \(g\). Assume \(U\in\ker\mathcal{D}^{*}\). We will record two integral identities that follow from Theorem 3.6 and the inequality (4.7). First, by (3.22), \[\begin{split}\frac{1}{2}\Delta|U|^{2}&=\langle U, \Delta U\rangle+|\nabla U|^{2}\\ &=U^{ijk\ell}\Delta U_{ijk\ell}+|\nabla U|^{2}\\ &=U^{ijk\ell}\Big{\{}2\nabla_{k}\left(\delta U\right)_{\ell ij}-2 \nabla_{\ell}\left(\delta U\right)_{kij}+2U_{ijkm}P_{\ell}^{m}-2U_{ij\ell m }P_{k}^{m}+2JU_{ijk\ell}\Big{\}}+|\nabla U|^{2}\\ &=4U^{ijk\ell}\nabla_{k}\left(\delta U\right)_{\ell ij}+4U^{ijk \ell}\,U_{ijkm}P_{\ell}^{m}+2J|U|^{2}+|\nabla U|^{2}.\end{split} \tag{4.8}\] Recalling that \[U^{ijk\ell}U_{ijkm}=\frac{1}{4}|U|^{2}\delta_{m}^{\ell},\] (4.8) implies \[\frac{1}{2}\Delta|U|^{2}=4U^{ijk\ell}\nabla_{k}\left(\delta U\right)_{\ell ij }+3J|U|^{2}+|\nabla U|^{2}. \tag{4.9}\] If we multiply both sides by \(J\) and integrate over \(M^{4}\), then \[\frac{1}{2}\int_{M^{4}}J\Delta|U|^{2}\,dv=\int_{M^{4}}\left\{4JU^{ijk\ell} \nabla_{k}\left(\delta U\right)_{\ell ij}+3J^{2}|U|^{2}+J|\nabla U|^{2}\right\}dv. \tag{4.10}\] Integrating by parts on the left and using (4.7), we find \[\frac{1}{2}\int_{M^{4}}J\Delta|U|^{2}\,dv=\frac{1}{2}\int_{M^{4}}\left(\Delta J \right)|U|^{2}\,dv\leq\int_{M^{4}}\left(-\frac{1}{2}|\mathring{P}|^{2}+\frac{ 15}{8}J^{2}\right)|U|^{2}\,dv. \tag{4.11}\] Combining (4.9) and (4.10), \[0\geq\int_{M^{4}}\left\{4JU^{ijk\ell}\nabla_{k}\left(\delta U\right)_{\ell ij }+\frac{1}{2}|\mathring{P}|^{2}|U|^{2}+\frac{9}{8}J^{2}|U|^{2}+J|\nabla U|^{2 }\right\}dv. 
\tag{4.12}\] We now use (3.21): \[\begin{split}\frac{1}{2}\Delta|\delta U|^{2}&=| \nabla\delta U|^{2}+\langle\delta U,\Delta\left(\delta U\right)\rangle\\ &=|\nabla\delta U|^{2}+\left(\delta U\right)^{\ell ij}\Delta \left(\delta U\right)_{\ell ij}\\ &=|\nabla\delta U|^{2}+\left(\delta U\right)^{\ell ij}\Big{\{}-2 \nabla^{m}U_{ijk\ell}P_{m}^{k}-U_{ijk\ell}\nabla^{k}J+3J\left(\delta U \right)_{\ell ij}\Big{\}}\\ &=|\nabla\delta U|^{2}-2\left(\delta U\right)^{\ell ij}\nabla^{ m}U_{ijk\ell}P_{m}^{k}-\left(\delta U\right)^{\ell ij}U_{ijk\ell}\nabla^{k}J+3J| \delta U|^{2}.\end{split} \tag{4.13}\] Integrating this over \(M^{4}\) gives \[0=\int_{M^{4}}\left\{|\nabla\delta U|^{2}-2\left(\delta U\right)^{\ell ij} \nabla^{m}U_{ijk\ell}P_{m}^{k}-\left(\delta U\right)^{\ell ij}U_{ijk\ell} \nabla^{k}J+3J|\delta U|^{2}\right\}dv. \tag{4.14}\] If we integrate by parts in the second term and use the contracted second Bianchi identity \(\nabla^{m}P_{m}^{k}=\nabla^{k}J\), then \[\int_{M^{4}}-2\left(\delta U\right)^{\ell ij}\nabla^{m}U_{ijk\ell}P _{m}^{k}\,dv_{g} =\int_{M^{4}}\left\{2\nabla^{m}\left(\delta U\right)^{\ell ij}U_{ ijk\ell}P_{m}^{k}+2\left(\delta U\right)^{\ell ij}U_{ijk\ell}\nabla^{m}P_{m}^{k} \right\}dv_{g}\] \[=\int_{M^{4}}\left\{2\nabla^{m}\left(\delta U\right)^{\ell ij}U_{ ijk\ell}P_{m}^{k}+2\left(\delta U\right)^{\ell ij}U_{ijk\ell}\nabla^{k}J\right\}dv_{g}. \tag{4.15}\] Substituting this result back into (4.14) gives \[0=\int_{M^{4}}\left\{|\nabla\delta U|^{2}+2\nabla^{m}\left(\delta U\right)^{ \ell ij}U_{ijk\ell}P_{m}^{k}+U_{ijk\ell}\left(\delta U\right)^{\ell ij}\nabla ^{k}J+3J|\delta U|^{2}\right\}dv. \tag{4.16}\] Next, integrate by parts in the second-to-last term in (4.16): \[\int_{M^{4}}U_{ijk\ell}\left(\delta U\right)^{\ell ij}\nabla^{k} J\,dv =\int_{M^{4}}\left\{-U_{ijk\ell}\nabla^{k}\left(\delta U\right)^ {\ell ij}J-\nabla^{k}U_{ijk\ell}\left(\delta U\right)^{\ell ij}J\right\}dv\] \[=\int_{M^{4}}\left\{-U_{ijk\ell}\nabla^{k}\left(\delta U\right)^ {\ell ij}J-J|\delta U|^{2}\right\}dv. \tag{4.17}\] Substituting this back into (4.16) gives \[0=\int_{M^{4}}\left\{|\nabla\delta U|^{2}+2\nabla^{m}\left(\delta U\right)^{ \ell ij}U_{ijk\ell}P_{m}^{k}-JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{ \ell ij}+2J|\delta U|^{2}\right\}dv. \tag{4.18}\] Finally, we rewrite the curvature terms above in terms of \(\mathring{P}\) and \(J\): \[0=\int_{M^{4}}\left\{|\nabla\delta U|^{2}+2\nabla^{m}\left(\delta U\right)^{ \ell ij}U_{ijk\ell}\mathring{P}_{mk}-\frac{1}{2}JU_{ijk\ell}\nabla^{k}\left( \delta U\right)^{\ell ij}+2J|\delta U|^{2}\right\}dv. \tag{4.19}\] Multiplying by two and rearranging terms, we find \[\int_{M^{4}}JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{\ell ij}\,dv=\int_{M^ {4}}\left\{2|\nabla\delta U|^{2}+4\nabla^{m}\left(\delta U\right)^{\ell ij}U_ {ijk\ell}\mathring{P}_{mk}+4J|\delta U|^{2}\right\}dv. \tag{4.20}\] We want to combine (4.20) with (4.12). 
To do so, we first use (4.20) to write \[\int_{M^{4}} 4JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{\ell ij}\,dv=3\int_ {M^{4}}JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{\ell ij}\,dv+\int_{M^{4}} JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{\ell ij}\,dv\] \[=\int_{M^{4}}3JU_{ijk\ell}\nabla^{k}\left(\delta U\right)^{\ell ij }\,dv+\int_{M^{4}}\left\{2|\nabla\delta U|^{2}+4\nabla^{m}\left(\delta U \right)^{\ell ij}U_{ijk\ell}\mathring{P}_{m}^{k}+4J|\delta U|^{2}\right\}dv\] \[=\int_{M^{4}}\left\{2|\nabla\delta U|^{2}+4\nabla^{m}\left( \delta U\right)^{\ell ij}U_{ijk\ell}\mathring{P}_{m}^{k}+3JU_{ijk\ell}\nabla ^{k}\left(\delta U\right)^{\ell ij}+4J|\delta U|^{2}\right\}dv \tag{4.21}\] Substituting this into (4.12), we have \[0\geq\int_{M^{4}}\left\{2|\nabla\delta U|^{2}+4\nabla^{m}\left( \delta U\right)^{\ell ij}U_{ijk\ell}\mathring{P}_{m}^{k}+3JU_{ijk\ell}\nabla^{ k}\left(\delta U\right)^{\ell ij}+4J|\delta U|^{2}\right.\] \[\qquad\qquad+\frac{1}{2}|\mathring{P}|^{2}|U|^{2}+\frac{9}{8}J^{2 }|U|^{2}+J|\nabla U|^{2}\right\}dv\] \[=\int_{M^{4}}\left\{2|\nabla\delta U|^{2}+4T_{m}^{k}\nabla^{m} \left(\delta U\right)^{\ell ij}U_{ijk\ell}+4J|\delta U|^{2}+\frac{1}{2}| \mathring{P}|^{2}|U|^{2}+\frac{9}{8}J^{2}|U|^{2}+J|\nabla U|^{2}\right\}dv, \tag{4.22}\] where \[T_{km}=\mathring{P}_{mk}+\frac{3}{4}Jg_{km}. \tag{4.23}\] If we define the tensor \[V_{m\ell ij}=T_{m}^{k}U_{ijk\ell}, \tag{4.24}\] then the term involving \(T\) in (4.22) can be estimated (via Cauchy-Schwarz) by \[\left|4T_{m}^{k}\nabla^{m}\left(\delta U\right)^{\ell ij}U_{ijk\ell}\right| =4\left|\nabla^{m}\left(\delta U\right)^{\ell ij}V_{m\ell ij}\right|\] \[\leq 4\left|\nabla\delta U\right|\left|V\right|.\] By the arithmetic-geometric mean inequality, \[\left|4T_{km}\nabla_{m}\left(\delta U\right)_{\ell ij}U_{ijk\ell}\right|\leq 2 \left|\nabla\delta U\right|^{2}+2\left|V\right|^{2}. \tag{4.25}\] By the definition of \(V\), \[\left|V\right|^{2} =V^{m\ell ij}V_{m\ell ij}\] \[=T_{p}^{m}U^{ijp\ell}T_{m}^{k}U_{ijk\ell}\] \[=T_{p}^{m}T_{m}^{k}U^{ij\ell p}U_{ij\ell k}\] \[=T_{p}^{m}T_{m}^{k}\left(\frac{1}{4}|U|^{2}\delta_{k}^{p}\right)\] \[=\frac{1}{4}|T|^{2}|U|^{2}\] \[=\frac{1}{4}\left(|\mathring{P}|^{2}+\frac{9}{4}J^{2}\right)|U|^ {2}.\] Consequently, (4.25) implies \[\left|4T_{km}\nabla_{m}\left(\delta U\right)_{\ell ij}U_{ijk\ell}\right|\leq 2 \left|\nabla\delta U\right|^{2}+\frac{1}{2}|\mathring{P}|^{2}|U|^{2}+\frac{9}{ 8}J^{2}|U|^{2},\] hence \[4T_{km}\nabla_{m}\left(\delta U\right)_{\ell ij}U_{ijk\ell}\geq-2\left|\nabla \delta U\right|^{2}-\frac{1}{2}|\mathring{P}|^{2}|U|^{2}-\frac{9}{8}J^{2}|U|^ {2}. \tag{4.26}\] Substituting this into (4.22), we conclude \[0\geq\int_{M^{4}}\left\{4J|\delta U|^{2}+J|\nabla U|^{2}\right\}dv. \tag{4.27}\] Since \(J>0\), it follows that \(\nabla U=0\). However, it is then immediate from (4.12) that \(U\equiv 0\). ## 5. Appendix: The proof of Theorem 4.1 In this Appendix we give the proof of Theorem 4.1. The material in this section is an extension of the existence work of Chang-Yang [10] for critical points of the regularized determinant of conformally covariant operators. We begin with a brief overview of their work, omitting the motivation from spectral geometry and limiting ourselves to the underlying variational problem. Let \((M^{4},g)\) be a closed, four-dimensional Riemannian manifold, and \(W^{2,2}(M)\) the Sobolev space of functions whose weak derivatives up to order two are in \(L^{2}\). 
Consider the following functionals on \(W^{2,2}(M)\): \[I^{\pm}[w]=4\int w|W^{\pm}_{g}|^{2}\,dv_{g}-\big{(}\int|W^{\pm}_{g}|^{2}\,dv_{g}\big{)}\log\fint e^{4w}\,dv_{g}, \tag{5.1}\] \[II[w]=\int wP_{g}w\,dv_{g}+4\int Q_{g}w\,dv_{g}-\big{(}\int Q_{g}\,dv_{g}\big{)}\log\fint e^{4w}\,dv_{g}, \tag{5.2}\] \[III[w]=12\int(\Delta w+|\nabla w|^{2})^{2}\,dv_{g}-4\int(w\Delta R_{g}+R_{g}|\nabla w|^{2})\,dv_{g}, \tag{5.3}\] where \(P_{g}\) denotes the Paneitz operator, \(Q_{g}\) the \(Q\)-curvature, and \(\fint\) denotes the normalized integral (i.e., divided by the volume of \(g\)). Let \(\gamma_{1}^{\pm},\gamma_{2},\gamma_{3}\) be constants and define \(F:W^{2,2}(M)\to\mathbb{R}\) by \[F[w]=\gamma_{1}^{+}I^{+}[w]+\gamma_{1}^{-}I^{-}[w]+\gamma_{2}II[w]+\gamma_{3}III[w]. \tag{5.4}\] We also define the associated conformal invariant: \[\kappa_{g}=-\gamma_{1}^{+}\int|W^{+}|^{2}\,dv_{g}-\gamma_{1}^{-}\int|W^{-}|^{2}\,dv_{g}-\gamma_{2}\int Q_{g}\,dv_{g}. \tag{5.5}\] As explained in [10], critical points of \(F\) determine a conformal metric satisfying a fourth order curvature condition. More precisely, if we define the \(U\)_-curvature_ of \(g\) by \[U=U(g)=\gamma_{1}^{+}|W^{+}_{g}|^{2}+\gamma_{1}^{-}|W^{-}_{g}|^{2}+\gamma_{2}Q_{g}-\gamma_{3}\Delta_{g}R_{g}, \tag{5.6}\] then \(w\) is a smooth critical point of \(F\) if and only if the conformal metric \(g_{F}=e^{2w}g\) satisfies \[U(g_{F})\equiv\mu \tag{5.7}\] for some constant \(\mu\). A general existence result for critical points of \(F\) was proved in [10]: **Theorem 5.1**.: (See [10], Theorem 1.1; also [20], Corollary 1.1) _Assume: (i) \(\gamma_{2}<0\) and \(\gamma_{3}<0\); (ii) \(\kappa_{g}<(-\gamma_{2})8\pi^{2}\). Then \(\sup_{w\in W^{2,2}(M)}F[w]\) is attained by some \(w\in W^{2,2}\)._ Regularity of extremals was proved by the second author in joint work with Chang-Yang [9]; later Uhlenbeck-Viaclovsky proved a more general regularity result for arbitrary critical points of \(F\) (see [35]). 
To prove Theorem 4.1 we need to introduce another functional: \[IV[w]=\frac{\int\big{(}R_{g}+6|\nabla w|^{2}\big{)}\,e^{2w}\,dv_{g}}{\big{(} \int e^{4w}\,dv_{g}\big{)}^{1/2}}. \tag{5.8}\] This is just the Yamabe functional, written in a slightly non-standard form: if \(\widetilde{g}=e^{2w}g\), then \[IV[w]=\frac{\int R_{\widetilde{g}}\,dv_{\widetilde{g}}}{\text{Vol}(\widehat{g} )^{1/2}}. \tag{5.9}\] Given a constant \(\gamma_{4}\), we define \(\Phi:W^{2,2}(M)\to\mathbb{R}\) by \[\Phi[w]=\gamma_{1}^{+}I^{+}[w]+\gamma_{1}^{-}I^{-}[w]+\gamma_{2}II[w]+\gamma_{3 }III[w]+\gamma_{4}IV[w]. \tag{5.10}\] A trivial modification of the existence result of Chang-Yang and the regularity results of Chang-Gursky-Yang and Uhenbeck-Viaclovsky gives the following: **Theorem 5.2**.: _Assume: \((i)\)\(\gamma_{2}<0,\gamma_{3}<0,\gamma_{4}\leq 0\). \((ii)\)\(\kappa_{g}<(-\gamma_{2})8\pi^{2}\)._ _Then \(\sup_{w\in W^{2,2}(M)}\Phi[w]\) is attained by some \(w\in W^{2,2}(M)\). Moreover, \(w\in C^{\infty}\), and the conformal metric \(\widetilde{g}=e^{2w}g\) satisfies_ \[\gamma_{1}^{+}|W_{\widetilde{g}}^{+}|^{2}+\gamma_{1}^{-}|W_{\widetilde{g}}^{- }|^{2}+\gamma_{2}Q_{\widetilde{g}}-\gamma_{3}\Delta_{\widetilde{g}}R_{ \widetilde{g}}+\frac{1}{2}\gamma_{4}R_{\widetilde{g}}=\mu, \tag{5.11}\] _for some constant \(\mu\)._ Proof.: Note that the functional \(\Phi\) is scale-invariant: \[\Phi[w+c]=\Phi[w]\] for any constant \(c\). Therefore, we may normalize a maximizing sequence \(\{w_{k}\}\) for \(\Phi\) so that \[\int e^{4w_{k}}\,dv_{g}=1.\] Assuming \(R_{g}\geq-C\), it follows from the Schwartz inequality that \[IV[w] =\int\left(R_{g}+6|\nabla w_{k}|^{2}\right)e^{2w_{k}}\,dv_{g}\] \[\geq\int R_{g}e^{2w_{k}}\,dv_{g}\] \[\geq-C\left(\int e^{4w_{k}}\,dv_{g}\right)^{1/2}\] \[\geq-C.\] Consequently, if \(\gamma_{4}\leq 0\) then \(\gamma_{4}IV[w]\) is bounded above. By (5.9), it is also bounded below (by the Yamabe invariant). Therefore, the addition of this term has no effect on the estimates in the existence proof of Chang-Yang. We are now ready to prove Theorem 4.1: _The proof of Theorem 4.1._ We first remark that if \((M^{4},g_{0})\) is conformally equivalent to the round sphere (suitably normalized), then \(g=g_{c}\) satisfies the conclusions of the Theorem. Therefore, we may assume \((M^{4},g_{0})\) is not conformally the round sphere. Taking \[\gamma_{1}^{\pm} =0,\] \[\gamma_{2} =-6,\] \[\gamma_{3} =-\frac{1}{2},\] \[\gamma_{4} =-2Y(M^{4},[g_{0}]),\] then \[\kappa_{g}=6\int Q_{g_{0}}\,dv_{g_{0}}.\] To use Theorem 5.2 we need to verify assumption \((ii)\); i.e., \[\int Q_{g_{0}}\,dv_{g_{0}}<8\pi^{2}. \tag{5.12}\] By Theorem B of [21], (5.12) holds as long as \((M^{4},g_{0})\) is not conformally equivalent to the round sphere. Therefore, by Theorem 5.2 there is a smooth conformal metric \(g=e^{2w}g_{0}\) (which we can normalize to have unit volume) satisfying \[-6Q_{g}+\frac{1}{2}\Delta_{g}R_{g}-Y(M^{4},[g_{0}])R_{g}=\mu. \tag{5.13}\] For the rest of the proof we will omit the subscript \(g\). If \(E=Ric-\frac{1}{4}Rg\) denotes the trace-free Ricci tensor of \(g\), then we may use the definition of the \(Q\)-curvature to rewrite (5.13) as \[\Delta R=\frac{1}{8}R^{2}-\frac{3}{2}|E|^{2}+Y(M^{4},[g_{0}])\,R+\mu. \tag{5.14}\] By the arithmetic-geometric mean inequality, \[Y(M^{4},[g_{0}])\,R\leqslant\frac{1}{2}R^{2}+\frac{1}{2}Y(M^{4},[g_{0}])^{2}.\] Therefore, \[\Delta R\leqslant\frac{5}{8}R^{2}-\frac{3}{2}|E|^{2}+\frac{1}{2}Y(M^{4},[g_{0 }])^{2}+\mu. 
\tag{5.15}\] **Claim 5.3**.: (5.16) \[\mu+\frac{1}{2}Y(M^{4},[g_{0}])^{2}\leqslant 0.\] For now let us assume the claim and see how the theorem follows. From (5.16) and (5.15) it follows that \[\Delta R\leqslant\frac{5}{8}R^{2}-\frac{3}{2}|E|^{2}. \tag{5.17}\] Using the fact that \(J=\frac{1}{6}R\) and \(\check{P}=\frac{1}{2}E\), this inequality can also be written \[\Delta J\leqslant-|\check{P}|^{2}+\frac{15}{4}J^{2},\] hence (4.7) holds. To see that \(R>0\), we use (5.14) and the fact that \(\mu<0\) to write \[\Delta R \leqslant\frac{1}{8}R^{2}+Y(M^{4},[g_{0}])\,R\] \[\leqslant\frac{1}{6}R^{2}+Y(M^{4},[g_{0}])\,R. \tag{5.18}\] Let \(\phi>0\) denote the eigenfunction associated to the first eigenvalue \(\lambda_{1}(L)\) of the conformal laplacian: \[L\phi:=\left(-6\Delta+R\right)\phi=\lambda_{1}(L). \tag{5.19}\] Since \(Y(M^{4},[g])>0\), it follows that \(\lambda_{1}(L)>0\). An easy calculation using (5.18) and (5.19) gives \[\Delta\frac{R}{\phi}\leqslant-2\langle\nabla\left(\frac{R}{\phi}\right), \frac{\nabla\phi}{\phi}\rangle+\left(Y\left(M^{4},[g_{0}]\right)+\frac{1}{6} \lambda_{1}(L)\right)\frac{R}{\phi}.\] It follows from the strong maximum principle that \(R/\phi>0\) on \(M\), hence \(R>0\). This completes the proof of the theorem, once we prove Claim 5.3. Proof of Claim 5.3.: If integrate (5.13) over \(M\) and use the fact that \(g\) has unit volume, we obtain \[\mu=-6\int Q_{g}\,dv_{g}-Y(M^{4},[g_{0}])\int R_{g}\,dv_{g}. \tag{5.20}\] By definition of the Yamabe invariant (again using the fact that \(g\) has unit volume) and the fact that \(Y(M^{4},[g_{0}])>0\), \[Y(M^{4},[g_{0}])\int R_{g}\,dv_{g}\geq Y(M^{4},[g_{0}])^{2}.\] Therefore, by (5.20), \[\mu\leqslant-6\int Q_{g}\,dv_{g}-Y(M^{4},[g_{0}])^{2}. \tag{5.21}\] Since the total \(Q\)-curvature is a conformal invariant, using assumption \((ii)\) of the theorem we see that \[\mu \leqslant-6\int Q_{g}\,dv_{g}-Y(M^{4},[g_{0}])^{2}\] \[=-6\int Q_{g_{0}}\,dv_{g_{0}}-Y(M^{4},[g_{0}])^{2}\] \[\leqslant\frac{1}{2}Y(M^{4},[g_{0}])^{2}-Y(M^{4},[g_{0}])^{2}\] \[=-\frac{1}{2}Y(M^{4},[g_{0}])^{2},\] which proves (5.16).
2304.12162
Preconditioner Design via the Bregman Divergence
We study a preconditioner for a Hermitian positive definite linear system, which is obtained as the solution of a matrix nearness problem based on the Bregman log determinant divergence. The preconditioner is of the form of a Hermitian positive definite matrix plus a low-rank matrix. For this choice of structure, the generalised eigenvalues of the preconditioned matrix are easily calculated, and we show under which conditions the preconditioner minimises the $\ell_2$ condition number of the preconditioned matrix. We develop practical numerical approximations of the preconditioner based on the randomised singular value decomposition (SVD) and the Nystr\"om approximation and provide corresponding approximation results. Furthermore, we prove that the Nystr\"om approximation is in fact also a matrix approximation in a range-restricted Bregman divergence and establish several connections between this divergence and matrix nearness problems in different measures. Numerical examples are provided to support the theoretical results.
Andreas Bock, Martin S. Andersen
2023-04-24T15:14:57Z
http://arxiv.org/abs/2304.12162v7
# Preconditioner Design via the Bregman Divergence ###### Abstract We study a preconditioner for a Hermitian positive definite linear system, which is obtained as the solution of a matrix nearness problem based on the Bregman log determinant divergence. The preconditioner is of the form of a Hermitian positive definite matrix plus a low-rank matrix. For this choice of structure, the generalised eigenvalues of the preconditioned system are easily calculated, and we show that the preconditioner is optimal in the sense that it minimises the \(\ell_{2}\) condition number of the preconditioned matrix. We develop practical numerical approximations of the preconditioner based on the randomised singular value decomposition (SVD) and the Nystrom approximation and provide corresponding approximation results. Furthermore, we prove that the Nystrom approximation is in fact also a matrix approximation in a range-restricted Bregman divergence and establish several connections between this divergence and matrix nearness problems in different measures. Numerical examples are provided to support the theoretical results. ## 1 Introduction We study preconditioning of a Hermitian matrix \(S=A+B\in\mathbb{C}^{n\times n}\), where \(A=QQ^{*}\in\mathbb{C}^{n\times n}\) is positive definite and \(B\in\mathbb{C}^{n\times n}\) is Hermitian positive semidefinite. The factor \(Q\in\mathbb{C}^{n\times n}\) does not need to be a Cholesky factor and can be a symmetric square root, for instance. Finding \(x\in\mathbb{C}^{n}\) such that \[Sx=b, \tag{1}\] encapsulates regularised least squares, Gaussian process regression [19] and is a ubiquitous problem in scientific computing [25, 1]. It also appears in Schur complements of saddle-point formulations of variational data assimilation problems [8, 7]. It is often the case in large-scale numerical linear algebra that interaction with a matrix is only feasible through its action on vectors. Iterative methods such as the conjugate gradient method are therefore of interest, for which preconditioning is often a necessity for an efficient and accurate solution, leading to the so-called _preconditioned_ conjugate gradient method (PCG). In this paper, we investigate preconditioners for (1) based on low-rank approximations of the positive semidefinite term of \(S\). We can write \(S\) as \[S=Q(I+G)Q^{*},\] where \(G=Q^{-1}BQ^{-*}\). Based on this decomposition we study the two following preconditioners \[\tilde{S}_{r} =A+B_{r}, \tag{2a}\] \[\widehat{S}_{r} =Q(I+G_{r})Q^{*}, \tag{2b}\] where \(B_{r}\) and \(G_{r}\) are both obtained as truncated SVDs of \(B\) and \(G\), respectively, where \(r<\operatorname{rank}B\). While (2a) may seem natural, we show that our proposed preconditioner (2b) is always an improvement and has many useful properties from a practical and theoretical point of view. We also demonstrate that it appears naturally as the minimiser of a matrix nearness problem in the Bregman divergence. This is intuitive since it is desirable to seek a preconditioner that is, in some sense, close to the matrix \(S\). Our main contribution is the development of a general framework for the design of preconditioners for (1) based on the Bregman log-determinant divergence. First, Section 2 examines the two preconditioners in (2) more closely based on straightforward eigenvalue analysis and presents a simple example to develop intuition. We then introduce the Bregman divergence framework in Section 3. 
We prove that our proposed preconditioner (2b) minimises the Bregman divergence to the original matrix \(S\) in Theorem 1, and demonstrate how it can improve the convergence of PCG in Section 3.3. In Section 4, we make a connection between different norm minimisation problems and the Bregman divergence framework. We also derive the Nystrom approximation in several different ways, one of which is as a minimiser of a range-restricted Bregman divergence. Next, Section 5 explores practical numerical methods based on randomised linear algebra for the approximations of our preconditioner in the big data regime where the truncated SVD becomes computationally intractable. Section 6 supports our theoretical findings with numerical experiments for equation (1) using various preconditioners. Section 7 contains a summary. ### Related Work Preconditioning and matrix approximation are closely linked since a preconditioner \(\mathcal{P}\) is typically chosen such that, in some sense, \(\mathcal{P}\approx S\), subject to the requirement that \(\mathcal{P}\) respects some computational budget in terms of storage, action on vectors, and its factorisation. In this paper, we establish a natural connection between matrix nearness problems in the Bregman divergence and preconditioning. The use of divergences in linear algebra not novel. [3] proposed the use of the Bregman divergences for matrix approximation problems motivated by their connection to exponential families of distributions. Also relevant to our work is [15] where the authors investigate extensions of divergences to the low-rank matrices in the context of kernel learning. We build on these ideas in Section 4 and show the connection between the Bregman divergence and the Nystrom approximation. The latter is becoming increasingly popular in numerical linear algebra, in particular in the context of randomised matrix approximation methods. Many such methods are based on _sketching_[24], a technique whereby a matrix \(A\in\mathbb{C}^{n\times n}\) is multiplied by some _sketching matrix_\(\Omega\in\mathbb{R}^{n\times r}\), \(r<n\) producing a compressed matrix \(A\Omega\) which is used to compute approximations of \(A\). As mentioned, randomised methods select \(\Omega\) as a random matrix providing the setting for probabilistic bounds on the accuracy of the approximation, see [17, 12]. Methods such as the randomised SVD and the Nystrom approximation offer practical alternatives to the truncated SVD with both affordable computational cost and certain theoretical guarantees. Such approximations have long been popular in the kernel-based learning community [23, 4]. Preconditioning is becoming more relevant for the data science and machine learning communities with the increasing demand for handling large-scale problems. Recently, the work of [6] introduced a Nystrom-based preconditioner for PCG for matrices \(\mu I+A\) for some scalar \(\mu>0\), where \(A\) is symmetric positive semidefinite. This strategy has also been applied to the _alternating method of multipliers_[26], and sketching more generally to a stochastic quasi-Newton method in [5]. Applications such as large-scale Gaussian process regression can also benefit from the development of the preconditioners introduced in this work since gradient-based parameter optimisation requires expensive linear solves for optimising kernel hyperparameters [22]. ### Notation & Preliminaries **Definition 1**.: _Let \(\mathbb{H}^{n}\) denote the space of Hermitian \(n\times n\) matrices. 
\(\mathbb{H}^{n}_{+}\subset\mathbb{H}^{n}\) denotes the cone of positive semidefinite matrices and \(\mathbb{H}^{n}_{++}=\operatorname{int}\mathbb{H}^{n}_{+}\) the positive definite cone._ **Definition 2**.: _Let \(\lambda(A)=\{\lambda_{1}(A),\ldots,\lambda_{n}(A)\}\), with \(\lambda_{1}(A)\geq\ldots\geq\lambda_{n}(A)\), denote the ordered eigenvalues of a matrix \(A\in\mathbb{C}^{n\times n}\). Further, let \(\sigma(A)=\{\sigma_{1}(A),\ldots,\sigma_{n}(A)\}\) denote the similarly ordered singular values of a matrix \(A\in\mathbb{C}^{n\times n}\). For nonsingular normal matrices \(A\) we denote the \(\ell_{2}\) condition number by \(\kappa_{2}(A)=\frac{\sigma_{1}(A)}{\sigma_{n}(A)}\)._ We denote by \(A:B:=\operatorname{trace}\big{(}A^{*}B\big{)}\) the trace inner product between two matrices \(A\) and \(B\). \(\|\cdot\|_{F}\) denotes the Frobenius norm and \(\|\cdot\|_{2}\) the \(\ell_{2}\) norm. \(A^{+}\) denotes the Moore-Penrose inverse of \(A\). \(A_{r}\) will in general be used to denote a rank \(r\) approximation of \(A\) obtained via a truncated SVD. When \(A\) is positive definite we define the induced norm \(\|\cdot\|_{A}\) by the relation \[\|x\|_{A}^{2}=x^{*}Ax.\] Convex analysis is a natural tool for studying the Bregman divergence, so we recall some elementary definitions; see [20] for more details. We restrict our attention to spaces of finite dimension. The _relative interior_, \(\operatorname{ri}C\), of a convex set \(C\) is the subset of \(C\) which can be considered as an affine subset of \(C\), i.e., \[\operatorname{ri}C=\{x\in C\ |\ \exists\ \text{neighbourhood}\ V\text{ of }x\text{ such that }V\cap\operatorname{aff}C\subseteq C\},\] where \(\operatorname{aff}C\) is the affine hull of \(C\). Let \(\mathcal{X}\) be some finite-dimensional real inner product space. The _effective domain_ of a function \(\phi:\mathcal{X}\to\mathbb{R}\cup\{-\infty,+\infty\}\) is defined by \[\operatorname{dom}\phi=\{x\in\mathcal{X}\,|\,\phi(x)<\infty\,\}.\] A convex function \(\phi\) is _proper_ if it never attains the value \(-\infty\) and there exists a point \(x\in\mathcal{X}\) for which \(\phi(x)\) is finite. ## 2 Spectral Analysis In this section, we develop some intuition about the two preconditioners \(\tilde{S}_{r}\) and \(\widehat{S}_{r}\) defined in (2) by straightforward analysis of the eigenvalues of the preconditioned matrices \(\tilde{S}_{r}^{-1}S\) and \(\widehat{S}_{r}^{-1}S\). Their respective inverses are given by \[\tilde{S}_{r}^{-1} =Q^{-*}(I+Q^{-1}B_{r}Q^{-*})^{-1}Q^{-1}\] \[\widehat{S}_{r}^{-1} =Q^{-*}(I+G_{r})^{-1}Q^{-1}.\] **Lemma 1**.: _The eigenvalues of \(\tilde{S}_{r}^{-1}S\) and \(\widehat{S}_{r}^{-1}S\) satisfy the following bounds for \(i=1,\ldots,n\):_ \[\lambda_{i}(\tilde{S}_{r}^{-1}S) \in\left[1,1+\lambda_{1}(G)\right],\] \[\lambda_{i}(\widehat{S}_{r}^{-1}S) \in\left[1,1+\lambda_{r+1}(G)\right].\] Proof.: We obtain the result by observing the following bounds for the Rayleigh quotients below, where \(x\in\mathbb{C}^{n}\) is not identically zero: \[\frac{x^{*}(I+G)x}{x^{*}(I+Q^{-1}B_{r}Q^{-*})x} \in\left[1,1+\lambda_{1}(G)\right], \tag{3a}\] \[\frac{x^{*}(I+G)x}{x^{*}(I+G_{r})x} \in\left[1,1+\lambda_{r+1}(G)\right]. \tag{3b}\] The bound (3a) is of course pessimistic, but it is in general difficult to infer a tighter upper bound here since \(G\) and \(Q^{-1}B_{r}Q^{-*}\) are not a priori simultaneously diagonalisable. Later, in Theorem 2, we show that \(\widehat{S}_{r}\) minimises the condition number of the preconditioned matrix. 
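As a quick numerical illustration of Lemma 1 (a sketch with an arbitrary small example of our own, not one from the paper), the extreme generalised eigenvalues of the two preconditioned systems can be compared with the interval endpoints \(1+\lambda_{1}(G)\) and \(1+\lambda_{r+1}(G)\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 30, 12, 4                                   # illustrative sizes
q = rng.uniform(0.5, 2.0, n)
Q, A = np.diag(q), np.diag(q ** 2)                    # A = Q Q^T, diagonal for simplicity
C = rng.standard_normal((n, m))
B = C @ C.T
S = A + B
G = B / np.outer(q, q)                                # Q^{-1} B Q^{-T} for diagonal Q

def trunc(M, r):
    w, V = np.linalg.eigh(M)
    return (V[:, -r:] * w[-r:]) @ V[:, -r:].T         # rank-r truncation (largest eigenvalues)

S_tilde = A + trunc(B, r)
S_hat = Q @ (np.eye(n) + trunc(G, r)) @ Q
lam_G = np.sort(np.linalg.eigvalsh(G))[::-1]

for P, name in [(S_tilde, "tilde"), (S_hat, "hat")]:
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(P, S)).real)
    print(name, lam[0], lam[-1])
print("interval endpoints:", 1 + lam_G[0], 1 + lam_G[r])   # cf. Lemma 1
```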
We defer a more detailed analysis to Section 3, but this result motivates the study of (2). Next we present a simple example illustrating the difference between \(\widehat{S}_{r}\) and \(\tilde{S}_{r}\). Let \(A\) and \(B\) be diagonal \(6\times 6\) matrices: \[A=\operatorname{diag}(1.1,1.05,0.375,0.05,0.05,0.05),\qquad B=\operatorname{diag}(1,0.5,0.25,0.1,0,0). \tag{4}\] This results in \[G=\operatorname{diag}(\mathbf{0.9090},0.4762,0.6667,\mathbf{2},0,0), \tag{5}\] where, for ease of exposition, we truncate after the fourth decimal. We observe \(\operatorname{rank}B=4\), and take \(r=2\) in the low-rank approximations of the positive semidefinite term. A truncated SVD will pick out the indices corresponding to the \(r\) largest numbers (in bold). On the other hand, we find that \[Q^{-1}B_{r}Q^{-*}=\operatorname{diag}(\mathbf{0.9090},\mathbf{0.4762},0,0,0,0).\] The resulting generalised eigenvalues are shown in Figure 1. Recall that \[\tilde{S}_{r}^{-1}S =Q^{-*}(I+Q^{-1}B_{r}Q^{-*})^{-1}(I+G)Q^{*},\] \[\widehat{S}_{r}^{-1}S =Q^{-*}(I+G_{r})^{-1}(I+G)Q^{*}.\] By straightforward calculation we have, for \(i=1,\ldots,n\): \[\lambda_{i}(Q^{-1}B_{r}Q^{-*})=\lambda_{i}(A)^{-1}\lambda_{i}(B),\quad\lambda_{i}(G_{r})=\lambda_{n-i+1}(A)^{-1}\lambda_{i}(B),\] which clearly shows the scaling effect. There is no difference when the spectrum of \(A\) is flat, but if the spectrum of \(A\) decays as above, then \(\lambda_{i}(Q^{-1}B_{r}Q^{-*})\) will be dominated by \(\lambda_{i}(G_{r})\) in which case: \[1\leq\lambda_{i}(\widehat{S}_{r}^{-1}S)\leq\lambda_{i}(\tilde{S}_{r}^{-1}S), \qquad i=1,\ldots,n.\] We highlight a situation where \(\widehat{S}_{r}=\tilde{S}_{r}\). Suppose that we had defined \(B\) by reversing the order of the diagonal elements in (4), i.e., \[B^{\text{rev}}=\operatorname{diag}(0.1,0.25,0.5,1,0,0).\] In this case, \[G^{\text{rev}}=\operatorname{diag}(0.0909,0.2381,\mathbf{1.3333},\mathbf{20},0,0), \tag{6}\] and \[Q^{-1}B_{r}^{\text{rev}}Q^{-*}=\operatorname{diag}(0,0,\mathbf{1.3333},\mathbf{20},0,0)=G_{r}^{\text{rev}}.\] By comparing (5) and (6), we notice the largest eigenvalues (in bold) have different indices, representing the loss of information when prematurely truncating \(B\) before scaling by the factors of \(A^{-1}\). This suggests that this scaling can, in some circumstances, improve the approximation of \(S\). While this approach uses \(G\) explicitly, we show in Section 5 economical ways of computing with it using only matrix-vector products. In the general case where the basis is not given by the identity, analysis of the generalised eigenvalues is of course insufficient to determine the quality of the associated preconditioners, a point to which we return later. ## 3 Preconditioners via Bregman Projections It is well-known that a truncated singular value decomposition is an optimal low-rank approximation in both the spectral and Frobenius norms thanks to the Eckart-Young-Mirsky theorem. These norms are not invariant under congruence transformations, i.e., \(Z\mapsto P^{*}ZP\) for some invertible \(P\), and we will discover in Section 4 that the preconditioners \(\tilde{S}_{r}\) and \(\widehat{S}_{r}\) are optimal solutions to different matrix norm minimisation problems. In this section, we introduce a different nearness measure, the Bregman divergence, which naturally leads to the preconditioner \(\widehat{S}_{r}\). 
The Bregman matrix divergence \(\mathcal{D}_{\phi}:\operatorname{dom}\phi\times\operatorname{ri}\operatorname {dom}\phi\to[0,\infty)\) associated with a proper, continuously-differentiable, strictly convex _seed_ function \(\phi\) is defined as follows: \[\mathcal{D}_{\phi}(X,Y)=\phi(X)-\phi(Y)-\operatorname{trace}\big{(}\nabla\phi( Y)^{*}(X-Y)\big{)}.\] The choices \(\phi(X)=\operatorname{trace}\big{(}X\log X-X\big{)}\), \(\phi(X)=\|X\|_{F}^{2}\) and \(\phi(X)=-\log\det(X)\) lead to the _von Neumann, squared Frobenius_ and _log determinant_ (or _Burg_) divergences, respectively: \[\mathcal{D}_{\text{VN}}(X,Y)=\operatorname{trace}\big{(}X\log X -X\log Y-X+Y\big{)}, \operatorname{dom}\phi =\mathbb{H}_{++}^{n},\] \[\mathcal{D}_{\text{F}}(X,Y)=\|X-Y\|_{F}^{2}, \operatorname{dom}\phi =\mathbb{H}^{n},\] \[\mathcal{D}_{\text{LD}}(X,Y)=\operatorname{trace}\big{(}XY^{-1} \big{)}-\log\det(XY^{-1})-n, \operatorname{dom}\phi =\mathbb{H}_{++}^{n}.\] By a limit argument [15, Section 4], the log determinant matrix divergence is only finite if \(\operatorname{range}X=\operatorname{range}Y\), or \(\operatorname{range}X\subseteq\operatorname{range}Y\) for the von Neumann divergence. We highlight a few useful properties of the divergences, collectively denoted by \(\mathcal{D}_{\phi}\)[3]: * \(\mathcal{D}_{\phi}(X,Y)=0\Leftrightarrow X=Y\), * _Nonnegativity_: \(\mathcal{D}_{\phi}(X,Y)\geq 0\), * _Convexity_: \(X\to\mathcal{D}_{\phi}(X,Y)\) is convex. Figure 1: Eigenvalues of \(S\) and generalised eigenvalues of the preconditioned system using the preconditioners \(\tilde{S}_{r}\) and \(\widehat{S}_{r}\). We also state the following facts about these divergences: **Corollary 1** ([15, Corollary 2]).: _Given \(X=U\operatorname{diag}(\lambda)U^{*}\) and \(Y=V\operatorname{diag}(\theta)V^{*}\) with eigenvectors \(u_{i}\) and \(v_{i}\), respectively, the squared Frobenius, von Neumann and log determinant matrix divergences satisfy:_ \[\mathcal{D}_{\text{LD}}(X,Y) =\sum_{i=1}^{n}\sum_{j=1}^{n}(v_{i}^{*}u_{j})^{2}\left[\frac{ \lambda_{i}}{\theta_{j}}-\log\frac{\lambda_{i}}{\theta_{j}}-1\right], \tag{8a}\] \[\mathcal{D}_{\text{VN}}(X,Y) =\sum_{i=1}^{n}\sum_{j=1}^{n}(v_{i}^{*}u_{j})^{2}[\lambda_{i}\log \lambda_{i}-\lambda_{i}\log\theta_{j}-\lambda_{i}+\theta_{j}],\] (8b) \[\mathcal{D}_{\text{F}}(X,Y) =\sum_{i=1}^{n}\sum_{j=1}^{n}(v_{i}^{*}u_{j})^{2}[\lambda_{i}- \theta_{j}]^{2}. \tag{8c}\] **Proposition 1** ([15, Proposition 12]).: _Let \(P\) be invertible so \(Z\mapsto f(Z)=P^{*}ZP\) is a congruence transformation. Then,_ \[\mathcal{D}_{\text{LD}}(X,Y)=\mathcal{D}_{\text{LD}}(f(X),f(Y)).\] From Proposition 1, we have \[\mathcal{D}_{\text{LD}}(\widehat{S}_{r},S) =\mathcal{D}_{\text{LD}}(Q(I+G_{r})Q^{*},Q(I+Q^{-1}BQ^{-*})Q^{*})\] \[=\mathcal{D}_{\text{LD}}(I+G_{r},I+Q^{-1}BQ^{-*}).\] In light of this, we ask if \(G_{r}\) is in fact a solution to the problem \[\operatorname*{minimise}_{X\in\mathbb{H}_{+}^{n}} \mathcal{D}_{\text{LD}}(\mathcal{P},S)\] (9a) s.t. \[\mathcal{P}=Q(I+X)Q^{*} \tag{9b}\] \[\operatorname{rank}(X)\leq r, \tag{9c}\] or, equivalently, \[\operatorname*{minimise}_{X\in\mathbb{H}_{+}^{n}} \mathcal{D}_{\text{LD}}(I+X,I+Q^{-1}BQ^{-*})\] s.t. \[\operatorname{rank}(X)\leq r.\] We answer this question in the affirmative with the following Theorem. **Theorem 1**.: \(\widehat{S}_{r}\) _defined in (2b) is a minimiser of (9). 
Furthermore, the first \(i\) eigenvalues of \(\widehat{S}_{r}^{-1}S\) are given by \(1+\lambda_{r+i}(G)\), \(i=1,\ldots,\operatorname{rank}B-r\), and the remaining ones are given by \(1\) with multiplicity \(n+r-\operatorname{rank}B\)._ Proof.: We derive a lower bound for \(\mathcal{P}\mapsto\mathcal{D}_{\text{LD}}(\mathcal{P},S)\). Let \(\mathcal{P}=V\Sigma V^{*}\), \(\Sigma=\operatorname{diag}(\sigma)\) and \(S=U\Lambda U^{*}\), \(\Lambda=\operatorname{diag}(\lambda)\). The first term of the divergence is \(\operatorname{trace}\big{(}\mathcal{P}S^{-1}\big{)}\), which can be written as \[\operatorname{trace}\big{(}\mathcal{P}S^{-1}\big{)}=\operatorname{trace} \big{(}V\Sigma V^{*}U\Lambda^{-1}U^{*}\big{)}=\operatorname{trace}\big{(} \Sigma P\Lambda^{-1}P^{*}\big{)},\] where \(P=V^{*}U\) is unitary. Since \(P\mapsto\operatorname{trace}\big{(}\Sigma P\Lambda^{-1}P^{*}\big{)}\) is a continuous map over a compact set (the orthogonal group), it will attain its extrema in this set by Weierstrass' theorem. As a result, we obtain the following lower bound [2]: \[\operatorname{trace}\big{(}\mathcal{P}S^{-1}\big{)}\geq\sum_{i=1}^{n}\sigma_ {i}\lambda_{i}^{-1}.\] Now note that \(I+G_{r}\) and \(I+Q^{-1}BQ^{-*}\) are simultaneously diagonalisable so they share an eigenbasis with eigenvectors \(u_{i}\), \(i=1,\ldots n\). Let \(1+\nu_{i}\) be the eigenvalues of \(I+G_{r}\) and \(1+\mu_{i}\) the eigenvalues of \(I+Q^{-1}BQ^{-*}\). We have, by construction, \(\nu_{i}=\mu_{i}\) for \(1\leq i\leq r\), \(\nu_{i}=0\) for \(r+1\leq i\). Then we have the lower bound on the log determinant term as a function of \(\Sigma\), which is realised by the choice \(X=G_{r}\) in (9b): \[-\log\det(\mathcal{P}S^{-1})=-\sum_{i=1}^{n}\log\left(\frac{1+\sigma_{i}}{1+ \mu_{i}}\right)\geq-\sum_{i=r+1}^{n}\log\left(\frac{1}{1+\mu_{i}}\right).\] This implies that for any \(\mathcal{P}\), we have \[\mathcal{D}_{\mathrm{LD}}(\mathcal{P},S)\geq\sum_{i=1}^{n}\sigma_{i}\lambda_{i}^{ -1}-\sum_{i=r+1}^{n}\log\left(\frac{1}{1+\mu_{i}}\right)-n. \tag{10}\] Using Corollary 1 and orthogonality of the eigenvectors we obtain \[\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S) =\mathcal{D}_{\mathrm{LD}}(I+G_{r},I+Q^{-1}BQ^{-*})\] \[=\sum_{i=1}^{n}\sum_{j=1}^{n}(u_{i}^{*}u_{j})^{2}\left(\frac{1+ \nu_{i}}{1+\mu_{j}}-\log\left(\frac{1+\nu_{i}}{1+\mu_{j}}\right)-1\right)\] \[=\sum_{j=r+1}^{\mathrm{rank}\,B}\left(\frac{1}{1+\mu_{j}}-\log \left(\frac{1}{1+\mu_{j}}\right)-1\right). \tag{11}\] This coincides with the lower bound (10), so the first result follows. The statement concerning the generalised eigenvalues follows since \(G_{r}\) and \(G\) are simultaneously diagonalisable by construction. We can also ask if \(G_{r}\) satisfies a similar result. Since the domain of the divergence is \(\mathbb{H}_{++}^{n}\times\mathbb{H}_{++}^{n}\) we cannot say that \(G_{r}\) is the matrix such that \(\mathcal{D}_{\mathrm{LD}}(G_{r},G)\) is minimised. We can, however, state a different result. **Proposition 2**.: _Let \(VMV^{*}\), \(M=\mathrm{diag}(\mu)\) be an eigendecomposition of \(G=Q^{-1}BQ^{-*}\) and \(V_{r}\) the first \(r<n\) columns of \(V\). The rank \(r\) approximation of \(G\) obtained by a truncated singular value decomposition, \(G_{r}\), is a minimiser of \(\mathcal{D}(V_{r}^{*}G_{r}V_{r},V_{r}^{*}GV_{r})\), where \(\mathcal{D}\) can be either the von Neumann or log determinant divergence. Furthermore,_ \[\mathcal{D}_{\mathrm{VN}}(G_{r},G)=\sum_{j=r+1}^{\mathrm{rank}\,B}\mu_{j}.\] Proof.: This follows from direct computation. 
Indeed, \(V_{r}^{*}GV_{r}=M_{r}\), where \(M_{r}\) is the \(\mathbb{R}^{r\times r}\) submatrix of \(M\) containing the first \(r\) eigenvalues of \(G\). So, by construction, \[\mathcal{D}(V_{r}^{*}G_{r}V_{r},V_{r}^{*}GV_{r}) =\mathcal{D}(V_{r}^{*}V_{r}M_{r}V_{r}^{*}V_{r},V_{r}^{*}VMV^{*}V_{r})\] \[=\mathcal{D}(M_{r},M_{r})\] \[=0.\] Since \(0\) is a lower bound of the convex function \(X\mapsto\mathcal{D}(V_{r}^{*}XV_{r},V_{r}^{*}GV_{r})\), we are done. [15] extends the divergences to low-rank matrices, which will be explored in more detail in the context of Nystrom approximations in Section 4. **Remark 1**.: _The quantities \(\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S)\) and \(\mathcal{D}_{\mathrm{LD}}(\tilde{S}_{r},S)\) are described in terms of the eigenvalues of \(I+G_{r}\), \(I+Q^{-1}BQ^{-*}\) and \(I+Q^{-1}B_{r}Q^{-*}\). By construction,_ \[\lambda_{i}(I+G_{r})=\lambda_{i}(I+Q^{-1}BQ^{-*})\quad\mathrm{when}\quad 1\leq i\leq r,\] _so the divergence \(\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S)\) in (11) measures a quantity in terms of the \(i\)th eigenvalues, where \(r+1\leq i\leq n\). An attempt to carry out the same analysis for \(\mathcal{D}_{\mathrm{LD}}(\tilde{S}_{r},S)\) makes it clear why we expect \(\widehat{S}_{r}\) to be a better preconditioner. The derivation of the bound (11) relies on the fact that \(I+G_{r}\) and \(I+Q^{-1}BQ^{-*}\) are simultaneously diagonalisable. In general, however, \(Q^{-1}B_{r}Q^{-*}\neq G_{r}\) (see Corollary 3 below). Letting \((u_{i},\mu_{i})\) and \((\tilde{u}_{i},\tilde{\nu}_{i})\) denote the eigenpairs of \(G\) and \(Q^{-1}B_{r}Q^{-*}\), respectively, we have:_ \[\mathcal{D}_{\mathrm{LD}}(\tilde{S}_{r},S)=\sum_{i=1}^{n}\sum_{j=1}^{n}(\tilde{u}_{i}^{*}u_{j})^{2}\left(\frac{1+\tilde{\nu}_{i}}{1+\mu_{j}}-\log\left(\frac{1+\tilde{\nu}_{i}}{1+\mu_{j}}\right)-1\right). \tag{12}\] _The importance of the choice of basis is also apparent from the term \((\tilde{u}_{i}^{*}u_{j})^{2}\) in (12), since this measures how aligned the bases are between \(S\) and its preconditioner. To summarise, a reason for \(\widehat{S}_{r}\) appearing to be a better preconditioner is that it matches the eigenvalues of \(S\) in the first part of the spectrum, i.e. the indices \([1,r]\), which directly implies unit eigenvalues of the multiplicity mentioned in Theorem 1._ **Corollary 2**.: \(\mathcal{D}_{\mathrm{LD}}(\tilde{S}_{r},S)\geq\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S)\)_._ While Corollary 2 is trivial, its significance in terms of interpreting preconditioning strategies for \(S\) is not yet clear. If \(\mathcal{D}(X,Z)<\mathcal{D}(Y,Z)\), then we in general expect the matrix \(XZ^{-1}\) to be closer to the identity than \(YZ^{-1}\), but the quality of a preconditioner depends not only on the bounds for the eigenvalues but also on the distribution of the spectrum. Based on Remark 1 and Theorem 1, the basis for the preconditioner also plays a part. This is discussed in Section 3.3 in the context of iterative methods. **Corollary 3**.: \(\widehat{S}_{r}=\tilde{S}_{r}\) _when \(A\) is a scaled identity._ Proof.: Let \(A=\alpha I\) for some \(\alpha>0\). Then, \[G_{r}=(\alpha^{-1/2}B\alpha^{-1/2})_{r}=\alpha^{-1}B_{r}=Q^{-1}B_{r}Q^{-*}.\] **Theorem 2**.: \(\widehat{S}_{r}\) _defined in (2b) is a minimiser of the problem_ \[\underset{W\in\mathbb{H}^{n}_{+}}{\text{minimise}} \kappa_{2}((A+W)^{-1}S) \tag{13a}\] s.t. \[\mathrm{rank}(W)\leq r. \tag{13b}\] Proof.: By definition, we have \(\kappa_{2}((A+W)^{-1}S)=\frac{\sigma_{1}((A+W)^{-1}S)}{\sigma_{n}((A+W)^{-1}S)}\). 
We find a lower bound for \(\sigma_{1}((A+W)^{-1}S)\), independent of the choice of \(W\), \[\sigma_{1}((A+W)^{-1}S)=\sigma_{1}((I+Q^{-1}WQ^{-*})^{-1}(I+G))\geq 1+\lambda_{r+1}(G).\] Similarly, \(\sigma_{n}((A+W)^{-1}S)\leq 1\), so \[\kappa_{2}((A+W)^{-1}S)\geq 1+\lambda_{r+1}(G)\] for any \(W\). The choice \(W=QG_{r}Q^{*}\) leads to \[A+W=Q(I+G_{r})Q^{*}=\widehat{S}_{r}.\] The condition number \(\kappa_{2}(\widehat{S}_{r}^{-1}S)\) is \(1+\lambda_{r+1}(G)\) thanks to Theorem 1, so we are done. ### Choice of Splitting So far we assumed _a priori_ that \(S\) admits a splitting into a positive definite term \(A\) and a remainder term \(B\), in which case Theorem 1 tells us how to construct a preconditioner of the specific form \(A+X\), for some low-rank matrix \(X\). We now address the following problem, where we denote by \(\Omega\subset\mathbb{C}^{n\times n}\) some set of admissible factorisable matrices: \[\underset{W\in\mathbb{H}^{n}_{+}}{\text{minimise}} \mathcal{D}_{\mathrm{LD}}(X,S) \tag{14a}\] s.t. \[X=V+W \tag{14b}\] \[\mathrm{rank}(W)\leq r, \tag{14c}\] \[V\in\Omega. \tag{14d}\] Without the constraint (14d), an optimal solution to (14) is clearly \(V=S,W=0\). The admissibility set \(\Omega\) can therefore be viewed as the possible choices of matrices \(V\) whose factors \(Q\) define the congruence transformation used in (9). There is no _a priori_ way of choosing an optimal split for an arbitrary \(S\in\mathbb{H}^{n}_{++}\) without deeper insight into its structure. It is, however, useful to reason about (14) to develop heuristics for practical use. We provide an example of \(\Omega\) in Section 3.2 that recovers the standard Jacobi preconditioner. Some splits of \(S\) may be more practical than others. For instance, if \(S\) is provided as a sum of a sparse indefinite term \(C_{\mathrm{sparse}}\) and a low-rank term \(K\) (whose sum is positive definite), then one option is to let \(A=C_{\mathrm{sparse}}+\alpha I\), \(B=K-\alpha I\) for some suitable constant \(\alpha\). Further, if a factorisation of the positive definite term is not readily available, then an incomplete one can be used to a similar effect by compensating for the remainder in the low-rank approximation. It can even be done _on-the-fly_: suppose \(S=\tilde{A}+\tilde{B}\) where the matrix \(\tilde{A}\) is a given candidate for factorisation (not necessarily known to be positive definite). We attempt to perform a Cholesky factorisation, and if a negative pivot is encountered, then it is replaced by some small positive value. This amounts to producing a factorisation \(LL^{*}=\tilde{A}+D\) for some suitable diagonal \(D\in\mathbb{H}_{+}^{n}\). Then, taking \(A=LL^{*}\) results in the following choice of splitting: \[S=A+B=LL^{*}+(\tilde{B}-D).\] As another example, suppose \(S=\tilde{A}+\tilde{B}\) where \(\tilde{A}\) is "almost" a circulant matrix, e.g., \(A_{\mathrm{circ}}=\tilde{A}+R_{\mathrm{circ}}\) is circulant for some \(R_{\mathrm{circ}}\). Factorisations of \(A_{\mathrm{circ}}\) are typically faster to compute, so we can construct a preconditioner from the splitting \(A=A_{\mathrm{circ}}\), \(B=\tilde{B}-R_{\mathrm{circ}}\). This approach extends to any situation where the positive definite part of \(S\) is a small perturbation away from a structured matrix that is easier to factorise. We emphasise that this is highly dependent on the problem at hand and may perform arbitrarily poorly in practice. An investigation of such approaches is deferred to future work. 
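The on-the-fly splitting described above can be sketched as a dense, unpivoted Cholesky factorisation in which any nonpositive pivot is replaced by a small positive value; the accumulated shifts form the diagonal matrix \(D\). The threshold `eps` and the test matrix below are illustrative assumptions, and a production implementation would of course work with sparse or structured factors instead.

```python
import numpy as np

def shifted_cholesky(A_tilde, eps=1e-8):
    """Right-looking Cholesky in which any nonpositive pivot is replaced by eps.
    Returns L and a diagonal shift d such that L @ L.T == A_tilde + np.diag(d)."""
    M = np.array(A_tilde, dtype=float, copy=True)
    n = M.shape[0]
    L = np.zeros_like(M)
    d = np.zeros(n)
    for k in range(n):
        pivot = M[k, k]
        if pivot <= 0.0:
            d[k] = eps - pivot                 # record the shift that makes the pivot positive
            pivot = eps
        L[k, k] = np.sqrt(pivot)
        L[k + 1:, k] = M[k + 1:, k] / L[k, k]
        M[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])   # Schur complement update
    return L, d

# Illustrative indefinite candidate for factorisation (assumption, not from the paper)
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
A_tilde = (X + X.T) / 2
L, d = shifted_cholesky(A_tilde)
print(np.allclose(L @ L.T, A_tilde + np.diag(d)))      # yields the splitting S = L L^* + (B_tilde - D)
```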
### Diagonal Preconditioners The framework above recovers the standard block Jacobi preconditioner. Suppose we want to construct a preconditioner \(\mathcal{P}\in\mathbb{H}_{++}^{n}\) for the system (1) on the form \[\mathcal{P}=\mathtt{blkdiag}(D_{1},\ldots,D_{b}), \tag{15}\] with \(b\) blocks of size \(n_{i}\), \(i=1,\ldots,b\), such that \(n=\sum_{i=1}^{b}n_{i}\). Let us denote by \(E_{i}\in\mathbb{R}^{n\times n_{i}}\) the matrix constructed from the identity matrix of order \(n\) by deleting columns such that \[D_{i}=E_{i}^{*}\mathcal{P}E_{i},\quad i=1,\ldots,b.\] Letting \(\mathbf{P}\) denote matrices on the form (15), we formulate the following optimisation problem: \[\operatorname*{minimise}_{\mathcal{P}\in\mathbf{P}}\quad\mathcal{D}_{\mathrm{ LD}}(S,\mathcal{P}). \tag{16}\] Using \(\mathcal{D}_{\mathrm{LD}}(S,\mathcal{P})=\mathcal{D}_{\mathrm{LD}}(\mathcal{P }^{-1},S^{-1})\), the optimality conditions are: \[S:E_{i}D_{i}^{-1}\,\mathrm{d}D_{i}D_{i}^{-1}E_{i}^{*}=\mathcal{P}:E_{i}D_{i}^{ -1}\,\mathrm{d}D_{i}D_{i}^{-1}E_{i}^{*},\quad i=1,\ldots,b,\] so \(E_{i}^{*}SE_{i}=D_{i}\). A minimiser of (16) is therefore the following block Jacobi preconditioner: \[\mathcal{P}^{*}=\mathtt{blkdiag}(E_{1}^{*}SE_{1},\ldots,E_{b}^{*}SE_{b}).\] ### Preconditioned Iterative Methods The following convergence results for PCG are well-known [11]: **Theorem 3**.: _Let \(S\in\mathbb{H}_{++}^{n}\), \(\mathcal{M}\) denote the inverse of a preconditioner and \(x_{0}\in\mathbb{C}^{n}\) an initial guess and \(Sx=b\) for some \(b\). Then:_ \[\|x-x_{k}\|_{S} =\inf_{\begin{subarray}{c}p_{k}\in\mathcal{P}_{k}\\ p_{k}(0)=1\end{subarray}}\|p_{k}(\mathcal{M}S)(x-x_{0})\|_{S},\] \[\|x-x_{k}\|_{S} \leq\inf_{\begin{subarray}{c}p_{k}\in\mathcal{P}_{k}\\ p_{k}(0)=1\end{subarray}}\sup_{z\in\lambda(\mathcal{M}S)}|p_{k}(z)|\|x-x_{0}\| _{S},\] \[\|x-x_{k}\|_{S} \leq 2\Big{(}\frac{\sqrt{\kappa_{2}(\mathcal{M}S)}-1}{\sqrt{\kappa_{ 2}(\mathcal{M}S)}+1}\Big{)}^{k}\|x-x_{0}\|_{S},\] _where \(\mathcal{P}_{k}\) is the space of polynomials of order at most \(k\)._ The condition number of \(\mathcal{M}S\) plays a role in the convergence theory above, but the clustering of the eigenvalues is also a factor. Recall the following theorem: **Theorem 4** ([10, Theorem 10.2.5]).: _If \(A=I+B\in\mathbb{H}_{++}^{n}\) and \(r=\mathrm{rank}(B)\), then the conjugate gradient method converges in at most \(r+1\) iterations._ Theorem 1 guarantees multiplicity \(n-\mathrm{rank}\,B+r\) of unit eigenvalues and the explicit characterisation of the remaining ones which together with Theorem 4 leads to the following result. **Corollary 4**.: _Let \(\mu_{i}\), \(i=1,\ldots,n\) denote the eigenvalues of \(\widehat{S}_{r}^{-1}S\). Using \(\widehat{S}_{r}\) defined in (2b) as a preconditioner for the system in (1) for some initial guess \(x_{0}\), then:_ \[\|x-x_{k}\|_{S} \leq\inf_{\begin{subarray}{c}p_{k}\in\mathcal{P}_{k}\\ p_{k}(0)=1\end{subarray}}\sup_{r\leq j\leq\operatorname{rank}B}|p_{k}(1+\mu_{ j})|\|x-x_{0}\|_{S}, \tag{17a}\] \[\|x-x_{k}\|_{S} \leq 2\big{(}\frac{\sqrt{1+\mu_{r+1}}-1}{\sqrt{1+\mu_{r+1}}+1} \big{)}^{k}\|x-x_{0}\|_{S}. \tag{17b}\] _Further, assuming exact arithmetic, PCG using \(\widehat{S}_{r}\) as a preconditioner for solving \(Sx=b\) converges in at most \(\operatorname{rank}(B)-r+1\) steps._ An attractive property of \(\widehat{S}_{r}\) is that it clusters the eigenvalues of \(\widehat{S}_{r}^{-1}S\) leading to the supremum in (17a) being taken over \(\mu_{j}\), \(j=r,\ldots,\operatorname{rank}B\). 
If we were to use \(\tilde{S}_{r}\), we cannot deduce any information about the eigenvalues of \(\tilde{S}_{r}^{-1}S\) or any clustering. In view of (12) and Remark 1, this preconditioner most likely leads to a lower multiplicity of unit eigenvalues. The bound (17b) also shows that we expect PCG using \(\widehat{S}_{r}\) to converge quickly when \(\mu_{r+1}\) is small. We support the results and the intuition above with a numerical study in Section 6. ## 4 Low-rank Approximations as Minimisers In what precedes we found \(\widehat{S}_{r}\) as a minimiser of a Bregman divergence. Below, we also find that \(\widehat{S}_{r}\) is a solution of the following _scaled_ norm minimisation problem, where \(\|\cdot\|\) can be either the spectral or Frobenius norm. **Proposition 3**.: \(\widehat{S}_{r}\) _defined in (2b) is a minimiser of_ \[\operatorname*{minimise}_{\mathcal{P}\in\mathbb{H}^{n}_{++}} \|Q^{-1}(S-\mathcal{P})Q^{-*}\|^{2} \tag{18a}\] s.t. \[\mathcal{P}=Q(I+X)Q^{*} \tag{18b}\] \[\operatorname{rank}(X)\leq r. \tag{18c}\] Proof.: This follows from computation: \[\|Q^{-1}(S-\mathcal{P})Q^{-*}\|_{2}^{2}=\|Q^{-1}BQ^{-*}-X\|_{2}^{2}.\] The truncated SVD is optimal in the spectral and Frobenius norm, so we must have \(X=G_{r}\) and the result follows. We now adopt a more general setting to extend the result above. In light of the singular value decomposition, the task of finding a preconditioner for a system \(Hx=b\), \(H\in\mathbb{H}^{n}_{++}\) can be thought of as a matrix-nearness problem consisting of 1. _finding an approximation of the range of_ \(H\) _(and in the Hermitian case also the row space),_ 2. _finding an approximation of the eigenvalues of_ \(H\)_,_ subject to a measure of nearness between the approximant and the matrix \(H\in\mathbb{C}^{n\times n}\). A truncated SVD provides an _optimal truncated basis_ for the approximation. For large matrices, this is a prohibitively costly operation. _Range finders_ can be used when the basis of a matrix approximation is unknown. These are matrices \(\Theta\in\mathbb{C}^{n\times r}\) with orthonormal columns such that \[\|H-\Theta\Theta^{*}H\|_{F}\leq\epsilon, \tag{19}\] for some threshold \(\epsilon\) (step 1 above). \(\Theta\) can be computed from a QR decomposition of \(H\Omega\), \(\Omega\in\mathbb{R}^{n\times r}\), where \(\Omega\) is a test or _sketching_ matrix. For Hermitian matrices, this results in the following randomised approximation: \[H\langle\Omega\rangle=\Theta\Theta^{*}H\Theta\Theta^{*}. \tag{20}\] In practice, this is the province of randomised linear algebra where \(\Omega\) is often drawn from a Gaussian distribution, the analysis and discussion of which is deferred to Section 5. The purpose of the remainder of this section is to discuss the truncated singular value decomposition and the Nystrom approximation before introducing randomness. In this section, we assume \(\Omega\) is simply chosen such that (19) holds. The next result presents the Nystrom approximation [12, 4]. **Proposition 4** (Nystrom approximation: Frobenius minimiser).: _Let \(\Omega\in\mathbb{R}^{n\times r}\) be a test matrix of full rank. 
Then_ \[H^{\text{Nys}}\langle\Omega\rangle=(H\Omega)(\Omega^{*}H\Omega)^{+}(H\Omega)^{*}\] _is the Nystrom approximation of \(H\in\mathbb{H}_{+}^{n}\) (whose Hermitian square root is denoted by \(H^{\frac{1}{2}}\)), where \(X^{\star}=(\Omega^{*}H\Omega)^{+}\) is found as the minimiser of the following problem where \(Y=H\Omega\):_ \[\operatorname*{minimise}_{X\in\mathbb{H}^{r}}\ \|(H^{\frac{1}{2}})^{+}(H-YXY^{*})(H^{ \frac{1}{2}})^{+}\|_{F}^{2}. \tag{21}\] Proof.: The optimality conditions of (21) are: \[Y^{*}H^{+}(H-YXY^{*})H^{+}Y:\,\mathrm{d}X=0.\] Simplifying yields \[\Omega^{*}H\Omega=\Omega^{*}(H\Omega)X(H\Omega)^{*}\Omega,\] so \(X=(\Omega^{*}H\Omega)^{+}\). It is sometimes beneficial to select \(\Omega\in\mathbb{R}^{n\times(r+p)}\) for some small integer \(p\). In the context of randomised approaches, this is called _oversampling_ and will be described in more detail in Section 5. We briefly mention how to incorporate this into the formulation above. Let \(U_{r}\) denote the \(r\) leading left singular vectors of \(H\Omega\). Then, substituting \(U_{r}\) for \(Y\) in (21) leads to the solution \(X^{+}=U_{r}^{*}HU_{r}\), so \[H\approx U_{r}(U_{r}^{*}HU_{r})^{+}U_{r}^{*}.\] We can also write the Nystrom approximation as the minimiser of a Bregman divergence minimisation problem. In [15], the divergence is extended to rank \(r\leq n\) matrices \(X\), \(Y\in\mathbb{H}_{+}^{n}\) via the definition \[\mathcal{D}_{\text{LD}}^{Z}(X,Y)=\mathcal{D}_{\text{LD}}(Z^{*}XZ,Z^{*}YZ),\] where \(Z\in\mathbb{C}^{n\times r},\operatorname{rank}(Z)=r\). Since any such \(Z\) can be written a product \(Z=OW\), where \(O\in\mathbb{C}^{n\times r}\) has orthonormal columns and \(W\in\mathbb{C}^{r\times r}\) has full rank, we know that \[\mathcal{D}_{\text{LD}}(Z^{*}XZ,Z^{*}YZ)=\mathcal{D}_{\text{LD}}(O^{*}XO,O^{*} YO).\] This leads to the following observation. **Proposition 5** (Nystrom approximation: Bregman divergence minimiser).: _Instate the hypotheses of Proposition 4, and assume \(H\Omega\) has full rank. Then, \(H^{\text{Nys}}\langle\Omega\rangle\) is a minimiser of the following optimisation problem:_ \[\operatorname*{minimise}_{W\in\mathbb{H}_{+}^{n}} \mathcal{D}_{\text{LD}}(\Omega^{*}W\Omega,\Omega^{*}H\Omega)\] (22a) s.t. \[\operatorname{range}W\subseteq\operatorname{range}H\Omega. \tag{22b}\] Proof.: As a result of equation (22b), \(W\) has the form \[W=YXY^{*},\] for some \(X\in\mathbb{H}_{++}^{r}\) where \(Y=H\Omega\). Using this to eliminate the constraint, we can write (22) as \[\operatorname*{minimise}_{X\in\mathbb{H}_{++}^{r}}\ \ \ \mathcal{D}_{\text{LD}}(\Omega^{*}YXY^{*} \Omega,\Omega^{*}H\Omega). \tag{23}\] The optimality conditions of (23) are \[(\Omega^{*}H\Omega)^{-1}:\Omega^{*}Y\,\mathrm{d}XY^{*}\Omega-(\Omega^{*}YXY^{ *}\Omega)^{-*}:\Omega^{*}Y\,\mathrm{d}XY^{*}\Omega=0,\] which can be rewritten as \[(\Omega^{*}H\Omega)^{-1}=(\Omega^{*}YXY^{*}\Omega)^{-*}.\] Therefore, \(X=(\Omega^{*}H\Omega)^{-1}\), and hence \[W=(H\Omega)(\Omega^{*}H\Omega)^{-1}(H\Omega)^{*}=Y(Y^{*}\Omega)^{-1}Y^{*}. \tag{24}\] Note that when \(H\Omega\) does not have full rank, we make an additional restriction to a subspace where \(\Omega^{*}H\Omega\in\mathbb{H}_{++}^{r}\) so that (23) is finite. This is the case when \(H\in\mathbb{H}_{+}^{n}\) and the inverse in (24) becomes a pseudoinverse. As a consequence of Proposition 5, the Nystrom approximation has the following invariance. 
**Corollary 5**.: _Suppose \(H^{\text{Nys}}\langle\Omega\rangle\) is given as in Proposition 5 and let \(\Theta R=\Omega\) denote a QR decomposition of \(\Omega\). Then,_ \[H^{\text{Nys}}\langle\Omega\rangle=H^{\text{Nys}}\langle\Theta\rangle.\] **Remark 2** (Nystrom approximation: partial \(LDL^{*}\) factorisation).: _We can also characterise the Nystrom approximation as a partial \(LDL^{*}\) factorisation when \(H\in\mathbb{H}_{+}^{n}\). Let \(U=\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}\) be a unitary matrix with \(U_{1}\in\mathbb{C}^{n\times r}\) and \(U_{2}\in\mathbb{C}^{n\times(n-r)}\). Then_ \[\tilde{H}=U^{*}HU=\begin{bmatrix}U_{1}^{*}HU_{1}&U_{1}^{*}HU_{2}\\ U_{2}^{*}HU_{1}&U_{2}^{*}HU_{2}\end{bmatrix}=\begin{bmatrix}\tilde{H}_{11}& \tilde{H}_{21}^{*}\\ \tilde{H}_{21}&\tilde{H}_{22}\end{bmatrix}\] _can be factorized as_ \[\tilde{H} =\begin{bmatrix}I&0\\ \tilde{H}_{21}\tilde{H}_{11}^{+}&I\end{bmatrix}\begin{bmatrix}\tilde{H}_{11}& 0\\ 0&\tilde{H}_{22}-\tilde{H}_{21}\tilde{H}_{11}^{+}\tilde{H}_{21}^{*}\end{bmatrix} \begin{bmatrix}I&\tilde{H}_{11}^{+}\tilde{H}_{21}^{*}\\ 0&I\end{bmatrix}\] \[=\begin{bmatrix}\tilde{H}_{11}&0\\ \tilde{H}_{21}\tilde{H}_{11}^{+}\tilde{H}_{11}&I\end{bmatrix}\begin{bmatrix} \tilde{H}_{11}^{+}&0\\ 0&\tilde{H}_{22}-\tilde{H}_{21}\tilde{H}_{11}^{+}\tilde{H}_{21}^{*}\end{bmatrix} \begin{bmatrix}\tilde{H}_{11}&\tilde{H}_{11}\tilde{H}_{11}^{+}\tilde{H}_{21}^ {*}\\ 0&I\end{bmatrix}\] \[=L\begin{bmatrix}D_{1}&0\\ 0&D_{2}\end{bmatrix}L^{*}.\] _Setting \(D_{2}=0\) in this expression we have:_ \[L\begin{bmatrix}D_{1}&0\\ 0&0\end{bmatrix}L^{*}=\begin{bmatrix}\tilde{H}_{11}\\ \tilde{H}_{21}\end{bmatrix}\tilde{H}_{11}^{+}\begin{bmatrix}\tilde{H}_{11}\\ \tilde{H}_{21}\end{bmatrix}^{*}=LD_{1}L.\] _Using \(Y=HU_{1}\) in the definition of the Nystrom approximation (24) we obtain:_ \[H^{\text{Nys}}\langle U_{1}\rangle =(HU_{1})(U_{1}^{*}HU_{1})^{+}(HU_{1})^{*}\] \[=ULD_{1}L^{*}U^{*}.\] _This is aligned with [17][Proposition 11.1], which explains that the Nystrom approximation error is measured by the Schur complement \(D_{2}\) that was dropped in the steps above. Indeed,_ \[U^{*}HU-H^{\text{Nys}}\langle U_{1}\rangle=\begin{bmatrix}0&0\\ 0&D_{2}\end{bmatrix}.\] ## 5 Practical Design Using Randomised Linear Algebra In this section, we present a way to approximate the positive semidefinite term \(G_{r}\) of \(\widehat{S}_{r}\) efficiently using randomised linear algebra rather than computing an SVD of \(G\) and truncating it to order \(r\). We do not wish to form \(G\) so we instead compute a rank \(r\)_randomised_ decomposition, \(G\langle\Omega\rangle\) of \(G\) using only \(r+p\) matrix-vector products with it. Since \(G\) is Hermitian, this is comprised of the following steps [16]: 1. Draw a Gaussian test matrix \(\Omega\in\mathbb{R}^{n\times(r+p)}\) where \(r\) is the desired rank and \(p\) is an oversampling parameter to form \(Y=G\Omega\). 2. Compute an orthonormal basis \(\Theta\) of \(\operatorname{range}Y\), e.g., using a QR decomposition of \(Y\). 3. Form \(C=\Theta^{*}G\Theta\) and compute an eigenvalue decomposition \(C=\tilde{U}\Pi\tilde{U}^{*}\). If a rank \(r\) approximation is required, truncate accordingly. 4. Compute \(U=\Theta\tilde{U}\), then set \[G\langle\Omega\rangle=U\Pi U^{*}.\] (25) This produces a practical preconditioner \[\widehat{S}\langle\Omega\rangle=Q(I+G\langle\Omega\rangle)Q^{*}. \tag{26}\] We denote by \(C_{X}\) the cost of computing matrix-vector products with \(X\). 
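A minimal Python sketch of steps 1-4 above, together with the application of the resulting preconditioner, might look as follows. It only accesses \(G\) through a user-supplied block matrix-vector product, and it applies \(\widehat{S}\langle\Omega\rangle^{-1}\) with the Woodbury identity so that the low-rank part costs \(O(nr)\) per application on top of two solves with the factor \(Q\). The function names, the dense solves with \(Q\), and the small usage example are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def randomized_psd_approx(matvec, n, r, p=5, rng=None):
    """Steps 1-4 for a Hermitian PSD matrix accessed only through block
    matrix-vector products; returns U, pi with G<Omega> = U @ np.diag(pi) @ U.T."""
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((n, r + p))        # 1. Gaussian test matrix (p = oversampling)
    Y = matvec(Omega)                              #    Y = G Omega
    Theta, _ = np.linalg.qr(Y)                     # 2. orthonormal basis of range(Y)
    Ccore = Theta.T @ matvec(Theta)                # 3. small (r+p) x (r+p) core matrix
    pi, U_tilde = np.linalg.eigh(Ccore)
    pi, U_tilde = pi[::-1][:r], U_tilde[:, ::-1][:, :r]   #    truncate to rank r
    return Theta @ U_tilde, pi                     # 4. U = Theta U_tilde

def apply_inverse_preconditioner(x, Q, U, pi):
    """Apply S_hat<Omega>^{-1} x = Q^{-T} (I + U diag(pi) U^T)^{-1} Q^{-1} x via the
    Woodbury identity, since U has orthonormal columns."""
    y = np.linalg.solve(Q, x)
    y = y - U @ ((pi / (1.0 + pi)) * (U.T @ y))
    return np.linalg.solve(Q.T, y)

# Tiny usage example with explicit matrices (in practice only matvecs are available)
rng = np.random.default_rng(0)
n, rank_B, r = 80, 30, 10
Q = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Cf = rng.standard_normal((n, rank_B))
B = Cf @ Cf.T
G = np.linalg.solve(Q, np.linalg.solve(Q, B).T)
U, pi = randomized_psd_approx(lambda X: G @ X, n, r, rng=rng)
x = rng.standard_normal(n)
print(apply_inverse_preconditioner(x, Q, U, pi).shape)
```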
The steps involved above and their costs, for a general matrix \(G\), are therefore as follows: * Constructing the test matrix \(\Omega\): \(O(n(r+p))\). * Compute \(Y=G\Omega\) in \(O((C_{B}+C_{Q^{-1}})(r+p))\). * Approximating the range \(\Theta\) means performing a QR decomposition of \(Y\): \(O(n(r+p)^{2})\). This also dominates the cost of the eigenvalue decomposition. The algorithm only computes products with \(G=Q^{-1}BQ^{-*}\), which means the matrix \(G\) is never formed explicitly. We often have additional structure so that \(C_{Q}\) and \(C_{Q^{-1}}\) are linear in \(n\). We do not need to form \(\widehat{S}\langle\Omega\rangle\) or \(G\langle\Omega\rangle\) in (25) explicitly as we only need the action \(x\mapsto\widehat{S}\langle\Omega\rangle^{-1}x\), from which we deduce that an application of the \(\widehat{S}\langle\Omega\rangle\) is \(O(C_{Q^{-1}}+nr)\) for a rank \(r\) approximation. ### Bounds for Randomised Low-rank Approximation In practice, \(\widehat{S}\langle\Omega\rangle\neq\widehat{S}_{r}\), which means that results such as Theorem 1 no longer apply. The following theorem estimates the error in the Bregman divergence: **Theorem 5** (Expected suboptimality of \(\widehat{S}\langle\Omega\rangle\)).: _Let \(\widehat{S}_{r}\) be a minimiser of (9) and \(\widehat{S}\langle\Omega\rangle\) be given by (26) where \(\operatorname{rank}G\langle\Omega\rangle=r+p\leq\operatorname{rank}B\) and the oversampling parameter satisfies \(p\geq 2\). Then,_ \[\operatorname{\mathds{E}}\left[\mathcal{D}_{\operatorname{LD}}(\widehat{S} \langle\Omega\rangle,S)\right]-\mathcal{D}_{\operatorname{LD}}(\widehat{S}_{r},S)\leq 2\epsilon c_{r}, \tag{27}\] _where_ \[\epsilon=\sqrt{\left(1+\frac{r}{p-1}\right)\sum_{j=r+1}^{n}\lambda_{j}(G)^{2} },\qquad c_{r}=\|(I+G)^{-1}\|_{F}+r(1+\lambda_{\operatorname{rank}B}(G))^{-1}.\] _Furthermore, the relative error estimate holds:_ \[\frac{\operatorname{\mathds{E}}\left[\mathcal{D}_{\operatorname{ LD}}(\widehat{S}\langle\Omega\rangle,S)\right]-\mathcal{D}_{\operatorname{LD}}( \widehat{S}_{r},S)}{\mathcal{D}_{\operatorname{LD}}(\widehat{S}_{r},S)}\] \[\leq 2c_{r}\sqrt{\frac{(1+\frac{r}{p-1})}{n-r}}\frac{\lambda_{r+ 1}(G)}{\frac{1}{(1+\lambda_{\operatorname{rank}B}(G)}-\log(\frac{1}{1+\lambda _{\operatorname{rank}B}(G)})-1)}.\] Proof.: We start by noting that \[\operatorname{\mathds{E}}\left[\mathcal{D}_{\operatorname{LD}}( \widehat{S}\langle\Omega\rangle,S)-\mathcal{D}_{\operatorname{LD}}(\widehat{ S}_{r},S)\right] =\operatorname{\mathds{E}}\left[\mathcal{D}_{\operatorname{LD}}(I+G \langle\Omega\rangle,I+G)-\mathcal{D}_{\operatorname{LD}}(I+G_{r},I+G)\right]\] \[=\operatorname{\mathds{E}}\left[\operatorname{trace}\left((G \langle\Omega\rangle-G_{r})(I+G)^{-1}\right)\right]\] \[\qquad+\operatorname{\mathds{E}}\left[\log\det(I+G_{r})-\log \det(I+G\langle\Omega\rangle)\right]\] \[=T_{1}+T_{2}.\] We have the following bound for \(T_{1}\): \[T_{1} \leq\|(I+G)^{-1}\|_{F}\operatorname{\mathds{E}}\left[\|G\langle \Omega\rangle-G_{r}\|_{F}\right]\] \[\leq\|(I+G)^{-1}\|_{F}\operatorname{\mathds{E}}\left[\|G\langle \Omega\rangle-G\|_{F}\right].\] Next, we treat \(T_{2}\): \[T_{2} =\mathrm{E}\left[\log\det\left((I+G_{r})(I+G\langle\Omega\rangle)^{- 1}\right)\right]\] \[=\mathrm{E}\left[\log\frac{\prod_{1\leq i\leq n}(1+\lambda_{i}(G_{ r}))}{\prod_{1\leq i\leq n}(1+\lambda_{i}(G\langle\Omega\rangle))}\right]\] \[\leq\mathrm{E}\left[\sum_{1\leq i\leq r}|\log(1+\lambda_{i}(G_{ r}))-\log(1+\lambda_{i}(G\langle\Omega\rangle))|\right].\] Since \(\log\) is Lipschitz on 
\([x,\infty)\) with Lipschitz constant \(x^{-1}\), \[T_{2}\leq(1+\lambda_{\mathrm{rank}\,B}(G))^{-1}\mathrm{E}\left[\sum_{1\leq i\leq r}|\lambda_{i}(G_{r})-\lambda_{i}(G\langle\Omega\rangle)|\right].\] For \(X,Y\in\mathbb{H}_{+}^{n}\), we know that \[\lambda_{i+j-1}(X+Y)\leq\lambda_{i}(X)+\lambda_{j}(Y),\quad 1\leq i,j\leq n, \quad i+j-1\leq n, \tag{28}\] see, e.g., [14, Theorem 3.3.16]. Choosing \(X=G-G\langle\Omega\rangle\), \(Y=G\langle\Omega\rangle\), and setting \(i=1\) in (28), we obtain \[|\lambda_{j}(G)-\lambda_{j}(G\langle\Omega\rangle)|\leq\lambda_{1}(G-G\langle\Omega\rangle).\] Since \(\lambda_{1}(G-G\langle\Omega\rangle)=\|G-G\langle\Omega\rangle\|_{2}\leq\|G-G\langle\Omega\rangle\|_{F}\), we conclude that \[T_{2}\leq r(1+\lambda_{\mathrm{rank}\,B}(G))^{-1}\mathrm{E}\left[\|G-G\langle\Omega\rangle\|_{F}\right].\] Combining the results for \(T_{1}\) and \(T_{2}\), we get \[\mathrm{E}\left[\mathcal{D}_{\mathrm{LD}}(\widehat{S}\langle\Omega\rangle,S)-\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S)\right]\leq(\|(I+G)^{-1}\|_{F}+r(1+\lambda_{\mathrm{rank}\,B}(G))^{-1})\mathrm{E}\left[\|G\langle\Omega\rangle-G\|_{F}\right].\] Thanks to [12, Theorem 10.5], \[\mathrm{E}\left[\|G-\Theta\Theta^{*}G\|_{F}\right]\leq\epsilon=\sqrt{(1+\frac{r}{p-1})\sum_{j=r+1}^{n}\lambda_{j}(G)^{2}},\] so \(\mathrm{E}\left[\|G-G\langle\Omega\rangle\|_{F}\right]\leq 2\epsilon\). For the relative error, recall (11) to notice that \[\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S) \geq(n-r)(\frac{1}{1+\lambda_{\mathrm{rank}\,B}(G)}-\log(\frac{1}{1+\lambda_{\mathrm{rank}\,B}(G)})-1)\] \[\epsilon \leq\sqrt{(1+\frac{r}{p-1})(n-r)(\lambda_{r+1}(G))^{2}}=\sqrt{(1+\frac{r}{p-1})(n-r)}\lambda_{r+1}(G).\] To achieve a desired accuracy \(\epsilon\) such that \[\|G-\Theta\Theta^{*}G\Theta\Theta^{*}\|_{2}\leq 2\epsilon,\] it is observed that a small increase \(p\) in the number of columns of the test matrix \(\Omega\) is necessary depending on the structure of \(G\), see [12, Theorem 10.6] for details. We can also obtain a deviation bound: **Theorem 6**.: _Let the hypotheses of Theorem 5 hold and assume \(p\geq 4\). Then, for all \(u,t\geq 1\),_ \[\mathcal{D}_{\mathrm{LD}}(\widehat{S}\langle\Omega\rangle,S)-\mathcal{D}_{\mathrm{LD}}(\widehat{S}_{r},S)\leq 2c_{r}\left[\left(1+t\sqrt{\frac{12r}{p}}\right)\sqrt{\sum_{j=r+1}^{n}\lambda_{j}(G)^{2}}+ut\frac{e\sqrt{r+p}}{p+1}\lambda_{r+1}(G)\right],\] _holds with probability at least \(1-5t^{-p}-2e^{-u^{2}/2}\), where \(c_{r}\) is defined in Theorem 5._ Proof.: By [12, Theorem 10.7], \[\|G-G\langle\Omega\rangle\|_{F}\leq 2\left[(1+t\sqrt{\frac{12r}{p}})\sqrt{\sum_{j=r+1}^{n}\lambda_{j}(G)^{2}}+ut\frac{e\sqrt{r+p}}{p+1}\lambda_{r+1}(G)\right],\] for any \(u,t\geq 1\). Finally, a _power range_ finder can also be used to improve the accuracy of the approximation, where \(\Theta\) in (19) is computed from a QR decomposition of \[Y=(HH^{*})^{q}H\Omega,\] for some \(q\geq 1\), leading to \[\widehat{S}^{q}\langle\Omega\rangle=Q(I+G^{q}\langle\Omega\rangle)Q^{*}. \tag{29}\] This produces a more accurate approximation at the cost of \(2q\) more products with \(G\) than its simpler (\(q=0\)) counterpart [12, Algorithm 4.3]. It has been observed [9, 12] that \(G^{\text{Nys}}\langle\Theta\rangle\), where \(\Theta\) is obtained from a QR decomposition of \(G\Omega\), is a considerable improvement over \(G^{\text{Nys}}\langle\Omega\rangle\) with a computational cost that compares to the randomised SVD, \(G\langle\Omega\rangle\). 
This is essentially because we perform a power iteration: \[(G\Theta)(\Theta^{*}G\Theta)^{-1}(G\Theta)^{*}=(G^{2}\Omega)(\Omega^{*}G^{3}\Omega)^{-1}(G^{2}\Omega)^{*}.\] See also [18] for a generalisation based on this observation. In Section 6, we therefore evaluate the preconditioners above as well as the scaled and nonscaled Nystrom preconditioners \[\widehat{S}^{\text{Nys}}\langle\Theta\rangle =Q(I+G^{\text{Nys}}\langle\Theta\rangle)Q^{*}, \tag{30a}\] \[\tilde{S}^{\text{Nys}}\langle\Theta\rangle =A+B^{\text{Nys}}\langle\Theta\rangle, \tag{30b}\] where in (30b), \(\Theta\) is obtained from a QR decomposition of \(B\Omega\). ### Single View Approach We now look at a _single view_ approach to the randomised SVD, where we access the matrix we wish to approximate _only once_ [12, Section 5.5]. The algorithm is similar to the one in the preceding section, where the range is approximated via \(\Theta\) such that \(\|G-\Theta\Theta^{*}G\Theta\Theta^{*}\|_{2}\) is below some tolerance. Suppose \(G\langle\Omega\rangle\) is of the form \[G\langle\Omega\rangle=\Theta\Pi\Theta^{*}, \tag{31}\] where \(\Pi=\Theta^{*}G\Theta\) with no oversampling being used. Multiplying both sides of \(\Pi=\Theta^{*}G\Theta\) on the right by \(\Theta^{*}\Omega\) yields a system of equations for \(\Pi\): \[\Pi\Theta^{*}\Omega=\Theta^{*}\underbrace{G\Theta\Theta^{*}}_{\approx G}\Omega\approx\Theta^{*}G\Omega. \tag{32}\] \(\Theta^{*}\Omega\) is invertible with high probability, so solving the system (32) for \(\Pi\) is \(O(r^{3})\). Provided \(B\) also admits affordable products, then computing \(\Theta\) and \(\Pi\) is \(O((C_{B}+C_{Q^{-1}})r+r^{3})\). A Hermitian solution of (32) is sought, owing to the definition (31), by using least squares with a constraint on \(\Pi\) [16, Figure 2.2]. However, a direct inversion still results in a symmetric solution \(G\langle\Omega\rangle\) in exact arithmetic, and the effect of the approximation step in (32) is explained in the following lemma: **Lemma 2**.: _When (32) holds so \(\Pi=\Theta^{*}G\Omega(\Theta^{*}\Omega)^{-1}\), the single pass randomised SVD (31) produces a Nystrom approximation \(G^{\text{Nys}}\langle\Omega\rangle\)._ Proof.: This follows from direct calculation. Note that \(\Theta=G\Omega R^{-1}\), so expanding \(G\langle\Omega\rangle\), \[G\langle\Omega\rangle =\Theta\Pi\Theta^{*}\] \[=\Theta\Theta^{*}G\Omega(\Theta^{*}\Omega)^{-1}\Theta^{*}\] \[=\Theta\Theta^{*}G\Omega(R^{-*}\Omega^{*}G\Omega)^{-1}\Theta^{*}\] \[=\Theta\Theta^{*}G\Omega(\Omega^{*}G\Omega)^{-1}R^{*}\Theta^{*}\] \[=G\Omega(\Omega^{*}G\Omega)^{-1}(G\Omega)^{*},\] where the last equality holds since \(\operatorname{range}G\Omega=\operatorname{range}\Theta\). This can also be seen since \(R=\Theta^{*}G\Omega\). ## 6 Numerical Experiments ### Experimental Setup We construct \(S=A+B\), where \(A\in\mathbb{H}_{++}^{n}\) and \(B\in\mathbb{H}_{+}^{n}\) is of rank \(m<n\), for use in our numerical experiments. Given an eigendecomposition \(P\Lambda P^{*}\) of \(A\), we have \(S=P(\Lambda+P^{*}BP)P^{*}\), where \(P\) is a unitary matrix. Since \(B\mapsto P^{*}BP\) is just a change of basis, we can, thanks to the properties of the Bregman divergence, limit our consideration to matrices \(S=A+B\), where \(A\) is a diagonal matrix. In this case, solving \(Sx=b\) is equivalent to solving \((\Lambda+P^{*}BP)y=P^{*}b\), where \(y=P^{*}x\). For our experiments, we therefore use the canonical basis for \(P\). 
We vary the spectrum of \(A\) and \(B\) to study the eigenvalues of the preconditioners, the generalised eigenvalues of the preconditioned system, and their respective PCG performance. We select the \(i\)th eigenvalue of \(A\) according to \[l_{A}(i)=\exp\{-(\alpha_{A}i/n-c_{A})^{\beta_{A}}\}+\kappa, \tag{33}\] for positive parameters \(\alpha_{A}\), \(\beta_{A}\), \(c_{A}\), and \(\kappa\). The parameters \(\beta_{A}\), \(\alpha_{A}\) and \(c_{A}\) control the exponential decay of \(A\) and, depending on \(c_{A}\), models a rapid drop-off in eigenvalue as a function of \(i\). The parameter \(\kappa\) is there to bound the eigenvalues of \(A\) away from zero when \(i\) approaches \(n\). We construct \(B=O\Sigma O^{*}\), where the columns of \(O\in\mathbb{R}^{n\times m}\) are computed from a QR decomposition of a matrix whose entries are drawn from the standard Gaussian distribution, and \(\Sigma\) is a diagonal matrix whose \(i\)th element are selected according to: \[l_{B}(i)=\exp\{-(\alpha_{B}i/n-c_{B})^{\beta_{B}}\}. \tag{34}\] We set \(n=1000\), \(m=600\), and \(r=300\) in our experiments. The parameters used for \(A\) and \(B\) are shown in Table 1. Next, we evaluate and compare the preconditioners in Table 2 for each combination of \(A\) and \(B\). We do not use any oversampling (i.e. \(p=0\)) to develop an understanding of the preconditioners in the simplest setting possible. In our randomised power range approximations \(\widehat{S}^{q}\langle\Omega\rangle\) and \(\widehat{S}^{q}\langle\Omega\rangle\), we use \(q=2\). The Python package scaled_preconditioners1 contains implementations of the preconditioners listed in the "Scaled" column of Table 2. We have used MATLAB R2022b [21] to generate the results shown below. Footnote 1: Available at [https://github.com/andreasbock/scaled_preconditioners](https://github.com/andreasbock/scaled_preconditioners). ### Generalised Eigenvalues and Performance of Iterative Methods For each combination of the spectra depicted in Figure 2, we generate 25 random instances of \(O\) and \(b\in\mathbb{R}^{n}\) and study, for each of the preconditioners \(\mathcal{P}\) in Table 2, \begin{table} \begin{tabular}{|c|c|c|c|c|l|} \hline **Matrix** & \(\alpha\) & \(c\) & \(\beta\) & \(\kappa\) & **Description** \\ \hline \(A\) & \(0\) & \(0\) & \(0\) & \(0.7\) & flat \\ & \(3.5\) & \(0\) & \(1\) & \(0.05\) & exponential \\ & \(4\) & \(0.3\) & \(4.5\) & \(0.05\) & drop-off and fast decay \\ & \(2\) & \(0.25\) & \(4.5\) & \(0.05\) & drop-off at \(n/2\) \\ \hline \(B\) & \(3\) & \(0\) & \(1\) & \(0\) & exponential \\ & \(2.5\) & \(0.55\) & \(4.7\) & \(0\) & slow decay \\ \hline \end{tabular} \end{table} Table 1: Parameters used to generate eigenvalues of \(A\) and \(B\). Figure 2: Different choices of spectra for \(A\) and \(B\) using the parameters in table 1. * the eigenvalues \(\mathcal{P}^{-1}S\), * the value of the associated Bregman divergence \(\mathcal{D}_{\mathrm{LD}}(\mathcal{P},S)\), * and the convergence behaviour of the preconditioned conjugate gradient method for the solution of \(Sx=b\) with a tolerance of \(10^{-10}\). Overall, we observed negligible variance in the generalised eigenvalues and PCG convergence as a function of \(O\) and \(b\). As a result, we only present results for one instance of \(O\) and \(b\). The Bregman divergence values are presented as a box plot, which shows that the variation across the 25 problem instances is insignificant. 
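To make the setup concrete, the following sketch assembles one combination from Table 1 (the "exponential" rows for both \(A\) and \(B\)), forms the scaled preconditioner \(\widehat{S}_{r}\) exactly, and counts PCG iterations with SciPy. It is an illustration under stated assumptions rather than the authors' MATLAB or scaled_preconditioners code: the indexing \(i=1,\ldots,m\) in (34), the random seed, and the use of SciPy's default convergence tolerance (rather than the \(10^{-10}\) used in the paper) are choices made here.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

n, m, r = 1000, 600, 300
rng = np.random.default_rng(0)

# Spectra from (33)-(34) using the "exponential" rows of Table 1 (one of several combinations)
i = np.arange(1, n + 1)
lam_A = np.exp(-(3.5 * i / n) ** 1.0) + 0.05
A = np.diag(lam_A)                                   # canonical basis for P

O, _ = np.linalg.qr(rng.standard_normal((n, m)))     # orthonormal columns for B = O Sigma O^T
sigma = np.exp(-(3.0 * np.arange(1, m + 1) / n) ** 1.0)
B = (O * sigma) @ O.T
S = A + B

# Exact scaled preconditioner S_hat_r with Q = A^{1/2} (diagonal here)
q = np.sqrt(lam_A)
G = B / np.outer(q, q)
w, V = np.linalg.eigh((G + G.T) / 2)
G_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T
S_hat = (np.eye(n) + G_r) * np.outer(q, q)           # Q (I + G_r) Q

chol = cho_factor(S_hat)
M = LinearOperator((n, n), matvec=lambda x: cho_solve(chol, x), dtype=float)
b = rng.standard_normal(n)
iterations = []
x, info = cg(S, b, M=M, maxiter=2000, callback=lambda xk: iterations.append(1))
print(info, len(iterations), np.linalg.norm(S @ x - b))
```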
When the spectrum of \(A\) is flat, Corollary 3 tells us the preconditioners \(\widehat{S}_{r}\) and \(\tilde{S}_{r}\) (and their respective variants) are identical since \(G_{r}=\lambda_{A}^{-1}B_{r}\). This is confirmed by the results seen in Figure 3, where the results differ only as a result of the spectrum of \(B\). As we expect, the randomised power range preconditioners are almost as good as \(\widehat{S}_{r}=\tilde{S}_{r}\) in terms of both generalised eigenvalues and PCG performance, with the results for the Nystrom approximation being slightly worse. The standard randomised SVD can be seen to be the worst choice in all cases. Analysing the Bregman divergence values leads to the same conclusion. We conduct the same experiments for three other choices of \(A\) seen in Figure 2, namely a slowly decaying spectrum and two spectra with rapid drop-off in the magnitude of the eigenvalues around \(n/2\) and at the beginning of the spectra, respectively. Figure 4 depicts the results for the simple exponential decay of the eigenvalues of \(A\) where we observe that by using the preconditioner \(\widehat{S}_{r}\), PCG requires fewer iterations to achieve convergence for both instances of \(B\), with the randomised power range preconditioner \(\widehat{S}^{q}\langle\Omega\rangle\) almost achieving the same convergence. This similarity is also seen in terms of the generalised eigenvalues. The results here are aligned with Theorem 1, which suggests that finding the correct basis for a preconditioner as well as matching the eigenvalues of \(S\) has an impact on PCG convergence. Moreover, the Bregman divergence values for this experiment appear to be a useful indicator of the PCG performance of the preconditioner, i.e., the smaller the divergence, the better the preconditioner performs. In the left column of Figure 4, we see that \(\tilde{S}_{r}\) (and \(\tilde{S}^{q}\langle\Omega\rangle\), by a small margin) leads to faster PCG convergence than \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\), while the opposite is true when looking at the right column. The difference between the two settings is the spectrum of \(B\), which on the left decays exponentially while on the right it is mostly flat. In the latter case, the spectrum of \(G\) is more easily captured with an arbitrary sketching matrix, whereby the advantage of approximating \(G\) is more profound, suggesting perhaps an effect similar to what was observed for the contrived diagonal examples in Section 2. As before, we see that the Bregman divergence values correlate with PCG performance. This evidence supports the close relationship between the Bregman divergence and the Nystrom approximation for preconditioning. Interestingly, while \(\widehat{S}\langle\Omega\rangle\) does not appear to be a good choice, our results for \(\widehat{S}^{q}\langle\Omega\rangle\) show that a few power iterations can be used to great effect. It is instructive to look at Figures 5 and 6 together, where the decay of \(A\) is rapid but occurs at different points in the spectrum. By comparing these four cases of \(A\) and \(B\), we can develop some insight into how the sketching with \(\Omega\) without any power iterations affects the results and how the randomised SVD approximations compare to the Nystrom analogues. As before, \(\widehat{S}_{r}\) and \(\widehat{S}^{q}\langle\Omega\rangle\) are superior choices whereas \(\widehat{S}\langle\Omega\rangle\) and \(\tilde{S}\langle\Omega\rangle\) lead to the worst convergence. 
Here, the main difference in the results is the PCG convergence and associated generalised eigenvalues using \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\), \(\tilde{S}_{r}\), and \(\tilde{S}^{q}\langle\Omega\rangle\). When the spectrum of \(A\) and \(B\) is mostly flat (recall \(\mathrm{rank}\ B=300\) here), such as in the right column of Figure 6, PCG performs best using \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\) as a preconditioner. Since we expect the approximation error owing to randomisation to be less pronounced, we see the benefits of the Nystrom approximation over the randomised SVD preconditioner. \begin{table} \begin{tabular}{|l|l|l|} \hline **Approximation** & **Scaled** & **No scaling** \\ \hline Truncated SVD & \(\widehat{S}_{r}\) (eq. (2b)) & \(\tilde{S}_{r}\) (eq. (2a)) \\ Randomised SVD & \(\widehat{S}\langle\Omega\rangle\) (eq. (26)) & \(\tilde{S}\langle\Omega\rangle:=A+B\langle\Omega\rangle\) \\ Rand. power range SVD & \(\widehat{S}^{q}\langle\Omega\rangle\) (eq. (29)) & \(\tilde{S}^{q}\langle\Omega\rangle:=A+B^{q}\langle\Omega\rangle\) \\ Nyström & \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\) (eq. (30a)) & \(\tilde{S}^{\mathrm{Nys}}\langle\Theta\rangle\) (eq. (30b)) \\ \hline \end{tabular} \end{table} Table 2: The preconditioners and definitions used in our numerical experiments. In other words, the approximation of the \(G\) matrix is less dependent on the realisation of the sketching matrix. A similar but less pronounced effect is seen in the right column of Figure 5. This is reflected in the Bregman divergence, where \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\) is closer to \(S\) in this nearness measure than \(\widehat{S}_{r}\) or its variations. Some caution must be exercised when using the Bregman divergence as a measure of quality. As an example, in Figure 5 we see that the scaled Nystrom preconditioner \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\) is closer to \(S\) than \(\widehat{S}_{r}\) in terms of divergence, but does not produce faster PCG convergence. This again suggests great care must be taken when using practical approximations of \(S\) using the scaling approach combined with randomisation. The left columns of Figures 5 and 6 use a \(B\) whose spectrum is exponentially decaying for different choices of \(A\). When the spectrum of \(A\) decays rapidly (Figure 5), the scaling approach does not greatly impact the convergence results, since the effect of sketching is dominant. In other words, it is less likely that \(\Omega\) sketches the right part of the spectrum of \(G\) and \(\widehat{S}_{r}\) outperforms \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\). This approximation error is not as dominant in the left column of Figure 6 where the spectrum of \(A\) decays slowly. Overall, these results suggest that while \(\widehat{S}_{r}\) is at least as good a preconditioner as \(\tilde{S}_{r}\) (and in all nontrivial cases, better), practical randomised numerical approximations thereof must be carried out with great care since we see that \(\widehat{S}\langle\Omega\rangle\) and \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\) can be outperformed by \(\tilde{S}_{r}\) depending on the spectrum of \(A\) and \(B\). In general, it appears that \(\widehat{S}^{q}\langle\Omega\rangle\) is the best choice when the computation of \(\widehat{S}_{r}\) is not an option. 
Overall, the Bregman divergence appears to be a useful tool in suggesting how a preconditioner can perform in practice, in particular over simply observing the generalised eigenvalues. A result such as Theorem 1 supports this analysis. We study the effect of sketching in more detail in Section 6.3. ### Effect of the Low Rank Approximation on the Bregman Divergence To reinforce the intuition about the Bregman divergence, we conclude by examining more closely how its terms are affected by the different low-rank approximations described previously. For simplicity, we now only consider the spectrum of \(A\) and \(B\) used in the left column of Figure 6, and set \(n=100\), \(m=60\), and \(r=30\). Recall the definition in (8), where \(X=V\operatorname{diag}(\lambda)V^{*}\) and \(Y=U\operatorname{diag}(\theta)U^{*}\): \[\mathcal{D}_{\mathrm{LD}}(X,Y)=\sum_{i=1}^{n}\sum_{j=1}^{n}(v_{i}^{*}u_{j})^{ 2}\Big{(}\frac{\lambda_{i}}{\theta_{j}}-\log\frac{\lambda_{i}}{\theta_{j}}-1 \Big{)}.\] Figures 7 and 8 show the terms of the Bregman divergence \(\mathcal{D}_{\mathrm{LD}}(\mathcal{P},S)\) for each approximation \(\mathcal{P}\) of \(S\). The right column in these figures shows that there is little difference between the terms \(\frac{\lambda_{i}}{\theta_{j}}-\log\frac{\lambda_{i}}{\theta_{j}}-1\) in the definition of the divergence. Looking at the left columns of these figures suggests that the main contributing factor to the observed difference in Bregman divergence values in Figure 6 lies in the approximation of the basis for \(S\), expressed by the term \((v_{i}^{*}u_{j})^{2}\) in the expression above. In fact, we see that \(\widehat{S}_{r}\) matches the first eigenvectors of \(S\) perfectly by definition, and that the first \(r\) eigenvectors of both \(\widehat{S}^{q}\langle\Omega\rangle\) and \(\widehat{S}\langle\Omega\rangle\) are almost collinear with those of \(S\). This is not quite the case for \(\widehat{S}^{\mathrm{Nys}}\langle\Theta\rangle\), where we see the effect of the Nystrom approximation. For the nonscaled preconditioners, we see in the left column that these approximations do not capture the basis in which the eigenvalues of \(S\) are expressed, which is aligned with the PCG results seen in the left column of Figure 6. ## 7 Summary & Outlook In this paper, we have presented several preconditioners for solving \(Sx=b\) where the matrix \(S\) is a sum of \(A\in\mathbb{H}_{++}^{n}\) and \(B\in\mathbb{H}_{+}^{n}\). The first proposed preconditioner \(\widehat{S}_{r}\), which is chosen as a sum of \(A\) and a low-rank matrix, is the minimiser of a Bregman divergence. We have also shown how this preconditioner can be recovered from a scaled Frobenius norm minimisation problem, and that it is optimal in the sense that it minimises the condition number of the preconditioned matrix for preconditioners of this form, i.e., positive definite plus low rank. We have also established a link between the Bregman divergence and the Nystrom approximation. The equivalence between _single pass_ randomised SVD and the Nystrom approximation was also, to the best of the authors' knowledge, not documented before. Our numerical experiments illustrate how our theoretical results vary for different choices of \(A\) and \(B\), and for different practical choices of low-rank approximations, i.e., randomised, randomised with power iterations, and a randomised Nystrom approximation.

Figure 3: Results for a flat spectrum of \(A\).

Figure 4: Results for an exponential decay of the spectrum of \(A\).
Figure 5: Results for the fast decay of the eigenvalues of \(A\) near \(n/4\). Note that PCG failed to converge to the desired tolerance for both \(\widehat{S}\langle\Omega\rangle\) and \(\tilde{S}\langle\Omega\rangle\).

Figure 6: Results for the fast decay of the eigenvalues of \(A\) near \(n/2\).

Figure 8: Bregman divergence terms for the scaled preconditioners. Top to bottom: \(\widehat{S}_{r}\), \(\widehat{S}\langle\Omega\rangle\), \(\widehat{S}^{q}\langle\Omega\rangle\), \(\widehat{S}^{\text{Nys}}\langle\Theta\rangle\).

The work in this paper offers many avenues for future research. The invariance property of the Bregman divergence has been shown to be valuable in interpreting the quality of the different preconditioners studied in this work. Understanding the choice of splitting of the matrix \(S\) discussed in Section 3.1 could help in developing approximation strategies, for instance when the positive definite part of \(S\) is not easily factorisable. In [13], the authors developed a preconditioner based on a low-accuracy LU factorisation for ill-conditioned systems, which could yield insights in this direction. Moreover, in the present work, \(\widehat{S}_{r}\) is selected as the sum of \(A\) and a low-rank matrix. We could also investigate preconditioners of other forms, such as hierarchical matrices, leading to different constraints in a Bregman divergence minimisation problem such as in Section 3.2. It could also be fruitful to see to what extent the framework introduced here generalises existing approaches, e.g., [6]. We also aim to apply the general framework to specific problems in Gaussian process regression [22] and variational data assimilation. The link between the Bregman divergence and the Nystrom approximation may also help to explain why this approximation is suitable for a given application since, as we saw in Section 6, it always appears to improve PCG convergence over a randomised SVD. However, if the computational budget of a given application allows for a randomised power range approximation, this effort appears to be well compensated in terms of PCG performance. Other divergences commonplace in information geometry could also be interesting to analyse, e.g., the Itakura-Saito divergence or the divergence associated with the negative Shannon entropy. Deeper connections could be sought between this domain and other problems in numerical linear algebra beyond preconditioning.
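As a closing practical aside (an illustration of ours, not part of the paper): applying a preconditioner of the form "\(A\) plus low rank" inside PCG does not require forming its inverse explicitly; a solve with \(A\) and a small \(r\times r\) factorisation suffice via the Woodbury identity. The sketch below assumes the low-rank term is available in factored form \(UU^{*}\) and uses SciPy; all names are illustrative.

```python
# Sketch: applying M^{-1} for M = A + U U^T inside scipy's PCG.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def woodbury_preconditioner(A, U):
    """LinearOperator applying (A + U U^T)^{-1} via the Woodbury identity."""
    n, r = U.shape
    A_cho = cho_factor(A)                      # in practice A may admit a cheaper solve
    AiU = cho_solve(A_cho, U)                  # A^{-1} U
    core = cho_factor(np.eye(r) + U.T @ AiU)   # I_r + U^T A^{-1} U

    def apply(v):
        Aiv = cho_solve(A_cho, v)
        return Aiv - AiU @ cho_solve(core, U.T @ Aiv)

    return LinearOperator((n, n), matvec=apply)

# Usage sketch: x, info = cg(S, b, M=woodbury_preconditioner(A, U))
```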
2306.04179
Photon Reconstruction in the Belle II Calorimeter Using Graph Neural Networks
We present the study of a fuzzy clustering algorithm for the Belle II electromagnetic calorimeter using Graph Neural Networks. We use a realistic detector simulation including simulated beam backgrounds and focus on the reconstruction of both isolated and overlapping photons. We find significant improvements of the energy resolution compared to the currently used reconstruction algorithm for both isolated and overlapping photons of more than 30% for photons with energies E < 0.5 GeV and high levels of beam backgrounds. Overall, the GNN reconstruction improves the resolution and reduces the tails of the reconstructed energy distribution and therefore is a promising option for the upcoming high luminosity running of Belle II.
F. Wemmer, I. Haide, J. Eppelt, T. Ferber, A. Beaubien, P. Branchini, M. Campajola, C. Cecchi, P. Cheema, G. De Nardo, C. Hearty, A. Kuzmin, S. Longo, E. Manoni, F. Meier, M. Merola, K. Miyabayashi, S. Moneta, M. Remnev, J. M. Roney, J. -G. Shiu, B. Shwartz, Y. Unno, R. van Tonder, R. Volpe
2023-06-07T06:23:12Z
http://arxiv.org/abs/2306.04179v2
# Photon Reconstruction in the Belle II Calorimeter ###### Abstract We present the study of a fuzzy clustering algorithm for the Belle II electromagnetic calorimeter using Graph Neural Networks. We use a realistic detector simulation including simulated beam backgrounds and focus on the reconstruction of both isolated and overlapping photons. We find significant improvements of the energy resolution compared to the currently used reconstruction algorithm for both isolated and overlapping photons of more than \(30\,\mathrm{\char 37}\) for photons with energies \(E_{\gamma}<0.5\) GeV and high levels of beam backgrounds. Overall, the GNN reconstruction improves the resolution and reduces the tails of the reconstructed energy distribution and therefore is a promising option for the upcoming high luminosity running of Belle II. **Keywords:** calorimeter, photon reconstruction, overlapping clusters, high background, fuzzy clustering, machine learning, deep learning, graph neural networks, end-to-end representation spaces ## 1 Introduction The Belle II experiment is located at the high-intensity, asymmetric electron-positron-collider SuperKEKB in Tsukuba, Japan. SuperKEKB is colliding \(4\,\mathrm{\,Ge\kern-0.8ptV}\) positron and \(7\,\mathrm{\,Ge\kern-0.8ptV}\) electron beams at a center-of-mass energy of around \(10.58\,\mathrm{\,Ge\kern-0.8ptV}\) to search for rare meson decays and new physics phenomena. Many of these decays include photons in the final state that are reconstructed exclusively in the electromagnetic calorimeter. The experimental program of Belle II targets a significantly increased instantaneous luminosity that ultimately exceeds the predecessor experiment by a factor of 30. This increase in luminosity also leads to a significant increase in beam-induced backgrounds [1]. These background processes produce both high-energy particle interactions that could be misidentified as physics signals, but also energy depositions of low-energy particles that degrade the energy resolution of the electromagnetic crystal calorimeter. In this paper, we describe a fuzzy clustering algorithm based on Graph Neural Networks (GNNs). We then study the reconstruction of photon energies in simulated events using the GNN clustering algorithm. The term fuzzy clustering [2] refers to the partial assignment of individual calorimeter crystals to several clustering classes. In our case, these are potentially overlapping, different signal photons, but also a beam background class. The paper is organized as follows: Section 2 gives an overview of related work on Machine Learning for calorimeter reconstruction. Section 3 describes the Belle II electromagnetic calorimeter. The event simulation and details of the beam background simulation are discussed in Section 4. The conventional Belle II reconstruction algorithm and the new GNN algorithm are described in Section 5. We introduce the metrics used to measure the performance of the GNN algorithm in Section 6. The main performance studies and results are discussed in Section 7. We summarize our results in Section 8. ## 2 Related work Machine Learning is widely used in high energy physics for the reconstruction of calorimeter signals both for clustering [3, 4], energy regression [5, 6], but also particle identification [7, 8] and fast simulation [9, 10]. Most of the recent work has been performed in the context of the high-granularity calorimeter (HGCAL) at CMS [11, 12]. 
For Belle II, the use of machine learning utilizing the electromagnetic calorimeter is so far limited to image-based particle identification in the barrel [8, 13]. GNNs are now widely recognized as one possible solution for irregular geometries in high energy physics [14]. GNN architectures that are able to learn a latent space representation of the detector geometry itself [15, 16] are the basis of the work presented in this paper. Previous work has focused on simplified and idealized detector geometries, often approximated as a regular grid of readout cells expressed as 2D or 3D images. Additionally, the presence of geometry changes and overlaps between barrel and endcap regions, large variations of cell sizes, and the presence of very high spatially non-uniform noise levels induced by beam background energy depositions are neglected. ## 3 The Belle II Electromagnetic Calorimeter The Belle II detector consists of several subdetectors arranged around the beam pipe in a cylindrical structure that is described in detail in Ref. [17, 18]. We define the \(z\)-axis of the laboratory frame as the central axis of the solenoid. The positive direction is pointing in the direction of the electron beam. The longitudinal direction, the transverse plane with azimuthal angle \(\phi\), and the polar angle \(\theta\) are defined with respect to the detector's solenoidal axis. The Belle II electromagnetic calorimeter (ECL) consists of 8736 Thallium-doped CsI (CsI(Tl)) crystals that are grouped in a forward endcap, covering a polar angle \(12.4^{\circ}<\theta<31.4^{\circ}\), a barrel, covering a polar angle \(32.2^{\circ}<\theta<128.7^{\circ}\), and a backward endcap, covering a polar angle \(130.7^{\circ}<\theta<155.1^{\circ}\). The crystals have a trapezoidal geometry with a nominal cross-sectional area of approximately \(6\times 6\) cm\({}^{2}\) and a length of 30 cm, providing 16.1 radiation lengths of material. While crystals in the barrel are similar in cross-section and shape, the crystals in the endcaps vary with masses between \(4.03\,\mathrm{kg}\) and \(5.94\,\mathrm{kg}\)[19]; crystals in the endcaps also have significantly more passive material in front of the crystals. Each crystal is aligned in the direction of the collision point with a small tilt in polar angle \(\theta\). Crystals in the barrel additionally have a small tilt in azimuthal angle \(\phi\). The scintillation light produced in the CsI(Tl) crystals is read out by two photodiodes glued to the back of each crystal. After shaping electronics, the waveform is digitized and the crystal energy \(E_{\mathrm{rec}}^{\mathrm{crystal}}\) over baseline and time \(t_{\mathrm{rec}}^{\mathrm{crystal}}\) since trigger time of the energy deposition are reconstructed online using FPGAs [20]. Waveforms of crystals with energy depositions above \(50\,\mathrm{MeV}\) are stored for offline processing to allow for electromagnetic vs. hadronic shower identification through pulse shape discrimination (PSD) [21]. Available information from PSD is * the fit type ID of a multi-template fit indicating which of the possible templates provides the best goodness-of-fit, * the respective \(\chi^{2}\) value as an indicator of the goodness-of-fit, * and the ratio of reconstructed hadronic and photon template energies, referred to as PSD hadronic energy ratio in the following. ## 4 Data Set In this work, we use simulated events to train and evaluate the reconstruction algorithms. 
The detector geometry and interactions of final-state particles with detector materials are simulated using Geant4[22] combined with a dedicated detector response simulation. Simulated events are reconstructed and analyzed using the Belle II Analysis Software Framework (basf2) [23, 24]. We simulate isolated photons, with energy \(0.1<E_{\mathrm{gen}}<1.5\,\mathrm{GeV}\), and direction \(17^{\circ}<\theta_{\mathrm{gen}}<150^{\circ}\) and \(0^{\circ}<\phi_{\mathrm{gen}}<360^{\circ}\) drawn randomly from independent uniform distributions in \(E\), \(\theta\), and \(\phi\). The generation vertex of the photons is \(x=0\), \(y=0\), and \(z=0\). For events with two overlapping photons, we first draw randomly one photon with independent uniform distributions as outline above. We then simulate a second photon with an angular separation \(2.9<\Delta\alpha<9.7\,^{\circ}\) drawn randomly from uniform distributions in \(\Delta\alpha\) and in \(E\). This angular separation covers approximately the distance needed to create two overlapping clusters. As part of the simulation, we overlay simulated beam background events corresponding to different collision conditions to our signal particles [1, 25]. The simulated beam backgrounds correspond to an instantaneous luminosity of \(\mathcal{L}_{\mathrm{beam}}=1.06\times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\) (called _low beam background_), and \(\mathcal{L}_{\mathrm{beam}}=8\times 10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\) (called _high beam background_). Those two values approximately correspond to the conditions in 2021, and the expected conditions slightly above the design luminosity, respectively. The spatial distribution of beam backgrounds is asymmetric: They are much higher in the backward endcap than in the forward endcap, and they are slightly higher in the barrel than in the forward endcap. Additional electronics noise per crystal of about \(0.35\,\mathrm{MeV}\) is included in our simulation as well. The supervised training and the performance evaluation both use labeled information that relies on matching reconstructed information with the simulated _truth_ information. For each of the four configurations, isolated and overlapping photons with low and high beam backgrounds, we use 1.8 million events for training and \(200\,000\) events for validation. The performance evaluation is carried out on a large number of statistically independent samples simulated with various energies and in different detector regions. We then study the performance of the GNN clustering algorithm in all four scenarios and compare it to the baseline basf2 reconstruction. Both reconstruction algorithms are described in detail in Sec. 5. ### Isolated Photon To study isolated photons, we use the simulated events with a generated isolated photon only. For each event, we select a region of interest (ROI), defined as the \(9\times 9\) array of crystals centered around a local maximum (LM). With the irregular crystal arrangement in the endcaps, this array is not necessarily regular but rather defined by the 80 nearest neighbors of the LM. The LM is a crystal with at least \(10\,\mathrm{MeV}\) of reconstructed crystal energy, and energy higher than all its direct eight neighbors. The LM must be the only LM in the ROI, and the matched truth particle must be a simulated photon responsible for at least \(20\%\) of the reconstructed crystal energy. 
Precisely, for the LM we require the ratio \[r_{\mathrm{LM}}^{\gamma_{1}}=\frac{E_{\mathrm{dep}}^{\gamma_{1},\mathrm{ crystal}_{\mathrm{LM}}}}{E_{\mathrm{rec}}^{\mathrm{crystal}_{\mathrm{LM}}}}\geq 0.2. \tag{1}\] Here, \(E_{\mathrm{dep}}^{\gamma_{1},\mathrm{crystal}_{\mathrm{LM}}}\) denotes the truth energy deposition of photon 1 in the LM, and \(E_{\mathrm{rec}}^{\mathrm{crystal}_{\mathrm{LM}}}\) the reconstructed crystal energy in the LM. The crystals contained in the ROI are considered for the clustering by the GravNet algorithm and significantly extend the \(5\times 5\) area considered by the baseline algorithm (Sec. 5). Furthermore, the ROI represents the area of the local coordinate system later used as an input feature, with the LM as the origin. Figure 2 (top) shows a typical isolated photon event with high beam background. ### Overlapping Photons Two different photons that deposit some of their energy in identical crystals are referred to as overlapping photons. To study overlapping photons, we use the simulated events with two overlapping photons only. We select events that have exactly two LMs that must fulfill the following selection criteria: 1. each LM must have reconstructed crystal energies greater than \(10\) MeV, 2. \(r_{\mathrm{LM}_{1}}^{\gamma_{1}}\geq 0.2\) and \(r_{\mathrm{LM}_{1}}^{\gamma_{1}}>r_{\mathrm{LM}_{1}}^{\gamma_{2}}\), 3. \(r_{\mathrm{LM}_{2}}^{\gamma_{2}}\geq 0.2\) and \(r_{\mathrm{LM}_{2}}^{\gamma_{2}}>r_{\mathrm{LM}_{2}}^{\gamma_{1}}\). We refer to criteria a)-c) as LM _separation criteria_ since they ensure that the particles form two separate LMs. Additionally, events must meet the _overlap criterion_: d) * each of the two photons must deposit at least 10 MeV energy in shared crystals within a \(5\times 5\) area around its respective LM. Figure 1 shows the fraction of events accepted by these selections as a function of the simulated opening angle. In the scope of this paper, we additionally require LMs to exclusively originate from simulated particles without additional background clusters in the ROI, that is: * the two LMs must be the only ones in the ROI and they must be truth-matched to the simulated photons. Finally, we remove rare cases of small truth energy depositions and large backgrounds, by requiring: * the crystal with the largest truth energy deposition of a photon must be within a \(5\times 5\) area around its corresponding LM. We then create a ROI centered at the midpoint between the two LMs. The midpoint between the two LMs used to center the ROI is calculated using the shortest distance between two LMs projected onto the surface of a sphere. The crystal closest to the midpoint is defined as the ROI center. The LM positions for this are determined by interpreting the global LM coordinates of their associated crystals as latitude and longitude. Figure 2 (bottom) shows an overlapping photon event with high beam background. The truth energy deposition per photon and the reconstructed crystal energy \(E_{\rm rec}^{\rm crystal}\), crystal time \(t_{\rm rec}^{\rm crystal}\), crystal PSD information (see Sec. 3), and the LM positions within the ROI are recorded for each event. ## 5 Reconstruction Algorithms Interactions of energetic photons in the Belle II ECL typically deposit energy in up to \(5\times 5\) crystals. The task of the clustering reconstruction algorithms is to select a set of crystals that contains all the energy of the incoming photon, but no energy from other particles or from beam background. 
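Returning briefly to the overlapping-photon ROI construction above, the centring step on the sphere can be sketched as follows. This is an illustrative helper of ours, not the basf2 implementation, and it uses polar/azimuthal angles directly rather than the latitude/longitude convention mentioned in the text.

```python
# Illustrative helper (not basf2 code) for the ROI centring step of Sec. 4.2:
# the midpoint of the great-circle arc between two local maxima.
import numpy as np

def unit_vector(theta, phi):
    """Unit vector from polar angle theta and azimuth phi (radians)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def roi_center_angles(theta1, phi1, theta2, phi2):
    """Polar and azimuthal angle of the point halfway along the shorter arc."""
    m = unit_vector(theta1, phi1) + unit_vector(theta2, phi2)
    m /= np.linalg.norm(m)
    return np.arccos(m[2]), np.arctan2(m[1], m[0])

# The crystal whose (theta, phi) lies closest to roi_center_angles(...) would
# then be taken as the ROI center.
```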
Low beam background results in approximately \(17\,\%\) of all crystals in the ECL having significant reconstructed energy \(E_{\rm rec}^{\rm crystal}\geq 1\,\)MeV; for high beam backgrounds this number is expected to increase to about \(40\,\%\). This increase in the number of crystals to consider in the clustering, adds to the complexity of the reconstruction. ### Baseline The baseline algorithm is designed to provide maximum efficiency for cluster finding, contain all crystals from the incoming particle for particle identification, and select an optimal subset of the cluster crystals that provides the best energy resolution [17]. The clustering is performed in three steps. In the first step, all crystals are grouped into a connected set of crystals, so-called connected regions starting with LMs, as defined previously, that have a reconstructed crystal energy of more than \(10\,\)MeV. In an iterative procedure all direct neighbors with energies above \(0.5\,\)MeV are added to this LM, and the process is continued if any neighbor itself has energy above \(10\,\)MeV. Overlapping connected regions are merged into one. In the second step, each connected region is split into clusters, one per LM. If there is only one LM in the connected region, up to 21 crystals in a \(5\times 5\) area excluding corners centered at the local maximum are grouped into a cluster. If there is more than one LM in a connected region, the energy in each crystal of the connected region is assigned a distance-dependent weight and can be shared between different clusters. The distance is calculated from the cluster centroid to each crystal center, where the cluster centroid is updated Figure 1: Fraction of selected overlapping photon events in the barrel as a function of generated opening angle. The orange markers correspond to events fulfilling LM _separation criteria_ a)-c); the blue markers correspond to events that additionally pass the _overlap criterion_ d) (see text for details). iteratively using logarithmic energy weights. This process is repeated until all cluster centroids in a connected region are stable. In a post-processing step, an optimal subset, including the \(n\) highest energetic crystals of all non-zero weighted crystals that minimize the energy resolution, is used to predict the cluster energy \(E_{\mathrm{rec}}^{\mathrm{basf2}}\). \(n\) depends on the measured noise in the event, and on the energy of the LM itself. The noise level is estimated by counting the number of crystals in the event containing more than \(5\,\mathrm{MeV}\) that have times \(t\) more than \(125\,\mathrm{ns}\) from the trigger time. \(E_{\mathrm{rec}}^{\mathrm{basf2}}\) is also corrected already within \(\mathtt{basf2}\) for possible bias using simulated events. This bias includes leakage (energy not deposited in the crystals included in the energy sum) and beam backgrounds (energy included in the sum that is not from the signal photon). \(E_{\mathrm{rec}}^{\mathrm{basf2}}\) is the estimator for the generated energy of a particle. The \(\mathtt{basf2}\) clustering algorithm also returns a cluster energy \(E_{\mathrm{rec,\ raw}}^{\mathrm{basf2}}\) that is not corrected for energy bias. \(E_{\mathrm{rec,\ raw}}^{\mathrm{basf2}}\) is the estimator for the deposited energy of a particle. ### Graph Neural Network Architecture GNN architectures have shown that they are powerful network types to deal with both irregular geometries and varying input sizes. 
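As a minimal illustration of how such a variable-size input can be represented (the names and feature ordering below are ours, not the Belle II software), each ROI can be packed into a PyTorch Geometric data object with one node per crystal; GravNet-style layers build their own neighbourhoods dynamically, so no explicit edge list is needed.

```python
# Minimal sketch: a variable number of ROI crystals as a graph with node
# features only; ROIs of different sizes are later batched together.
import torch
from torch_geometric.data import Data

def roi_to_graph(crystal_features, target_weights):
    """crystal_features: (n_crystals, n_features) per-crystal inputs,
    target_weights: (n_crystals, n_classes) truth energy fractions per class."""
    x = torch.as_tensor(crystal_features, dtype=torch.float32)
    y = torch.as_tensor(target_weights, dtype=torch.float32)
    return Data(x=x, y=y)
```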
In this work, all crystals of an ROI with an energy deposition above \(1\) MeV are interpreted as nodes in a graph, Figure 2: Typical event displays showing (left) simulated truth assignments, (center) input variables time, and (right) PSD hadronic energy ratio for (top) isolated and (bottom) overlapping photons for two example events with high beam background. The marker centers indicate the crystal centers, the marker area is proportional to the truth energy deposition for the left plots; it is proportional to the reconstructed crystal energy for the other plots. which leads to variable input sizes and is thus a good use case for GNNs. The implementation of this GNN is done in PyTorch Geometric[26]. The input features consist of crystal properties and crystal measurements: The global coordinates \(\theta\) and \(\phi\) of each crystal, the local coordinates \(\theta^{\prime}\) and \(\phi^{\prime}\) with respect to the ROI center, the crystal mass, and the LM(s) (in one-hot encoding) represent crystal properties. The crystal energy \(E_{\rm rec}^{\rm crystal}\) in GeV, the time \(t_{\rm rec}^{\rm crystal}\) in \(\mu\)s, and the PSD fit type, PSD \(\chi^{2}\), and PSD hadronic energy ratio are crystal measurements used as input features. Pre-processing scales the input uniformly before further processing with the GNN: All features are min-max normalized to an interval of \([0,1]\) with the exception of \(t_{\rm rec}^{\rm crystal}\) and the PSD hadronic energy ratio which are both normalized to the interval \([-1,1]\). The global coordinates and the crystal masses are normalized based on the range of coordinates and masses of all crystals in the detector instead of only the ones in the ROI. Additionally, we average each input feature over all nodes in the ROI and concatenate the averaged input features as additional inputs, thus enabling a global exchange of information. As displayed in Fig. 3, our model is built out of four so-called GravNet[16] blocks of which the concatenated outputs are passed through three dense output layers with a final softmax activation function. Each GravNet block features three dense layers at the beginning of the block, the initial two of which with ELU[27] activation functions and the last one with a tanh activation function. The dense layers feed into a GravNet layer and the overall GravNet block is concluded by a batch normalization layer[28]. The GravNet layer is responsible for the graph building and subsequent message passing between the nodes of the graph. It first translates the input features into two learned representation spaces: one representing spatial information \(S\) while the other, denoted \(F_{\rm LR}\), contains the transformed features used for message passing. In the second step, each node is connected to its \(k\) nearest neighbors defined by the Euclidean distances in \(S\), thus creating an undirected, connected graph. For each node, the input features of connected nodes are then weighted by a Gaussian potential depending on the distance in \(S\) and aggregated by summation. The resulting features are concatenated with the initial features and, after batch normalization, passed to the next GravNet block and to the dense output layers. The implementation in the present work follows the concept of fuzzy clustering which refers to the partial assignment of individual crystals to several clustering classes. 
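A schematic PyTorch Geometric sketch of such a GravNet-based fuzzy clustering model is given below. It follows the description above and the hyperparameters of Table 1 only loosely: the widths of the three output layers, the exact concatenation scheme, and the assumption that the ROI-averaged features are already appended to the node features are our own simplifications, and this is not the authors' implementation.

```python
# Schematic sketch (not the authors' code) of a GravNet-based fuzzy clustering
# model, loosely following Sec. 5.2 and Table 1 (isolated-photon settings).
import torch
import torch.nn as nn
from torch_geometric.nn import GravNetConv

class GravNetBlock(nn.Module):
    def __init__(self, in_dim, hidden=22, f_lr=16, space_dim=6, k=14):
        super().__init__()
        # Three dense layers: ELU, ELU, tanh.
        self.dense = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # GravNet layer: learns a space_dim-dimensional embedding, connects each
        # node to its k nearest neighbours there, and aggregates f_lr transformed
        # features weighted by a Gaussian potential of the learned distance.
        self.gravnet = GravNetConv(hidden, f_lr, space_dimensions=space_dim,
                                   propagate_dimensions=f_lr, k=k)
        self.norm = nn.BatchNorm1d(hidden + f_lr, momentum=0.01)
        self.out_dim = hidden + f_lr

    def forward(self, x, batch):
        h = self.dense(x)
        h = torch.cat([h, self.gravnet(h, batch=batch)], dim=-1)
        return self.norm(h)

class FuzzyClusteringGNN(nn.Module):
    def __init__(self, n_node_features, n_classes=2, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        dim = n_node_features          # assumed to already include the ROI means
        for _ in range(n_blocks):
            block = GravNetBlock(dim)
            self.blocks.append(block)
            dim = block.out_dim
        # Three dense output layers ending in a softmax over the clustering
        # classes (photon(s) and background); the widths here are illustrative.
        self.head = nn.Sequential(
            nn.Linear(sum(b.out_dim for b in self.blocks), 64), nn.ELU(),
            nn.Linear(64, 32), nn.ELU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x, batch):
        outs = []
        for block in self.blocks:
            x = block(x, batch)
            outs.append(x)
        return torch.softmax(self.head(torch.cat(outs, dim=-1)), dim=-1)

# Training sketch: MSE between predicted and true per-crystal weights, e.g.
# loss = torch.nn.functional.mse_loss(model(data.x, data.batch), data.y)
```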
Consequently, the GNN predicts weights \(w_{i}^{\rm X}\) that indicate the proportion of the reconstructed energy \(E_{\rm rec}^{\rm crystal_{i}}\) in a crystal \(i\) that belongs to a clustering class X. For models used with isolated photons, \({\rm X\in\{\gamma_{1},background\}}\), for models with overlapping photons \({\rm X\in\{\gamma_{1},\gamma_{2},background\}}\). As a loss function, we then use the Mean Squared Error (MSE) between the true and predicted weights summed over all classes and crystals. The training is stopped when there has been no improvement for 15 epochs in the optimization objective. For low beam background models that objective is the MSE loss on the validation data set, whereas the high beam background models employ the more high-level \({\rm FWHM_{dep}}\) (Sec. 6) on the validation data set. Hyperparameters have been chosen through a hyperparameter optimization using Optuna[29]. The optimization is done with respect to the \({\rm FWHM_{dep}}\) (Sec. 6) instead of the loss function. We optimize the two models trained for high beam backgrounds and use the respective hyperparameters also for the corresponding low beam background models. The final hyperparameters for both the isolated photon models and the overlapping photon models are shown in Table 1. The learning rate, the number of dense layers in each GravNet block, and all dimensions of the output layers have been manually optimized by testing a reasonable range of values. The learning rate is set to \(5\!\times\!10^{-3}\) and is subject to a decay Figure 3: An illustration of the GNN architecture. Each pair of gray, square brackets represents one GravNet block consisting of dense layers, a GravNet layer and a batch norm layer. factor of 0.25 after every five epochs of stagnating validation loss. We did not observe significant over-training and as a consequence, we do not use dropout layers or other regularization methods but rely on the large data set. The GNN algorithm yields the weights \(w_{i}^{\rm X}\) per crystal for all crystals in the ROI with an energy deposition above 1 MeV. In order to reconstruct the total cluster energy \(E_{\rm rec}^{\rm GNN}\) associated with a certain particle, we then sum over all specific weights multiplied by the reconstructed energies per crystal, \(E_{\rm rec}^{\rm GNN}=\sum w_{i}E_{\rm rec}^{\rm crystal}\). Figure 4 shows how the GNN and the basf2 algorithms behave in clustering a typical case of overlapping photons. ## 6 Metrics For performance evaluation, the reconstructed energy of a particle is compared with two different truth targets: the total deposited truth energy \(E_{\rm dep}\) per photon in the ROI, and the generated truth energy \(E_{\rm gen}\) per photon. This results in two variants of relative reconstruction errors. 
The reconstruction error on the deposited energy \[\eta_{\rm dep}^{\rm basf2} =\frac{E_{\rm rec,\,raw}^{\rm basf2}-E_{\rm dep}}{E_{\rm dep}}\quad \text{and}\] \[\eta_{\rm dep}^{\rm GNN} =\frac{E_{\rm rec}^{\rm GNN}-E_{\rm dep}}{E_{\rm dep}} \tag{2}\] \begin{table} \begin{tabular}{l r r} \hline \hline Hyperparameter & Isolated Photon Models & Overlapping Photon Models \\ \hline Width of the Dense Layers, \(\rm F_{IN},\rm F_{OUT}\) & 22 & 24 \\ Feature Space Dimension \(\rm F_{LR}\) & 16 & 16 \\ Spatial Information Space Dimension S & 6 & 6 \\ Connected Nearest Neighbors \(k\) & 14 & 16 \\ Batch Norm Momentum & 0.01 & 0.4 \\ Stacked GravNet Blocks & 4 & 4 \\ Batch Size & 1024 & 512 \\ \hline \hline \end{tabular} \end{table} Table 1: Optimized hyperparameters of the isolated photon, and overlapping photon GravNet models. The hyperparameters are the result of an optimization of the \(\rm FWHM_{dep}\) on the respective high background validation data set. Figure 4: Comparison of (a) truth energy fractions, (b) reconstructed energy fraction by the GNN, and (c) reconstructed energy fraction by basf2 for an example event with high beam background. Colors indicate the fractions belonging to each photon or background. The marker centers indicate the crystal centers, the marker area is proportional to the truth or reconstructed (GNN, basf2) energy deposition respectively. gives access to the energy resolution ignoring leakage and other detector effects. It is a direct evaluation of the clustering performance of an algorithm. On the other hand, the reconstruction error on the generated energy \[\eta_{\rm gen}^{\rm basf2} =\frac{E_{\rm rec}^{\rm basf2}-E_{\rm gen}}{E_{\rm gen}}\quad\text{ and}\] \[\eta_{\rm gen}^{\rm GNN} =\frac{E_{\rm rec}^{\rm GNN}-E_{\rm gen}}{E_{\rm gen}} \tag{3}\] factors in all detector and physics effects and quantifies how much of the improvements to the underlying clustering carry over to downstream physics object reconstruction. Evaluating both algorithms on a large number of simulated photons yields peaking distributions in both reconstruction errors \(\eta_{\rm dep}\) and \(\eta_{\rm gen}\). Both distributions are potentially biased because of energy leakage and the presence of beam backgrounds (compare Sec. 5.1). We perform a binned fit using a double-sided crystal ball [30, 31] function as probability density function (pdf) with the kafe2[32] framework. We shift all reconstruction error distributions independently by a multiplicative factor to correct the difference between the fitted peak position and zero (Fig. 5). Since \(\eta_{\rm dep}\) and \(\eta_{\rm gen}\) are asymmetric distributions, we repeat this procedure until the difference between the fitted peak position and zero is less than 0.002. This procedure usually converges within two or three iterations. We then determine the full width half maximum (FWHM) of the final shifted distributions in \(\eta_{\rm dep}\) and \(\eta_{\rm gen}\), yielding \(\rm FWHM_{\rm dep}\) and \(\rm FWHM_{\rm gen}\) respectively. The uncertainty on the FWHM is calculated from the uncertainties of the fit parameters. We define the resolution as \(\sigma_{\rm gen}=\rm FWHM_{\rm gen}/2.355\). In addition to the FWHM, we determine the tails of the reconstruction error distribution. 
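Before turning to the tails, the iterative bias correction and FWHM extraction described above can be summarised in the following sketch. Here `fit_peaked_pdf` is a hypothetical stand-in for the binned kafe2 fit of a double-sided crystal-ball pdf, and the multiplicative shift shown is one possible reading of the correction applied to the reconstructed energy.

```python
# Hedged sketch of the iterative bias correction and FWHM extraction.
import numpy as np

def fwhm(pdf, lo, hi, n_grid=20001):
    """Full width at half maximum of a fitted, unimodal pdf on [lo, hi]."""
    x = np.linspace(lo, hi, n_grid)
    y = pdf(x)
    above = x[y >= 0.5 * y.max()]
    return above.max() - above.min()

def corrected_fwhm(eta, fit_peaked_pdf, tol=0.002, max_iter=10):
    """eta: relative reconstruction errors, e.g. (E_rec - E_gen) / E_gen."""
    eta = np.asarray(eta, dtype=float)
    for _ in range(max_iter):
        pdf, peak = fit_peaked_pdf(eta)           # fitted pdf and its peak position
        if abs(peak) < tol:                       # peak close enough to zero
            break
        eta = (1.0 + eta) / (1.0 + peak) - 1.0    # multiplicative shift of E_rec
    return fwhm(pdf, eta.min(), eta.max())
```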
The left and right tails \(T_{\rm L,R}\) are calculated as the 95th percentile when ranking the unbinned events on the respective side of the peak position, as given by the fit parameters, in ascending order (\(T_{\rm R}\)) and descending order (\(T_{\rm L}\)) respectively. Propagating the uncertainty on the peak position as given by the fit yields the uncertainty on \(T_{\rm L,R}\). ## 7 Results The first sections of the results focus on detailed studies of isolated clusters. Section 7.4 then introduces overlapping clusters and their effects on the performance. Figure 6 shows examples for the distributions of both reconstruction errors \(\eta_{\rm dep}\) and \(\eta_{\rm gen}\), as well as the fit results for events with low beam background. Figure 7 shows the equivalent distributions for events with high beam background. The \(\eta_{\rm gen}\) distributions are wider because the reconstruction error includes the effects of leakage which result in missing energy with respect to the generated photon energy. This only affects the left-side tails. In the following subsections, we are comparing the performance of the GNN and the basf2 reconstruction algorithms for different detector regions for low and high beam backgrounds by evaluating the energy resolution \(\rm FWHM_{\rm gen}/2.355\) and the tail parameters. We then analyze the GNN in more detail by testing the input variable dependencies and the robustness against differences in beam background levels between training and evaluation. Figure 5: Example distribution of the relative reconstruction error \(\eta_{\rm gen}\) of the generated energy and illustration of the bias correction, the FWHM, and the tail ranges. ### Energy resolution and energy tails The three detector regions barrel, forward endcap, and backward endcap described in Sec. 3 differ in crystal geometry, levels of background, and amount of passive material before and in between crystals. The following section studies the variations in the energy reconstruction performance that arise as a direct result of these differences. In order to access the energy dependence of the resolution and tail parameters we simulate Figure 6: Distribution of relative reconstruction errors (a) \(\eta_{\text{dep}}\) and (b) \(\eta_{\text{gen}}\) for isolated clusters for low beam backgrounds. The first bin contains all underflow entries; the last bin contains all overflow entries. test data sets of photons at various fixed energies. The FWHM for each simulated data set is then determined according to Sec. 6. Plotting the resolutions \(\mathrm{FWHM_{gen}}/2.355\) over the generated photon energies \(E_{\mathrm{gen}}\) reveals a characteristic relationship that is parameterized by the function \(a/E_{\mathrm{gen}}\oplus b/\sqrt{E_{\mathrm{gen}}}\oplus c\), where \(\oplus\) indicates addition in quadrature. Both the GNN as well as the baseline algorithm perform differently in regards to the energy resolution in all three detector parts, as can be seen in Fig. 7(a) for low beam background and as Fig. 7(b) for high beam background. Table 2 reports the parameters of the fitted parameterization of the resolution. We attribute these difference to the large spread of both shape and size of crystals in the endcaps, the asymmetric distribution of beam Figure 8: Resolution \(\mathrm{FWHM_{gen}}/2.355\) of the GNN and \(\mathtt{basf2}\) as function of the simulated photon energy \(E_{\mathrm{gen}}\) for both endcaps and the barrel for (a) low and (b) high beam background. 
Each color is associated with one detector region; the light color indicates \(\mathtt{basf2}\), the dark color the GNN. The bands indicate the uncertainty of the fits, see text for details. The fit parameters are summarized in Tab. 2. backgrounds, and the different amount of passive material in front of the different detector regions. Overall, the energy resolution of the GNN algorithm is significantly better than the baseline algorithm for all photon energies. The GNN energy resolution is better by more than 30 % for photon energies below 500 MeV which is the energy range of more than 90 % of all photons in \(B\)-meson decay chains. The higher the beam background, the larger the difference between the GNN and the baseline algorithm. The difference between the two algorithms decreases with energy because the relative contribution of beam backgrounds to the photon energy resolution decreases. The shape of the left-side tails is dominated by passive material and is hence expected to be different in the different detector regions. The left-side tails are almost independent of beam backgrounds as can be seen by comparing Fig. 8(a) for low beam background and Fig. 8(c) for high beam background. The GNN and the baseline algorithm both show the smallest tail length for the barrel region with decreasing tail lengths for increasing energy. The left-side tails are largest in the backward endcap due to the highest ratio of passive to active material as expected. The right-side tails are mostly originating from beam background being wrongly added to photon clusters. The GNN produces shorter tails than the baseline algorithm for all energies and for both low and high beam backgrounds, with the performance difference increasing for lower energies and higher beam backgrounds. ### Beam Background Robustness The beam background levels are changing continuously during detector operations. Ideally, reconstruction algorithms at Belle II are insensitive to such changes. The basf2 baseline algorithm achieves robustness against increasing beam backgrounds by adaptively including fewer crystals in the energy sum calculation. Since our GNN is trained with a large number of events with event-by-event fluctuations of beam backgrounds, we expect robustness against varying beam backgrounds if the GNN generalizes well enough. We test the robustness of our GNN by comparing GNNs trained and tested on the same backgrounds, against GNNs trained and tested on the two different beam backgrounds (Fig. 10, parameterization in Tab. 3). While the GNNs trained on the same beam backgrounds achieve a better resolution than the ones trained on different beam backgrounds, the GNN still outperforms the baseline algorithm even for networks trained on the different beam backgrounds. This demonstrates an excellent generalization with respect to different levels of beam backgrounds. ### Input Parameter Dependency As discussed in Sec. 3, multiple input features are available for the GNN, while the basf2 algorithm uses crystal position and energy only. This section presents a study of the influence of the input features on the FWHM. For that, the architecture described in Sec. 5.2 is trained on isolated photon events with low or high beam backgrounds using different combinations of input features. The 200 000 events from the respective validation data set, as described in Sec. 4, are used for inference. 
The data set covers an energy range of \(0.1<E_{\text{gen}}<1.5\,\text{GeV}\) and the full detector range \(17^{\circ}<\theta_{\text{gen}}<150^{\circ}\) and \(0^{\circ}<\phi_{\text{gen}}<360^{\circ}\), each of which in uniform distribution. The FWHM of \(E_{\text{gen}}\) and \(E_{\text{dep}}\) is calculated as described in Sec. 6. All GNNs use the global crystal coordinates, the LM position, and the crystal mass as input features. A comparison of the FWHM for the different additional input features is shown in Tab. 4. The results show, that even for the minimal set of input variables, the GNN's FWHM is smaller than basf2's for both the deposited and the generated energy in both beam background scenarios. Adding local coordinates leads to small improvements and using time information brings significant improvement in the GNN performance. PSD information has almost no effect on the FWHM. Since the main purpose of the PSD information is to differentiate electromagnetic and hadronic interactions per crystal, this is expected. In anticipation of future extensions of the GNN to hadronic interactions as well, the PSD information is kept throughout this work. ### Overlapping Photons When discussing overlapping photon events, it is important to note that the FWHM of the photon energy distribution not only depends on its own properties but also on the properties of the \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline & & Low Beam Background & \multicolumn{3}{c}{High Beam Background} & \multicolumn{1}{c}{} \\ Region & Algorithm & a (\(\times 10^{-2}\)) & b (\(\times 10^{-2}\)) & c (\(\times 10^{-2}\)) & a (\(\times 10^{-2}\)) & b (\(\times 10^{-2}\)) & c (\(\times 10^{-2}\)) \\ \hline Barrel & GNN & 0.23\(\pm\)0.02 & 1.32\(\pm\)0.02 & 1.00\(\pm\)0.01 & 1.25\(\pm\)0.02 & 2.39\(\pm\)0.02 & 0.75\(\pm\)0.03 \\ & basf2 & 0.35\(\pm\)0.02 & 1.54\(\pm\)0.02 & 0.91\(\pm\)0.02 & 1.88\(\pm\)0.02 & 3.11\(\pm\)0.03 & 0.31\(\pm\)0.10 \\ Forward & GNN & 0.00\(+\)0.14 & 1.11\(\pm\)0.01 & 1.49\(\pm\)0.00 & 0.61\(\pm\)0.03 & 2.23\(\pm\)0.02 & 1.20\(\pm\)0.02 \\ & basf2 & 0.00\(+\)0.37 & 1.51\(\pm\)0.01 & 1.38\(\pm\)0.01 & 1.11\(\pm\)0.03 & 2.92\(\pm\)0.03 & 0.84\(\pm\)0.03 \\ Backward & GNN & 0.50\(\pm\)0.02 & 1.69\(\pm\)0.03 & 1.59\(\pm\)0.02 & 2.18\(\pm\)0.03 & 2.51\(\pm\)0.05 & 2.28\(\pm\)0.02 \\ & basf2 & 0.78\(\pm\)0.03 & 2.12\(\pm\)0.04 & 1.50\(\pm\)0.03 & 2.72\(\pm\)0.05 & 4.64\(\pm\)0.05 & 0.91\(\pm\)0.08 \\ \hline \hline \end{tabular} \end{table} Table 2: Fit results (\(a/E_{\rm gen}\oplus b/\sqrt{E_{\rm gen}}\oplus c\)) of the fits shown in Fig. 8. Figure 9: 95 % left- and right tail lengths \(T_{L}\) and \(T_{R}\) of \(\eta_{\rm gen}\) for the GNN and basf2 as function of the simulated photon energy \(E_{\rm gen}\) for both endcaps and the barrel for (a and b) low and (c and d) high beam background. Each color is associated with one detector region. second photon present. To account for that, the evaluation is split in energy bins of [0.1, 0.2], [0.2, 0.5], [0.5, 1.0], and [1.0, 1.5] GeV for both photons respectively. We then show the FWHM of the first photon for different simulated energies of the second photon for low beam backgrounds (see Tab. 5) and high beam backgrounds (Tab. 6). The GNN provides a better FWHM for all combinations, but the improvement is most significant if the photon is low energetic. For low beam backgrounds, the GNN improves the FWHM by up to 20 % for photons with simulated energies between \(0.1<E_{\rm gen}<0.2\) GeV. 
For high beam backgrounds, the GNN improves the FWHM by more than 35 % for photons with simulated energies between \(0.1<E_{\rm gen}<0.2\) GeV. The result shows that the significant performance improvement observed for isolated photons can also be achieved for the more complicated overlapping photon signatures. ## 8 Conclusion and Outlook In this work, we have presented a complete study of a GNN-based clustering algorithm for the Belle II electromagnetic calorimeter. We have been using a realistic full detector simulation and simulated beam background for low and high luminosity conditions of Belle II. The GNN algorithm has been compared to the currently used basf2 baseline algorithm. We find a significantly improved \begin{table} \begin{tabular}{l l l l l l l l} \hline \multirow{2}{*}{Region} & \multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{Low Beam Background} & \multicolumn{3}{c}{High Beam Background} \\ & & a (\(\times 10^{-2}\)) & b (\(\times 10^{-2}\)) & c (\(\times 10^{-2}\)) & a (\(\times 10^{-2}\)) & b (\(\times 10^{-2}\)) & c (\(\times 10^{-2}\)) \\ \hline \multirow{2}{*}{Barrel} & LBB GNN & 0.23\(\pm\)0.02 & 1.32\(\pm\)0.02 & 1.00\(\pm\)0.01 & 1.59\(\pm\)0.02 & 2.27\(\pm\)0.03 & 1.32\(\pm\)0.02 \\ & HBB GNN & 0.28\(\pm\)0.02 & 1.58\(\pm\)0.01 & 0.85\(\pm\)0.02 & 1.25\(\pm\)0.02 & 2.39\(\pm\)0.02 & 0.75\(\pm\)0.03 \\ \hline \end{tabular} \end{table} Table 3: Fit results (\(a/E_{\rm gen}\oplus b/\sqrt{E_{\rm gen}}\oplus c\)) of the fits shown in Fig. 10 for the GNN trained with low beam background (LBB GNN) and high beam background (HBB GNN). The values for the LBB GNN inferred on low beam background test samples, and for the HBB GNN inferred on high beam background are identical to the ones reported in Tab. 2. Figure 10: Resolution FWHM\({}_{\rm gen}\)/2.355 as a function of the simulated photon energy \(E_{\rm gen}\) for the GNNs trained with low beam background (LBB GNN) and high beam background (HBB GNN) in the barrel. The color is associated with the evaluation on either beam background; the dark color indicates the model trained with the beam background identical to the evaluation, and the light color indicates the model trained with the respective other beam background. The bands indicate the uncertainty of the fits, see text for details. The fit parameters are summarized in Tab. 3. resolution of more than 30 % for high beam backgrounds, but also improved performance in reducing the right-side tails of the reconstruction errors that are caused by beam background. Such significant improvements in photon reconstruction performance directly improve the physics reach of Belle II for almost all final states with photons, but also analyses that use missing energy information [17]. We also trained different GNNs to separate energy depositions of overlapping photon clusters. The improvement of the energy resolution is up to 30 % for the low energy photon in asymmetric photon pairs. Any improvement in overlapping photon reconstruction has direct implications for the reconstruction of boosted \(\pi^{0}\) mesons or axion-like particles with couplings to photons [33]. While the basf2 algorithm strictly reconstructs one cluster for each LM, the GNN algorithm only uses the LMs to center the ROI. 
The GNN algorithm can therefore in principle also be \begin{table} \begin{tabular}{l l l l l} \hline & \multicolumn{2}{c}{Low Beam Background} & \multicolumn{2}{c}{High Beam Background} \\ & \(\mathrm{FWHM_{dep}}\) & \(\mathrm{FWHM_{gen}}\) & \(\mathrm{FWHM_{dep}}\) & \(\mathrm{FWHM_{gen}}\) \\ Input Features & \(\times 10^{-2}\) & \(\times 10^{-2}\) & \(\times 10^{-2}\) & \(\times 10^{-2}\) \\ \hline Energy & 2.17\(\pm\)0.01 & 5.25\(\pm\)0.02 & 5.05\(\pm\)0.03 & 8.08\(\pm\)0.04 \\ Energy, local coordinates & 2.11\(\pm\)0.02 & 5.19\(\pm\)0.02 & 5.04\(\pm\)0.04 & 8.04\(\pm\)0.04 \\ Energy, local coordinates, PSD & 2.19\(\pm\)0.01 & 5.20\(\pm\)0.02 & 5.06\(\pm\)0.03 & 8.07\(\pm\)0.04 \\ Energy, local coordinates, time & 1.72\(\pm\)0.01 & 4.85\(\pm\)0.02 & 4.52\(\pm\)0.03 & 7.63\(\pm\)0.03 \\ Energy, local coordinates, time, PSD & 1.72\(\pm\)0.01 & 4.85\(\pm\)0.02 & 4.51\(\pm\)0.03 & 7.62\(\pm\)0.03 \\ \hline basf2 & 2.32\(\pm\)0.02 & 5.13\(\pm\)0.02 & 6.73\(\pm\)0.05 & 8.97\(\pm\)0.07 \\ \hline \end{tabular} \end{table} Table 4: Comparison of the performances of GNN models with different additional input features, and the performance of the basf2 baseline. Shown are the \(\mathrm{FWHM_{dep}}\) and \(\mathrm{FWHM_{gen}}\) (see Sec. 6), for 200 000 events in the validation data sets (see Sec. 4) with low and high beam background. The data sets cover an energy range of \(0.1<E_{\mathrm{gen}}<1.5\) GeV and the full detector range \(17^{\circ}<\theta_{\mathrm{gen}}<150^{\circ}\) and \(0^{\circ}<\phi_{\mathrm{gen}}<360^{\circ}\), each of which in uniform distribution. The uncertainties of the FWHM in each column are correlated since they use the same simulated events. The input features are described in detail in Sec. 3. Figure 11: Comparison of truth energy fractions (a), the reconstructed energy fraction by the GNN (b), and the reconstructed energy fraction by basf2 (c) for one example event with only one local maximum. Colors indicate the fractions belonging to each photon or background. The marker centers indicate the crystal centers, the marker area is proportional to the reconstructed energy in each crystal. used to reconstruct overlapping photons that only produced one LM (Fig. 11). The extension of the GNN algorithm to such overlapping signatures as well as to charged particles and neutral hadrons will be the focus of follow-up work. This is the first application of a GNN-based clustering algorithm at Belle II for a realistic detector geometry and realistic and high beam backgrounds. This is also the first time that an algorithm has shown to improve the performance of the photon reconstruction by explicitly including timing information on clustering level at Belle II. Data Availability Statement.The datasets generated during and analysed during the current study are property of the Belle II collaboration and not publicly available. Acknowledgments.The authors would like to thank the Belle II collaboration for useful discussions and suggestions on how to improve this work. The authors would like to thank Jan Kieseler for helpful discussions. The training of the models was performed on the TOpAS GPU cluster at the Steinbuch Centre for Computing (SCC) at KIT. This work is funded by Helmholtz (HGF) Young Investigators Group VH-NG-1303 and BMBF ErUM-Pro 05H23VKKBA. I. Haide is supported by the Landesgraduiertenforderung Baden-Wurttemberg. ## Compliance with ethical standards ### Conflict of interest The authors declare that they have no conflict of interest. 
\begin{table} \begin{tabular}{l l r r r r} \hline \hline \(E_{\gamma}^{(1)}\) (GeV) & \(E_{\gamma}^{(2)}\) (GeV) & \multicolumn{1}{c}{[0.1, 0.2]} & \multicolumn{1}{c}{[0.2, 0.5]} & \multicolumn{1}{c}{[0.5, 1.0]} & \multicolumn{1}{c}{[1.0, 1.5]} \\ \(\downarrow\) & \multicolumn{1}{c}{\(\rightarrow\)} & & & & \\ \hline [0.1, 0.2] & GNN & 24.77\(\pm\)0.83 & 24.10\(\pm\)0.76 & 24.02\(\pm\)0.60 & 24.72\(\pm\)0.63 \\ & basf2 & 33.12\(\pm\)1.08 & 32.82\(\pm\)1.38 & 31.28\(\pm\)0.79 & 32.42\(\pm\)0.88 \\ & **Improvement** & **33.7 \%** & **36.2 \%** & **30.3 \%** & **31.1 \%** \\ \hline [0.2, 0.5] & GNN & 13.16\(\pm\)0.30 & 13.96\(\pm\)0.20 & 14.17\(\pm\)0.16 & 14.17\(\pm\)0.16 \\ & basf2 & 17.73\(\pm\)0.47 & 17.56\(\pm\)0.31 & 17.62\(\pm\)0.24 & 16.88\(\pm\)0.23 \\ & **Improvement** & **34.8 \%** & **25.8 \%** & **24.3 \%** & **19.1 \%** \\ \hline [0.5, 1.0] & GNN & 8.07\(\pm\)0.12 & 8.56\(\pm\)0.08 & 8.71\(\pm\)0.06 & 8.84\(\pm\)0.06 \\ & basf2 & 10.53\(\pm\)0.19 & 10.77\(\pm\)0.12 & 10.75\(\pm\)0.09 & 10.73\(\pm\)0.08 \\ & **Improvement** & **30.6 \%** & **25.8 \%** & **23.4 \%** & **21.4 \%** \\ \hline [1.0, 1.5] & GNN & 6.05\(\pm\)0.08 & 6.33\(\pm\)0.05 & 6.42\(\pm\)0.04 & 6.54\(\pm\)0.04 \\ & basf2 & 7.52\(\pm\)0.12 & 7.56\(\pm\)0.07 & 7.60\(\pm\)0.06 & 7.68\(\pm\)0.06 \\ & **Improvement** & **24.2 \%** & **19.6 \%** & **18.3 \%** & **17.4 \%** \\ \hline \hline \end{tabular} \end{table} Table 6: \(\mathrm{FWHM_{gen}}\times 10^{2}\) of one photon with photon energy \(E_{\gamma}^{(1)}\) in dependence of the second photon energy \(E_{\gamma}^{(2)}\) for high beam background for the full detector (barrel and endcaps combined). The uncertainties of the FWHM for the two algorithms are correlated for each energy interval since they use the same simulated events. The improvement to the basf2 baseline is stated in percent for each energy interval. \begin{table} \begin{tabular}{l l r r r r} \hline \hline \(E_{\gamma}^{(1)}\) (GeV) & \multicolumn{1}{c}{\(E_{\gamma}^{(2)}\) (GeV)} & \multicolumn{1}{c}{[0.1, 0.2]} & \multicolumn{1}{c}{[0.2, 0.5]} & \multicolumn{1}{c}{[0.5, 1.0]} & \multicolumn{1}{c}{[1.0, 1.5]} \\ \(\downarrow\) & \multicolumn{1}{c}{\(\rightarrow\)} & & & & \\ \hline [0.1, 0.2] & GNN & 11.04\(\pm\)0.79 & 11.98\(\pm\)0.40 & 11.94\(\pm\)0.31 & 13.25\(\pm\)0.34 \\ & basf2 & 12.72\(\pm\)0.80 & 13.93\(\pm\)0.55 & 14.32\(\pm\)0.41 & 15.16\(\pm\)0.48 \\ & **Improvement** & **15.2 \%** & **16.3 \%** & **20.0 \%** & **14.4 \%** \\ \hline [0.2, 0.5] & GNN & 7.38\(\pm\)0.18 & 7.57\(\pm\)0.12 & 8.23\(\pm\)0.09 & 8.38\(\pm\)0.12 \\ & basf2 & 8.48\(\pm\)0.22 & 8.30\(\pm\)0.14 & 8.84\(\pm\)0.12 & 8.96\(\pm\)0.12 \\ & **Improvement** & **14.9 \%** & **9.7 \%** & **7.5 \%** & **7.0 \%** \\ \hline [0.5, 1.0] & GNN & 5.22\(\pm\)0.08 & 5.43\(\pm\)0.05 & 5.69\(\pm\)0.04 & 5.89\(\pm\)0.04 \\ & basf2 & 5.58\(\pm\)0.10 & 5.71\(\pm\)0.06 & 5.85\(\pm\)0.05 & 6.17\(\pm\)0.05 \\ & **Improvement** & **6.7 \%** & **5.1 \%** & **2.8 \%** & **4.9 \%** \\ \hline [1.0, 1.5] & GNN & 4.24\(\pm\)0.06 & 4.43\(\pm\)0.04 & 4.67\(\pm\)0.03 & 4.77\(\pm\)0.03 \\ & basf2 & 4.55\(\pm\)0.07 & 4.58\(\pm\)0.04 & 4.74\(\pm\)0.04 & 4.85\(\pm\)0.04 \\ & **Improvement** & **7.3 \%** & **3.4 \%** & **1.4 \%** & **1.8 \%** \\ \hline \hline \end{tabular} \end{table} Table 5: \(\mathrm{FWHM_{gen}}\times 10^{2}\) of one photon with photon energy \(E_{\gamma}^{(1)}\) in dependence of the second photon energy \(E_{\gamma}^{(2)}\) for low beam background for the full detector (barrel and endcaps combined). 
The uncertainties of the FWHM for the two algorithms are correlated for each energy interval since they use the same simulated events. The improvement over the basf2 baseline algorithm is stated in percent for each energy interval.
2310.10832
WTP$\,$10aaauow: Discovery of a new FU Ori outburst towards the RCW$\,$49 star-forming region in NEOWISE data
Large-amplitude accretion outbursts in young stars are expected to play a central role in proto-stellar assembly. Outburst identification historically has taken place using optical techniques, but recent, systematic infrared searches are enabling their discovery in heavily dust-obscured regions of the Galactic plane. Here, we present the discovery of WTP$\,$10aaauow, a large-amplitude mid-infrared (MIR) outburst identified in a systematic search of NEOWISE data using new image subtraction techniques. The source is located towards the RCW$\,$49 star-forming region, and estimated to be at a distance of $\approx 4\,$kpc via Gaia parallax measurement. Concurrent with the MIR brightening, the source underwent a $\gtrsim5\,$mag outburst in the optical and near-infrared (NIR) bands, reaching a peak luminosity of $\approx260\,$L$_\odot$ in 2014-2015, followed by a slow decline over the next 7 years. Analysis of the pre- and post-outburst spectral energy distributions reveal a pre-outburst stellar photosphere at a temperature of $3600-4000\,$K, surrounded by a likely two-component dust structure similar to a flat-spectrum or Class I type YSO. We present optical and NIR spectroscopy that show a GK-type spectrum in the optical bands exhibiting complex line profiles in strong absorption features, and evidence for a wind reaching a terminal velocity of $\approx 400\,$km$\,$s$^{-1}$. The NIR bands are characterized by a cooler M-type spectrum exhibiting a forest of atomic and molecular features. All together, the spectra demonstrate that WTP$\,$10aaauow is an FU Ori type outburst. Ongoing systematic infrared searches will continue to reveal the extent of this population in the Galactic disk.
Vinh Tran, Kishalay De, Lynne Hillenbrand
2023-10-16T21:14:19Z
http://arxiv.org/abs/2310.10832v7
WTP 10aaauow: Discovery of a new FU Ori outburst towards the RCW 49 star-forming region in NEOWISE data ###### Abstract Large-amplitude accretion outbursts in young stars are expected to play a central role in proto-stellar assembly. Yet most outburst identifications and detailed studies have resulted from searches in optical time domain surveys, which are not sensitive to events located in heavily dust-obscured regions of the Galactic plane. Here, we present the discovery of WTP 10aaauow, a large-amplitude mid-infrared (MIR) outburst identified in a systematic search of NEOWISE data using image subtraction techniques designed to recover transients in dense Galactic stellar fields. The new outburst is located towards the RCW 49 star-forming region, and estimated to be at a distance of \(\approx 4\) kpc. Concurrent with the MIR brightening, we show that the source underwent a \(\gtrsim 5\) mag outburst in the optical and near-infrared (NIR) bands around 2014-2015 reaching a peak luminosity of \(\approx 260\) L\({}_{\odot}\), followed by a slow decline over the next 7 years. Analysis of the pre- and post-outburst spectral energy distributions reveal a pre-outburst stellar photosphere at a temperature of \(3600-4400\) K, surrounded by a likely two-component dust structure similar to that of a flat spectrum or Class I type YSO. We present optical and NIR follow-up spectroscopy of the source, that show a GK-type spectrum in the optical bands exhibiting complex line profiles in strong absorption features, and evidence for a wind reaching a terminal velocity of \(\approx 400\) km s\({}^{-1}\). The NIR bands are characterized by a cooler M-type spectrum exhibiting a forest of atomic and molecular features. All together, the spectra demonstrate that WTP 10aaauow is a new FU Ori outburst. Ongoing systematic infrared searches will continue to reveal the extent of this population in the Galactic disk. keywords: methods: observational - stars: formation - ISM: H II regions ## 1 Introduction Star formation proceeds through the gravitational collapse of molecular cloud cores to form a central protostar, initially through spherical collapse, followed by accretion through a proto-planetary disk (Armitage, 2019). Variability is a fundamental property of accretion onto young stellar objects (YSOs). While minor effects such as time-varying dust extinction as well as rotation can cause low amplitude variability at short timescales, the largest amplitude outbursts are related to substantial changes in the accretion rate from the disk to the protostar, playing a crucial role in mass assembly in the protostellar phase (Fischer et al., 2022). The largest amplitude FU Ori outbursts, exhibit luminosity increases \(\gtrsim 100\times\) lasting for decades to centuries, and are a well-recognized mechanism for stellar growth. The known population of FU Ori stars suggests that they may be extremely rare, with around 14 confirmed FU Ori outbursts recorded and 13 sources identified via their spectra in the last seven decades (Connelley and Reipurth, 2018; Contreras Pena et al., 2019; Zakri et al., 2022), while the smaller amplitude EX Lup-type outbursts (Herbig, 1977) are more frequent. Observationally, FU Ori outbursts are characterized by brightness increases \(\gtrsim 5\) magnitudes and exhibit spectra characteristic of GK-type giants/supergiants in the optical bands (Herbig, 1977), transitioning to cooler M-type giants/supergiants in the near-infrared (NIR) bands (Connelley and Reipurth, 2018). 
The wavelength-dependent spectral types can be explained with accretion disk models including a temperature and velocity gradient with radius (Kenyon et al., 1988; Welty et al., 1992). Although optical time domain surveys have substantially advanced the field towards systematic discovery and characterization, our understanding of the distribution of the amplitudes, durations, and duty cycles of the largest outbursts remains highly uncertain. Infrared searches for FU Ori outbursts provide a unique opportunity to discover and develop a census of these events due to their sensitivity towards highly dust-obscured star-forming regions. Ground-based searches in the NIR have already begun to reveal a new window into YSO variability - both individual large-amplitude outbursts (e.g. Hillenbrand et al., 2021) amenable for detailed infrared characterization, as well as populations of smaller amplitude variables associated with young stellar variability (e.g. Contreras Pena et al., 2017; Guo et al., 2022). With its all-sky coverage, mid-infrared (MIR) sensitivity and long/uniformly sampled temporal baseline, the Wide-field Infrared Survey Explorer (WISE) provides a unique dataset to search for large-amplitude, long-duration FU Ori outbursts, towards developing a complete census of these events in the Galactic plane. To this end, our team has begun a systematic search for large-amplitude MIR outbursts utilizing image subtraction techniques on WISE images. In this paper, we present a new addition to the FU Ori class discovered in our search, named WTP 10aaauow. Section 2 describes our discovery of the source and subsequent follow-up observations. We accumulate archival information on the source in Section 2.4, and analyze its properties in Section 3. We conclude with a discussion of the source properties in Section 4. ## 2 Discovery and Observations ### Outburst Discovery in NEOWISE Data The Wide-field Infrared Survey Explorer (WISE) satellite (Wright et al., 2010), re-initiated as the NEOWISE mission (Mainzer et al., 2014), has been carrying out an all-sky MIR survey in the \(W1\) (3.4 \(\mu\)m) and \(W2\) (4.6 \(\mu\)m) bands since 2014. As part of an ongoing program to identify large-amplitude mid-IR transients in NEOWISE data, we carried out a systematic search for transients in time-resolved coadded images created as part of the unWISE project (Lang, 2014; Meisner et al., 2018). The details of this search will be presented in De et al. (in prep). In brief, we used a customized code (De et al., 2020) based on the ZOGY algorithm (Zackay et al., 2016) to perform image subtraction on the NEOWISE images using the co-added images of the WISE mission (obtained in 2010-2011) as reference images1. Our pipeline produces a database of all transients down to a statistical significance of \(\approx 10\)\(\sigma\). Follow-up for the sources was coordinated using the fritz astronomical data platform (van der Walt et al., 2019). Footnote 1: For all transients identified in the WISE Transient Pipeline (WTP, De in prep.), we adopt the naming scheme WTP XXXYYYYYY, where XX indicates the year of first detection and YYYYYY is a six letter alphabetical code. We identified the transient source WTP 10aaauow as a large-amplitude outburst at \(J2000\) coordinates \(\alpha=\)10:26:15.98, \(\delta=-\)58:20:37.80, near the H II region RCW 49 and Carina (Figure 1). 
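As a small aside, not part of the discovery pipeline: the sexagesimal J2000 coordinates quoted above convert to decimal degrees as in the sketch below, which is convenient for cross-matching against external catalogues (astropy is assumed).

```python
# Small sketch, not part of the discovery pipeline: converting the quoted J2000
# sexagesimal coordinates to decimal degrees for catalogue cross-matching.
from astropy.coordinates import SkyCoord

pos = SkyCoord("10h26m15.98s", "-58d20m37.80s", frame="icrs")
print(pos.ra.deg, pos.dec.deg)  # ~156.5666, -58.3438
```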
The transient is coincident with a source in archival MIWISE images, exhibiting a shallow brightening phase of \(\approx 1\) mag between 2010 and 2014 (Figure 2), followed by a sudden brightening of \(\gtrsim 3\) mags between 2014 and 2015. The corresponding brightening rates are \(\approx 0.017\) mag month\({}^{-1}\) in the early shallow brightening phase and \(\approx 0.25\) mag month\({}^{-1}\) in the subsequent rapid brightening phase. The transient has since exhibited a slow fading, declining in flux at a rate of \(\approx 0.01\) mag month\({}^{-1}\) between 2015 and 2022. Although the outburst was first detected in the pipeline during the early brightening phase, the transient is formally saturated in the unWISE images near the peak of the outburst. Therefore, we derive the MIR light curve of the source by performing the recommended bias corrections for saturated sources in the NEOWISE single epoch catalogs as explained in the WISE Data Release document2. We use standard aperture photometry for epochs prior to the start of NEOWISE (when the source was faint). The complete WISE light curve of the source after correction of the instrumental effects and conversion to the AB magnitude system is shown in Figure 2. Footnote 2: [https://wise2.ipac.caltech.edu/docs/release/neowise/expsup/sec2_lciva.html](https://wise2.ipac.caltech.edu/docs/release/neowise/expsup/sec2_lciva.html) ### Follow-up Imaging We obtained one epoch of multi-color NIR imaging of WTP 10aaauow on August 4, 2022, using the Spartan camera (Loh et al., 2012) on the 4.1 m Southern Astrophysical Research (SOAR) Telescope as part of program SOAR 2022B-005 (PI: De). We obtained a series of 10 dithered exposures of the field with exposure times of 30 s, 20 s, and 10 s in the \(J\), \(H\), and \(Ks\) filters respectively. The data were detrended followed by astrometric and photometric calibration against nearby 2MASS sources using a modified version of the imaging pipeline described in De et al. (2020). Performing aperture photometry at the position of the transient, we measure NIR fluxes (all in the Vega magnitude system) of \(J=11.50\pm 0.01\) mag, \(H=10.20\pm 0.01\) mag and \(Ks=9.24\pm 0.01\) mag. We obtained follow-up optical imaging of the transient using the Sinistro imager on the 1 m Las Cumbres Observatory network (Brown et al., 2013; Program CON2022B-011). The data were acquired on UT 2023-01-21 in the \(gri\) filters for a total exposure time of 900 s each. The data were astrometrically calibrated and stacked using a custom reduction pipeline (De et al., 2020), and photometric calibration was performed against photometric calibrators in the field from the Legacy Imaging Survey (Dey et al., 2019) in the AB magnitude system. We measure \(g=19.35\pm 0.01\) mag, \(r=16.65\pm 0.01\) mag and \(i=15.19\pm 0.02\) mag for the transient at this epoch. ### Follow-up Spectroscopy We obtained follow-up NIR spectroscopy of WTP 10aaauow to confirm the classification of the transient. We obtained one epoch of near-IR (\(\approx 1.0-2.5\)\(\mu\)m) spectroscopy using the Folded Port Infrared Echellette (FIRE; Simcoe et al., 2013) on the 6.5 m Magellan Baade Telescope on UT 2022-05-25 and 2022-06-15. Observations in the first epoch were obtained in the low resolution prism mode (\(R\approx 100\)) using the 0.6 ''slit, over a set of dithered exposures amounting to a total exposure time of 300 s. 
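The photometric measurements above were made with the authors' custom pipeline (De et al. 2020); purely as an illustration of the aperture-photometry step, a minimal sketch using photutils is given below. The image array, pixel position, aperture radii and zero point are placeholders, not values from the actual reduction.

```python
# Illustrative aperture-photometry step using photutils; the actual measurements
# were made with the authors' custom pipeline (De et al. 2020). The image array,
# pixel position, aperture radii and zero point below are placeholders.
import numpy as np
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

def aperture_mag(data, x, y, zeropoint, r=6.0, r_in=10.0, r_out=15.0):
    """Background-subtracted aperture magnitude at pixel position (x, y)."""
    src = CircularAperture([(x, y)], r=r)
    ann = CircularAnnulus([(x, y)], r_in=r_in, r_out=r_out)
    bkg_per_pix = aperture_photometry(data, ann)["aperture_sum"][0] / ann.area
    flux = aperture_photometry(data, src)["aperture_sum"][0] - bkg_per_pix * src.area
    return zeropoint - 2.5 * np.log10(flux)  # zero point tied to the field calibrators

# e.g. ks_mag = aperture_mag(ks_stack, x_pix, y_pix, zeropoint=24.8)
```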
The second epoch of spectroscopy was carried out in the echelle mode (\(R\approx 5000\)) to obtain higher resolution confirmation of the features detected in the first epoch. The echelle mode observations were carried out using the same 0.6 ''slit over a set of dithered exposures amounting a total exposure time of \(\approx 1200\) s. The data were reduced and extracted using the firehose pipeline3 for both the prism and echelle mode data, followed by flux calibration and telluric correction using the xtellcor package (Vacca et al., 2003). Footnote 3: [https://wikis.mit.edu/confluence/display/FIRE/FIRE=Data+Reduction](https://wikis.mit.edu/confluence/display/FIRE/FIRE=Data+Reduction) We obtained a red optical spectrum of WTP 10aaauow on UT 2023-01-11, using the Goodman High Throughput Spectrograph (Clemens et al., 2004) on SOAR as part of program SOAR 2022B-005 (PI: De). The data were acquired with the 600 lines/mm grating (630 - 893 nm; \(R=1400\)), for a total exposure time of 1200 s. Data reduction and flux calibration with a spectro-photometric standard were performed using the pypeit package (Prochaska et al., 2020). ### Archival Information #### 2.4.1 Survey Imaging The sky region near WTP 10aaauow has been previously observed in both the optical and NIR bands. In the optical, no source is detected in the pre-outburst Digitized Sky Survey images from Palomar Observatory in the 1980s. However, there is a source cataloged in the IPHaS/VPHaS+ (Drew et al., 2014) Galactic plane survey, with faint-state photometry reported from 2014 of \(r=21.99\pm 0.21\); \(i=19.89\pm 0.07\). There is also an \(H\alpha\) measurement of \(20.70\pm 0.17\) which suggests the source was a strong emission-line object in its quiescent state. In post-outburst imaging, a red source with extended nebulosity (Figure 1) is clearly detected in the DECam Galactic plane survey (Schlafly et al., 2018). Compared to archival DECam photometry taken at MJD \(\approx\) 57590 (close to the peak of the outburst in Figure 2), the source had faded by \(\approx\) 1.2 mag by the epoch of our optical imaging. In the infrared, a point source is clearly detected during the quiescent phase in archival images from 2MASS (Skrutskie et al., 2006), near the completeness limits of the survey, and from AllWISE (Wright et al., 2010). Comparing to archival 2MASS magnitudes, the recent NIR magnitude measurements (which are likely \(\approx 1-2\) mags fainter than peak outburst flux; Figure 2) suggest the source has exhibited \(\gtrsim 5\) mag brightening in the NIR. The source was also observed by Spitzer/IRAC in four channels as part of the GLIMPSE survey (Benjamin et al., 2003), and we obtained the photometry from the online Spitzer source catalog (Spitzer Science Center (SSC) & Figure 1: Color composite of the 24\({}^{\prime}\)\(\times\) 24\({}^{\prime}\) region surrounding WTP 10aaauow constructed from Spitzer Space Telescope (Werner et al., 2004) images taken with IRAC Channel 4 (8 \(\mu\)m) as red, Channel 3 (5.8 \(\mu\)m) as green, and Channel 2 (4.5 \(\mu\)m) as blue. The nebulosity in the northwest part belongs to the H II region RCW 49, and the blue circles mark potential YSOs in the area (see text). The zoom-in yellow inset on the bottom left shows an optical image of the 1\({}^{\prime}\)\(\times\) 1\({}^{\prime}\) region surrounding WTP 10aaauow, created using \(z\) band (926 nm; red), \(r\) band (642 nm; green), and \(g\) band (473 nm; blue) images taken from the DECam Galactic Plane Survey (Schlafly et al., 2018). 
There is nebulosity clearly visible surrounding the bright transient (highlighted with the green cross-hair). The two insets on the top left depict the pre-outburst (left) and post-outburst (right) \(K\) band images of the region from 2MASS (in 2000) and the SPARTAN Camera (in 2022), depicting the NIR brightening of the transient (circled in blue). Infrared Science Archive (IRSA) 2021). The MIR measurements are shown in Figure 2. #### 2.4.2 Catalog Data WTP 10aaauow has appeared in a number of YSO catalogs, namely Vioque et al. (2020) and Kuhn et al. (2021). In terms of source properties, Gaia DR3 (Gaia Collaboration et al., 2022) reports a parallax \(\pi=0.25\pm 0.03\) mas (corresponding to distance \(d=4000\pm 400\) pc); proper motions \(\mu_{RA}=-5.08\pm 0.04\) mas yr\({}^{-1}\) and \(\mu_{Dec.}=3.10\pm 0.03\) mas yr\({}^{-1}\); and radial velocity of \(-62.57\pm 5.14\) km s\({}^{-1}\). Elsewhere in literature catalogs, Fouesneau et al. (2022) derived extinction values, finding \(A_{0}=4.6\) mag as the "best" value and a \(A_{0}=3.6\) mag as the median value in their posterior. Due to the nature of their methods and the variable nature of the source, these estimates could be unreliable, but \(\approx 5\) mag of visual extinction appears reasonable for the source. However, we do not consider any of the stellar parameters derived for WTP 10aaauow as viable. Although no objects at the position of WTP 10aaauow appear in the Gaia Alerts service (Hodgkin et al., 2021), the source does have a well-sampled optical wavelength light curve in Gaia DR3 (Gaia Collaboration et al., 2022). As illustrated in Figure 2, Gaia captured the source brightening beginning in late 2014 to its peak in early 2015, and the subsequent plateau phase. Coincident with the brightening, the source became bluer in its optical color by several tenths of a magnitude, as it does in its MIR color. ## 3 Analysis In this section, we present evidence that WTP 10aaauow is an outbursting young stellar object of the FU Orionis type. This conclusion is based on i) its proximity to a large star-forming region with hundreds of YSOs within a few tens of arcmin, ii) its spectral energy distribution indicating a large dust envelope, iii) large-amplitude outburst and high peak luminosity, and iv) its spectroscopic signatures which are consistent with both stellar youth and a characteristic wavelength-dependent absorption spectrum for the FU Ori class. ### Identification of Nearby Candidate Young Stars WTP 10aaauow is located around 35\({}^{\prime}\) southeast of the H II region RCW 49 (or NGC 3247), home to several hundred young stellar object candidates as well as a hub of occurring star-formation (Whitney et al., 2004). This region is situated around 4.2 kpc from the Solar system (Churchwell et al., 2004), which is very similar to the distance of WTP 10aaauow directly reported in Gaia. Using this distance, we calculate the physical separation between our object and the region to be \(\approx 40\) pc, placing WTP 10aaauow near the edge of the H II region. There are no nearby candidate YSOs within 2\({}^{\prime}\) of WTP 10aaauow according to SIMBAD. We apply a color cut to the Spitzer/GLIMPSE (Churchwell et al., 2009) and 2MASS (Cutri et al., 2003; Skrutskie et al., 2006) source catalogs in the sky region to identify nearby candidate YSOs. We follow the process detailed in Gutermuth et al. (2009) to eliminate IR-excess contaminants, such as star-forming galaxies, broad-line active galactic nuclei, and shock emission regions. 
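The distance and projected-separation figures quoted above follow from simple geometry; a short sketch is given below, assuming plain parallax inversion (reasonable at the ~12% fractional parallax uncertainty of this source).

```python
# Sketch of the two geometric conversions used above: distance by simple parallax
# inversion and the projected separation from RCW 49 for the quoted angular offset.
import numpy as np

plx_mas, plx_err = 0.25, 0.03              # Gaia DR3 parallax (mas)
d_pc = 1.0e3 / plx_mas                     # 4000 pc
d_pc_err = d_pc * plx_err / plx_mas        # ~480 pc; the text quotes +/-400 pc

theta_rad = np.deg2rad(35.0 / 60.0)        # 35 arcmin offset from RCW 49
sep_pc = 4200.0 * theta_rad                # ~40 pc at the 4.2 kpc distance of RCW 49
print(d_pc, d_pc_err, sep_pc)
```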
Afterward, we use a simpler color cut of \([3.6]-[4.5]>0.15\) and \(J-K>0.8\) to find IR-excess sources that are likely YSOs, resulting in 176 potential YSOs within 20\({}^{\prime}\) from the source. The candidate YSO sources, including our source, are shown in Figure 1, with blue circles marking their positions. The closest of the potential YSOs to our source is SSTSL2 J102621.26-582104.8 followed by SSTSL2 J102600.85-582049.8 with separations of 50'' and 119'', respectively. Figure 2: _Top:_ MIR light curve of WTP 10aaauow obtained from WISE photometry (supplemented by Spitzer data), together with optical photometry from Gaia DR3 (shifted by -8 mag, as indicated in the legend). The upper left inset shows a triplet of the science, template and difference image of the location obtained from our difference imaging pipeline, clearly showing the MIR brightening of WTP 10aaauow. _Bottom:_ The MIR \(W1-W2\) and the visual G-BP color evolution of WTP 10aaauow. While these are the only sources within 2' (\(\approx\) 2.3 pc at the distance of WTP 10aaauow) of the source, there are an additional 12 sources within 4'. Additionally, out of the 176 potential YSOs, there are 16 sources with distances measured by Gaia DR3 (Gaia Collaboration et al., 2022). 11 of these sources are situated between 3.2 and 4.7 kpc, making these sources physically close to WTP 10aaauow. Based on the analysis above, we speculate that the majority of the 176 potential YSOs in the region are possibly proximal to the source. We interpret the existence of the large number of nearby YSO candidates as evidence in favor of the classification of WTP 10aaauow as a YSO in the same star-forming region. ### Spectral Energy Distribution (SED) Figure 3 shows the \(0.5-22\) micron photometric measurements of WTP 10aaauow pre- and post-outburst. The pre-outburst data comprises AllWISE \(W1\), \(W2\), \(W3\), \(W4\) bands (\(\lambda\) = 3.4, 4.6, 12, 22 \(\mu\)m); Spitzer/IRAC channel 1, 2, 3, 4 (\(\lambda\) = 3.6, 4.5, 5.8, 8 \(\mu\)m); and 2MASS \(J\), \(H\), \(K\) bands (\(\lambda\) = 1.2, 1.6, 2.2 \(\mu\)m). However, it must be noted that these measurements are taken at different times, in 2010, 2004, and 2000, respectively, so the compiled SED is not derived from strictly contemporaneous measurements. Additionally, Gaia data are not used in the SED since, as shown in Figure 2, the Gaia observations were carried out very close to the outburst peak without a baseline hinting at a pre-outburst state. It is therefore likely that the earliest measurement from Gaia captured WTP 10aaauow while it was already undergoing the outburst. In contrast, the post-outburst data from NEOWISE \(W1\), \(W2\) band (\(\lambda\) = 3.4, 4.6 \(\mu\)m); SOAR/Spartan \(J\), \(H\), \(K\) band (\(\lambda\) = 1.2, 1.6, 2.2 \(\mu\)m); and Sinistro \(g\), \(r\), \(i\) band (\(\lambda\) = 472, 642, 784 nm) are all measured within half a year, indicating that the post-outburst SED is a viable estimate at 7 years after the outburst. We use photospheric models from NextGen2 (Hauschildt et al., 1999) over a range of visual extinctions to fit the \(J+H\) band region of the pre-outburst SED. Additionally, we use two black-body SEDs to fit the excess in the redder near-IR and the mid-IR region as a simple illustration of the temperatures involved. The excess is most likely due to the presence of an extended dust envelope surrounding an accretion disk that we do not model in detail.
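A minimal sketch of the colour selection described above is given below, assuming a merged GLIMPSE + 2MASS catalogue with one row per source; the column names are illustrative and the Gutermuth et al. (2009) contaminant filtering is not shown.

```python
# Sketch of the simple IR-excess colour cuts; column names and the input file are
# illustrative placeholders, and the contaminant filtering step is omitted.
import pandas as pd

cat = pd.read_csv("glimpse_2mass_merged.csv")  # hypothetical merged catalogue
is_yso = ((cat["mag_3p6"] - cat["mag_4p5"]) > 0.15) & ((cat["Jmag"] - cat["Kmag"]) > 0.8)
print(is_yso.sum(), "IR-excess YSO candidates in the search area")
```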
We choose the temperatures for the black-body SEDs based on the peak near WISE \(W1\) and \(W2\) band, as well as the excess near WISE \(W4\) band, resulting in T\({}_{dust}\)\(\approx\) 700 K and 120 K (noting that the temperature of the colder component is poorly constrained due to the limited wavelength coverage). We find that the pre-outburst data can be explained by photospheric models as cold as 1000 K and hot as 8000 K over a wide range of extinction, dust temperature, and black-body SEDs. However, the total integrated Galactic extinction along this line-of-sight has been measured to be \(A_{V}\)\(\approx\) 7.3 mag (Schlafly & Finkbeiner, 2011). Although somewhat uncertain, this constrains the effective temperature of the stellar model to be in the range \(3600-4000\) K. The corresponding SED models are shown in Figure 3. Empirically, the spectral index over the standard 2 to 24 \(\mu\)m range (Greene et al., 1994) of the progenitor source is \(\alpha=+0.14\), classifying WTP 10aaauow as a Class I or flat spectrum type YSO. ### Outburst Luminosity Integrating the SED from Figure 3, we find the post-outburst luminosity of WTP 10aaauow, measured at \(\approx\) 7 years post-outburst to be \(\approx\) 110 L\({}_{\odot}\) for a distance of 4 kpc. However, the luminosity at outburst peak is expected to have been higher. From Figure 2, we notice that in the \(W1\) and \(W2\) bands, the current magnitudes dim by \(\approx\) 0.9 mag compared to the corresponding peak magnitudes. Additionally, the \(W1-W2\) color evolution also stays relatively constant; thus, we can assume that the effective temperature and bolometric correction of the source stays constant. Taking into account the fading since outburst peak, we find the peak luminosity of the outburst to be around 260 L\({}_{\odot}\), which is in accordance with other FU Ori sources (Connelley & Reipurth, 2018; Zakri et al., 2022). ### Spectroscopic Features The optical spectrum of WTP 10aaauow is shown in Figure 4 in comparison to the recent FU Ori outburst source Gaia 17bpi; (Hillenbrand et al., 2018). The absorption aspects of the two spectra are similar in many respects. We clearly detect in WTP 10aaauow features of Ba II, Ca I and Fe I, together with a prominent Li I line at 6707 A with an equivalent width of \(W_{A}\) = 0.48 A - a strong signature of youth. These signatures demonstrate that the optical spectrum of WTP 10aaauow is characteristic of GK-type stars as seen in other FU Ori outbursts. However, the wind indicators in WTP 10aaauow are weaker than those near the outburst epoch of Gaia 17bpi, likely because our spectrum was obtained 7 years after the onset of the WTP 10aaauow outburst and FU Ori winds can evolve quickly (Carvalho et al., 2023). Nevertheless, prominent lines of H\(\alpha\), K I, O I, and the Ca II triplet, all show asymmetric profiles and terminal velocities - as measured from the bluest edge of the absorption lines - of several hundred km s\({}^{-1}\), indicating the presence of a strong wind. In particular, the H\(\alpha\) and 8542 A Ca II lines (highlighted in Figure 4) have terminal velocities approaching \(\approx-400\) km s\({}^{-1}\). We measure the equivalent Figure 3: The post- (blue) and pre-outburst (orange) spectral energy distribution of WTP 10aaauow. Reddened photosphere models with effective temperatures of \(3600-4400\) K and visual extinction of \(5-8\) magnitudes are shown in orange-timed lines. 
This temperature range is chosen around the estimated integrated Galactic extinction along this line of sight. The pre-outburst models are normalized to the pre-outburst \(J\), \(H\), \(K\) bands. In addition, two black-body spectral energy distributions with effective temperatures of 120 K and 700 K are added to the photosphere models shown as blue-timed lines to explain the IR excess at longer wavelengths. width of the H\(\alpha\) absorption to be \(W_{\lambda}=-2.1\) A, and \(W_{\lambda}=4.0\) A for the 8542 A Ca II line. In addition, the upper Paschen lines of H i are seen, which may arise at least partially in a wind. The NIR spectrum of WTP 10aaauow in Figure 5 shows classic signatures of FU Ori outbursts - strong absorption \({}^{12}\)CO(2,0) bandhead in the 2.3 \(\mu\)m region seen in late K, M stars, as well as the H\({}_{2}\)O bands from \(1.3-1.5\)\(\mu\)m and \(1.75-2.05\)\(\mu\)m (Connelley & Reipurth, 2018). We detect absorption in Na I and Ca I lines in \(K\)-band, and in the Mg I and Si I lines in \(H\)-band. At shorter wavelengths, VO bands (centered at 1.06 and 1.19 \(\mu\)m), as well as TiO bands (centered at 1.11 and 1.26 \(\mu\)m) are visible characteristic of late M-type spectra. We measure the equivalent widths (following the prescription of Messineo et al., 2021) of the \({}^{12}\)CO(2,0) line to be \(W_{\lambda}=25.85\) A, \(W_{\lambda}=1.65\) A for the 2.206 and 2.209 \(\mu\)m Na I lines and \(W_{\lambda}=2.17\) A for the 2.263 and 2.266 \(\mu\)m Ca I lines. Comparing these values to the spectroscopic diagnostics presented in Figure 10 of Connelley & Reipurth (2018) squarely places WTP 10aaauow in the region occupied by FU Ori and FU Ori-like objects. In terms of wind signatures, the He I 10830 A transition is detected, indicating hot gas, and exhibits a blue-shifted center of absorption (by \(\approx-100\) km s\({}^{-1}\)) similar to the optical Ca II triplet lines. While the lower H i Paschen transitions are detected in the NIR region, the Br series is not clearly detected. ## 4 Discussion and Summary We have presented the discovery and follow-up observations of a large-amplitude outburst WTP 10aaauow in the southern Galactic disk, located towards the RCW 49 star forming region. The discovery was enabled with the implementation of an image subtraction pipeline to search for MIR transients directly in NEOWISE data. Combined with archival observations from Spitzer, 2MASS, Gaia, and the DECam legacy surveys, we show that the source exhibited a \(\gtrsim 5\) mag outburst between 2010 and 2015, followed by a slow decline. The MIR light curve shows a slow rise between 2010 and 2014 (at a rate of \(\approx 0.017\) mag month\({}^{-1}\) followed by a steep rise of \(\approx 3\) mag (at a rate of \(\approx 0.25\) mag month\({}^{-1}\)) between 2014 and 2015. The observed rise is similar to that seen in other FU Ori outbursts (Hillenbrand et al., 2018, 2021). The final part of the brightening was also detected with the Gaia mission, exhibiting a \(\gtrsim 5\) mag rise in the optical bands, while simultaneously becoming bluer in both optical and MIR colors. Archival optical images from the DECam Galactic plane survey (taken after the outburst peak) clearly show extended nebulosity around the source. The source has subsequently exhibited a plateau/slow decline of \(\approx 0.01\) mag month\({}^{-1}\). 
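The equivalent widths and terminal velocities quoted in this section follow the standard definitions; a short sketch of both is given below, with an illustrative blue-edge wavelength for the Ca II 8542 Å line rather than the measured value.

```python
# Sketch of the standard definitions behind the quoted equivalent widths and
# terminal velocities; the blue-edge wavelength below is illustrative only.
import numpy as np

C_KMS = 2.99792458e5

def equivalent_width(wave_aa, flux_norm):
    """W_lambda = integral of (1 - F/F_continuum) dlambda (positive for absorption)."""
    return np.trapz(1.0 - flux_norm, wave_aa)

def edge_velocity(wave_edge_aa, wave_rest_aa):
    """Velocity of the bluest edge of an absorption profile relative to rest."""
    return C_KMS * (wave_edge_aa - wave_rest_aa) / wave_rest_aa

print(edge_velocity(8530.6, 8542.09))  # ~ -400 km/s for an edge ~11.5 A bluewards of Ca II 8542
```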
The spectroscopic characteristics of the source are strikingly similar to that of other FU Ori outbursts - GK-type spectrum in the optical bands, including asymmetric, complex line profiles (with terminal velocities reaching 400 km s\({}^{-1}\)) likely arising from a wind and strong Li I absorption. In the NIR, we observed a cooler M-type spectrum exhibiting a rich forest of atomic and molecular lines, suggestive of a mixed temperature spectrum. Quantitatively, we show the equivalent widths of the CO, Na I and Ca I lines in \(K\)-band are consistent with those seen in FU Ori stars (Connelley & Reipurth, 2018). By examining the environment of the source to identify likely nearby young stars with infrared excesses, we find a large number (\(\approx 176\)) of nearby sources at distances consistent with that of the RCW 49 region as well as the outburst itself (as measured by Gaia; \(\approx 4\) kpc). The estimated distance places WTP 10aaauow in the outskirts of the RCW 49 region. Fitting stellar templates to pre-outburst photometry, we invoke the presence of a two-component dust disk (at temperatures of \(\approx 120\) and \(\approx 700\) K) to explain the observed infrared excess. The corresponding photospheric temperatures and foreground extinctions range between 3600 - 4400 K, and \(5-8\) mag respectively, as constrained by the total line-of-sight extinction. In Figure 4: SOAR red optical spectrum of WTP 10aaauow (orange) compared to that of the FU Ori outburst Gaia 17bpi (blue; Hillenbrand et al., 2018). Regions with significant telluric contamination are marked with \(\oplus\), while prominent lines are marked and indicated with dashed lines. The two insets show zoom-ins of H\(\alpha\) and Ca II 8542 Å line regions, and are plotted in rest-frame velocity units. The green dash-dotted lines indicate rest frame (zero velocity) positions, the blue lines show linear fits of the surrounding continuum to highlight the absorption, and the dashed grey lines mark the terminal edges of the spectral lines. The Paschen 15 line is shown in the Ca II 8542 Å inset, explaining the profile’s extra red-shifted absorption region. Figure 5: From top to bottom, the combined \(YJ\), \(H\), and \(K\) band spectrum of WTP 10aaauow (orange), compared to that of Gaia17 bpi (blue). Prominent lines are marked with their names and dashed lines; additional metal lines are approximately annotated with smaller markers. TiO, VO, and H\({}_{2}\)O bands are marked with light, medium and dark gray regions, respectively. quiescence, the spectral slope in the infrared (\(\alpha\approx 0.14\)) is consistent with that of a Class I or flat spectrum YSO. Together with the inferred distance to the region, the SED indicates a post-outburst luminosity at peak of \(\approx 260\) L\({}_{\odot}\), consistent with that seen for other FU Ori stars. WTP 10aaauow adds to a rare class of FU Ori outbursts with well-sampled light curves in both the optical and MIR bands, the other examples being Gaia 17bpi (Hillenbrand et al., 2018) and Gaia 18dvy (Szegedi-Elek et al., 2020). The exquisitely sampled Gaia light curve near the peak of the outburst clearly shows the outburst amplitude is larger in the optical bands compared to the MIR, consistent with expectations (Hillenbrand & Rodriguez, 2022). 
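As a check of the quoted peak luminosity, the scaling from the present-day value back to the outburst peak is a one-line correction, assuming an unchanged bolometric correction as argued in Section 3.3.

```python
# Sketch: scaling the present-day luminosity back to the outburst peak, assuming an
# unchanged bolometric correction (supported by the flat W1-W2 colour evolution) and
# the ~0.9 mag fading measured in the WISE bands since peak (Section 3.3).
L_now = 110.0                      # L_sun, from the post-outburst SED at 4 kpc
delta_mag = 0.9                    # fading since peak in W1/W2
L_peak = L_now * 10 ** (0.4 * delta_mag)
print(f"L_peak ~ {L_peak:.0f} L_sun")  # ~250-260 L_sun
```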
While optical surveys have succeeded at identifying visible sources, the IR bands offer the only opportunity to recover outbursts in the earliest stages of star formation where the sources are still embedded and the rate of FU Ori outbursts are expected to be highest (Bae et al., 2014). The identification of WTP 10aaauow as a bright MIR transient that saturates in WISE data, but yet overlooked thus far highlights the importance of applying systematic transient recovery techniques on the NEOWISE dataset to uncover the full population of recent FU Ori outbursts in the Galactic plane. In particular, we note the importance of image subtraction techniques in dense Galactic stellar fields where sources may be easily confused at the spatial resolution of WISE, as well as in the published point source catalogs. ## Acknowledgements We thank Ryan Lau for providing access to the LCO time for follow-up of the source. We thank J. J. Hermes, A. Meisner, and R. Simcoe for valuable comments. We thank A. Meisner for assistance with processing the unvi se data. K. D. was supported by NASA through the NASA Hubble Fellowship grant #HST-HF2-51477.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministerio da Ciencia, Tecnologia e Inovacoles (MCTI/LNA) do Brasil, the US National Science Foundation's NOIRLab, the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). This paper includes data gathered with the 6.5-meter Magellan Telescopes located at Las Campanas Observatory, Chile. A part of the LCO follow-up observations is supported by the International Top Young-Fellow (ITYF) program of JAXA. We thank Mansi Kasliwal, Josh Bloom, and the SkyPortal team for assistance with using the data platform for this project. ## Data Availability The photometry and spectra will be made available as machine readable files as part of the online publication.
2307.14043
Unravelling the fluorescence kinetics of light-harvesting proteins with simulated measurements
The plant light-harvesting pigment-protein complex LHCII is the major antenna sub-unit of PSII and is generally (though not universally) accepted to play a role in photoprotective energy dissipation under high light conditions, a process known as Non-Photochemical Quenching (NPQ). The underlying mechanisms of energy trapping and dissipation within LHCII are still debated. Various proposed models differ considerably in their molecular and kinetic detail, but are often based on different interpretations of very similar transient absorption measurements of isolated complexes. Here we present a simulated measurement of the fluorescence decay kinetics of quenched LHCII aggregates to determine whether this relatively simple measurement can discriminate between different potential NPQ mechanisms. We simulate not just the underlying physics (excitation, energy migration, quenching and singlet-singlet annihilation) but also the signal detection and typical experimental data analysis. Comparing this to a selection of published fluorescence decay kinetics we find that: (1) Different proposed quenching mechanisms produce noticeably different fluorescence kinetics even at low (annihilation free) excitation density, though the degree of difference is dependent on pulse width. (2) Measured decay kinetics are consistent with most LHCII trimers becoming relatively slow excitation quenchers. A small sub-population of very fast quenchers produces kinetics which do not resemble any observed measurement. (3) It is necessary to consider at least two distinct quenching mechanisms in order to accurately reproduce experimental kinetics, supporting the idea that NPQ is not a simple binary switch.
Callum Gray, Lekshmi Kailas, Peter G. Adams, Christopher D. P. Duffy
2023-07-26T08:49:13Z
http://arxiv.org/abs/2307.14043v1
# Unravelling the fluorescence kinetics of light-harvesting proteins with simulated measurements ###### Abstract The plant light-harvesting pigment-protein complex LHCII is the major antenna sub-unit of PSII and is generally (though not universally) accepted to play a role in photoprotective energy dissipation under high light conditions, a process known as Non-Photochemical Quenching (NPQ). The underlying mechanisms of energy trapping and dissipation within LHCII are still debated. Various proposed models differ considerably in their molecular and kinetic detail, but are often based on different interpretations of very similar transient absorption measurements of isolated complexes. Here we present a simulated measurement of the fluorescence decay kinetics of quenched LHCII aggregates to determine whether this relatively simple measurement can discriminate between different potential NPQ mechanisms. We simulate not just the underlying physics (excitation, energy migration, quenching and singlet-singlet annihilation) but also the signal detection and typical experimental data analysis. Comparing this to a selection of published fluorescence decay kinetics we find that: (1) Different proposed quenching mechanisms produce noticeably different fluorescence kinetics even at low (annihilation free) excitation density, though the degree of difference is dependent on pulse width. (2) Measured decay kinetics are consistent with most LHCII trimers becoming relatively slow excitation quenchers. A small sub-population of very fast quenchers produces kinetics which do not resemble any observed measurement. (3) It is necessary to consider at least two distinct quenching mechanisms in order to accurately reproduce experimental kinetics, supporting the idea that NPQ is not a simple binary switch. ## 1 Introduction The light-harvesting antenna system of Photosystem II (PSII) in higher plants is a large, modular assembly of pigment-protein complexes, the major one being the cyclic heterotrimer LHCII [1]. Under low intensity illumination the PSII antenna operates with a quantum efficiency of 0.8 - 0.85 [2] though this drops significantly at higher intensities [3]. This reflects the non-photochemical quenching (NPQ) mechanism, a photoprotective response involving the (mostly) reversible down regulation of PSII antenna efficiency following a sudden increase in light levels [4; 5]. The purpose of this is to mitigate _photoinhibition_ of PSII [6], the slowly reversible (and metabolically costly) oxidative damage to the PSII reaction centres. While the exact mechanism of NPQ is still a matter of debate, a general (though not universal) consensus on the basic scheme has emerged in the last decade. The primary trigger is the formation of a steep \(\Delta\)pH across the thylakoid membrane [7]. This interacts with the PSII antenna, predominantly LHCII trimers but possibly also the minor (monomeric) antenna complexes and even the PSII core [8], resulting in both subtle conformational change in the individual LHCII complexes [9; 10; 11] and their mutual clustering/aggregation [12; 13; 14; 15; 16; 17]. This somehow alters the interactions between the pigment molecules within LHCII, creating molecular states (known as _quenchers_) that trap and dissipate excitations where before they promoted rapid excitation transfer to the reaction centres.
Also involved are the PSII subunit PsbS [18; 19] and the enzymatic de-epoxidation of the LHCII-associated carotenoid violaxanthin to zeaxanthin (the xanthophyll cycle [20]), though both are allosteric regulators of NPQ rather than prerequisite triggers [21]. The molecular details of the quencher states and the structural changes that form them are still unclear, and for a comprehensive discussion of the various proposed models the reader is directed to several reviews [4; 21; 22] and a collected work [23]. Briefly, most models assume that the quenchers involve the various carotenoid pigments bound by the PSII antenna, which are attractive candidates due to their inherently short excited state lifetimes, \(\tau_{\text{car}}\sim 10\)ps [24], compared to that of chlorophyll (ChI), \(\tau_{\text{ChI}}\sim 4n\)s. Some suggest quenching via excitation energy transfer (EET) from a low energy cluster of ChIs to the carotenoid lutein within LHCII [9; 10; 25; 26; 27], while others propose chlorophyll-carotenoid charge transfer (CT) [28; 29] or _excitonic_[30; 31; 32] states. Not all models involve carotenoids, instead proposing excitation quenching Chl-ChI CT states [33; 34]. These models all differ in the kinetics, location and density of quenchers, and experimentally determining which are involved in _in vivo_ NPQ (at least definitively) is difficult. Experimental measurements of quenching roughly fall into two categories: ultrafast time-resolved spectroscopy applied to isolated complexes, or lower-resolution fluorescence techniques applied to both isolated LHCII and more intact systems. The first includes transient absorption (TA) measurements of LHCII aggregates/oligomers in aqueous suspension [9; 33; 35], trimers immobilized in gel [36], and more recently two-dimensional (2D) electronic spectroscopy measurement on LHCII in lipid nanodiscs [26; 27; 37]. The measured kinetics have been fit to various kinetic models or qualitatively compared to quantum mechanical simulations of a single LHCII trimer [25; 29; 38; 39; 40], but they have yet to reveal a definitive mechanism (or mechanisms). Moreover, induction of quenching in these _in vitro_ LHCII systems generally requires low detergent conditions, meaning these quenching processes maybe different to those that occur _in vivo_. The second category, fluorescence techniques, includes Pulse Amplitude Modulation (PAM) fluorescence measurements which were originally used to probe the kinetics of NPQ formation and relaxation in intact plants and membranes [41] and various fluorescence lifetime measurements of leaves and chloroplasts [15; 42; 43], LHCII aggregates in solution [38, 44], in liposomes [17, 45, 46, 47] and on mica surfaces [48]. While fluorescence lifetime measurements have a lower time resolution than TA/2D, they do probe a critical part of the NPQ mechanism: the competition between the quenchers and long-range energy transfer between neighbouring LHCII trimers. Analysis of the fluorescence decay kinetics therefore involve course grained models of energy transfer and quenching/trapping within large LHCII assemblies [15, 49, 50]. In 2011, Valkunas _et al._ published such a model, addressing whether fluorescence decay kinetics could discriminate between different proposed quenching mechanisms [49]. They assumed that the aggregate was composed of quenched and unquenched LHCII that could exchange energy. 
They defined three characteristic timescales: the migration time, \(\tau_{\rm mig}\), which characterized energy transfer from the antenna bulk to the vicinity of the quencher, the _transfer-to-trap_ time, \(\tau_{\rm tt}\), for transfer to the quenched site from its nearest neighbour, and the quenching time, \(\tau_{\rm Q}\), the actual timescale of excitation dissipation. They then considered two limiting quenching scenarios, \(\tau_{\rm tt}>>\tau_{\rm Q}\), termed _slow/fast_ (S/F), and \(\tau_{\rm tt}<<\tau_{\rm Q}\) (_fast/slow_ or F/S). They showed that at low-excitation density (much less than one excitation per LHCII) the fluorescence decay kinetics of the two models were both essentially mono-exponential. However, at higher excitation densities, where quenching must compete with excitation annihilation, the S/F kinetics became bi-exponential, which is consistent with experimental observation. This was judged to support the idea of quenching via EET to a carotenoid over a mechanism such as Chl-Chl CT states, since EET is relatively slow (\(10-100\) ps) and dissipation by the Car excited is relatively fast (\(<10\) ps).. Here we revisit the idea of identifying the quenching mechanism from the fluorescence decay kinetics of LHCII aggregates, addressing several limitations in previous models. Firstly, previous models tend to simulate the _excitation_ decay kinetics of the aggregate following instantaneous excitation. Here we simulate the actual _fluorescence_ decay kinetics as measured by Time-Correlated Single Photon Counting (TCSPC), which is a wide-spread and accessible experimental technique used to (indirectly) measure excitation decay. In TCSPC, sample excitation is via a laser pulse of finite (\(\sim 10-100\) ps) temporal width which will overlap with fast processes such as annihilation. Measured traces are also noisy and analysed by fitting lifetime components which may obscure any fine kinetic differences between different quenching mechanisms. Secondly, thanks to various TA/2D measurements and atomistic models of LHCII, we have more detailed hypotheses about the mechanism of excitation trapping and dissipation. By combining more detailed models of the quenching mechanism with a realistic simulation of a typical measurement we extract a surprising amount of detail from simple fluorescence measurements. In a meta-study of several published fluorescence experiments we show that observations are consistent with most LHCII trimers switching to a relatively weak quenching state and that at least two, kinetically distinct, mechanisms are involved. ## 2 Theoretical Framework Our analysis of TCSPC kinetics of LHCII aggregates is based on a model with two couplled sub-layers. The first layer is the kinetic Monte-Carlo simulation of photon absorption, energy migration, exciton annihilation, non-radiative decay, fluorescence and quenching within a lattice of coarse-grained LHCII trimers. These are the processes that reflect the internal function of the LHCII assembly but, apart from fluorescence, they are not observable. The second layer deals with the observable, binning of fluorescence decays in the same manner as in a TCSPC experiment and performing multiple re-convolution fits of the resulting decay curves, effectively treating it as if it were experimental data. 
### Thermodynamic model of an isolated LHCII trimer The basic assumption of this work is that the internal excitation equilibration within an LHCII timer are much faster than energy transfer between trimers and deactivation processes like fluorescence, internal conversion and annihilation. In detailed models of the former [51], the coupling between Chl electronic transitions gives rise to a manifold of delocalized exciton states. Following photo-excitation the exciton rapidly (\(1-2\) ps) equilibrates across the lowest energy (predominantly Chl \(a\)) states (see the left side of Fig. 1**a.**). In our model we replace the \(\mathrm{N}_{e}\) exciton states with \(\mathrm{\tilde{N}}<\mathrm{N}_{e}\)_thermodynamically equivalent_ states (see right side of Fig. 1**a.**), \[\mathrm{\tilde{N}}=\left[\sum_{i=1}^{\mathrm{N}_{e}}\exp\left(-\frac{\epsilon_ {i}}{\mathrm{k}_{B}T}\right)\right] \tag{1}\] where \(\epsilon_{i}\) is energy of the \(i^{th}\) exciton state relative to the lowest one, which are taken from [51]. At \(T=300\) K \(\mathrm{\tilde{N}}\approx 15\) (5 states per monomer subunit). If \(n(t)\leqslant\mathrm{\tilde{N}}\) is the number of excitations within the trimer then the equation of motion for an ensemble of of isolated LHCII trimers is, \[\frac{d}{dt}n=\sigma J\left(\mathrm{\tilde{N}}-n\right)-\sigma_{\mathrm{SE}}J( t)n-\mathrm{k}_{\mathrm{decay}}n-\gamma n\left(n-1\right) \tag{2}\] where \(J(t)\) is the photon flux from the excitation pulse, and \(\sigma\) and \(\sigma_{\mathrm{SE}}\) are the absorption and stimulated emission (SE) cross-sections of the trimer respectively. \(\mathrm{k}_{decay}\) and \(\mathrm{gamma}\) are rate constants for excitation decay (fluorescence and internal conversion) and annihilation respectively. Generally, we assume \(\sigma\sim 2\times 10^{-15}\mathrm{cm}^{2}\), although this varies strongly with excitation wavelength. In Fig. 1**b.** we show the a TCSPC trace for LHCII in solution with an excitation wavelength of \(\lambda=485\) nm [48], where absorption by carotenoids will be significant. Here, rather than estimate a particular value for \(\sigma\), we define, \[n_{\mathrm{ex}}=\sigma\int_{-\infty}^{\infty}dtJ(t) \tag{3}\] as the average number of excitations per trimer one would have _if_ all of the energy delivered by the pulse arrived simultaneously (\(J(t)\rightarrow\delta(t)\)). We can therefore define excitation conditions in terms of \(n_{\mathrm{ex}}\) which is agnostic of any particular experimental conditions. As a rough guide, if the excitation wavelength is 485 nm and we assume \(\sigma\sim 1\times 10^{-15}\mathrm{cm}^{2}\) (see [52]) then \(n_{\mathrm{ex}}=0.1\), 1.0 and 5.0 would require laser fluences, \(\int_{-\infty}^{\infty}dtJ(t)\sim 0.04\), 0.4, and 2.0 mJ cm\({}^{-2}\) respectively. These laser fluences are typical of those used in spectroscopy experiments although annihilation is generally purposely avoided in measurements by keeping to the lower end of this range. \(\sigma_{\mathrm{SE}}\) is also difficult to define absolutely since complex excitation equilibration within the pulse duration. We assume that \(\sigma_{\mathrm{SE}}\sim\sigma\), meaning that SE becomes more likely than absorption when the trimer is half-filled with excitations. \(k_{\mathrm{decay}}=k_{\mathrm{IC}}+k_{\mathrm{F}}\) is composed of internal conversion and fluorescence but rather than specify both we realise that \(k_{\mathrm{IC}}\) effectively rescales \(k_{\mathrm{F}}\) and \(k_{\mathrm{decay}}\) is an effective fluorescence rate. In Fig. 
1**b**. we fit a model with \(n_{\mathrm{ex}}=1\) and \(k_{\mathrm{decay}}^{-1}=3.6\) ns to an experimental TCSPC trace for LHCII in solution, assuming a 200 ps (FWHM) Gaussian pulse (i.e. the Instrument Response Function, IRF). The fluorescence kinetics are purely mono-exponential. For \(n_{\mathrm{ex}}>1\) annihilation effects become apparent, though typical TCSPC experiments will tune laser fluence to avoid this. The annihilation time constant, \(\gamma^{-1}\), in LHCII aggregates has been measured via TA, with values \(16-24\) ps depending on the model used to analyse the TA kinetics [53]. We take 16 ps throughout this work. Fig. 1**c**. shows how annihilation introduces a second faster component to the TCSPC kinetics of an isolated LHCII trimer. Interestingly, it is a very small effect, even for very high fluences (\(n_{\mathrm{ex}}=5\)), which arises from two effects. Firstly, the annihilation is very fast and energy delivery occurs over a finite (and quite long) laser pulse. Secondly, at high fluences absorption competes with SE which means the excitation density never actually reaches \(n_{\mathrm{ex}}\). Although the second component is visually detectable, bi-exponential re-convolution fits (see below) are not very stable and so are omitted. The time-evolution of the system is simulated via a kinetic Monte Carlo algorithm. We define an array of 200 unconnected LHCII trimers, turn on the simulated laser pulse, and perform Monte Carlo sweeps every \(dt=1\) ps. In a given step each trimer is considered once on average. The occupancy of the trimers and the various rate constants determine the probability of different processes occurring and we consider each process on average once per time-step, \(dt\). All processes are assumed to be Poissonian and so the probability of process \(m\) occurring in the interval \(t\rightarrow(t+dt)\) is, \[p_{m}\left(dt\right)=\nu_{m}\left(dt\right)e^{-\nu_{m}\left(dt\right)dt} \tag{4}\] where \(\nu_{m}\left(dt\right)\) is the calculated rate. We do not consider the possibility of the same process occurring multiple times in the same trimer in the same time-step and choose \(dt=1\) ps to ensure the likelihood of that occurring is \(<0.1\%\). We compute the TCSPC trace by binning the times at which 'emissive decays' occur, that is decays via the \(k_{\mathrm{decay}}\) channel (though we separately bin all other 'non-emissive' decay processes for book-keeping), until \(J(t)=0\) and \(n=0\) for all trimers (i.e. nothing else can happen). A histogram bin width of \(\Delta t=50\) ps is used here, similar to typical experiments. We then reseed the Monte Carlo algorithm, trigger another pulse, and repeat the process until one of the emissive time bins exceeds a threshold count (here chosen to be 10,000 in line with common TCSPC data collection). Due to the definition of the probabilities in Eqn. (4), the traces automatically reproduce the Poissonian noise of TCSPC measurements. Lastly, the simulated traces are subject to a _reconvolution fit_ in which an exponential decay series, \[F(t)=H(t_{0})\sum_{m}A_{m}\exp\left(-\frac{t-t_{0}}{\tau_{m}}\right) \tag{5}\] is convolved with the IRF, which in our case is the laser pulse. \(H(t_{0})\) is simply the Heaviside function used to define the start time, \(t_{0}\). Mono-, bi-, and tri-exponential fits are generated and the best is selected on the basis of the variances on the fit parameters and whether the addition of a component visually improves the fit.
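A minimal sketch of such a re-convolution fit is shown below: the decay series of Eq. (5) is convolved with a 200 ps FWHM Gaussian IRF on the 50 ps binning grid and fitted to a synthetic, Poisson-noisy trace. This is an illustration of the procedure, not the analysis code used for the simulations in this work.

```python
# Sketch of a re-convolution fit (Eq. 5 convolved with a Gaussian IRF) against a
# synthetic, Poisson-noisy bi-exponential trace; illustrative only.
import numpy as np
from scipy.optimize import curve_fit

dt = 0.05                                    # ns (50 ps bins)
t = np.arange(0.0, 20.0, dt)                 # ns
irf = np.exp(-0.5 * ((t - 1.0) / (0.2 / 2.355)) ** 2)   # 200 ps FWHM IRF on the same grid
irf /= irf.sum()

def reconv_model(t, t0, a1, tau1, a2, tau2):
    decay = np.where(t >= t0,
                     a1 * np.exp(-(t - t0) / tau1) + a2 * np.exp(-(t - t0) / tau2),
                     0.0)
    return np.convolve(decay, irf)[: t.size]             # discrete convolution with the IRF

rng = np.random.default_rng(1)
data = rng.poisson(reconv_model(t, 1.0, 8000.0, 0.4, 2000.0, 3.6))  # synthetic "measurement"

popt, _ = curve_fit(reconv_model, t, data,
                    p0=[1.0, 5000.0, 0.5, 1000.0, 3.0], bounds=(1e-3, np.inf))
print("fitted lifetimes (ns):", popt[2], popt[4])         # ~0.4 and ~3.6 for this example
```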
We then calculate the amplitude-weighted, \[\tau_{\text{amp}}=\frac{\sum_{m}A_{m}\tau_{m}}{\sum_{m}A_{m}} \tag{6}\] and intensity-weighted, \[\tau_{\text{int}}=\frac{\sum_{m}A_{m}\tau_{m}^{2}}{\sum_{m}A_{m}\tau_{m}} \tag{7}\] lifetimes. Throughout we will consider \(\tau_{\text{amp}}\) as this is generally the quantity quoted in the literature. The reason for this is that \(\tau_{\text{amp}}\) is directly proportional to the fluorescence quantum yield (and inversely proportional to the overall extent of quenching) whereas \(\tau_{\text{int}}\) is not [54]. ### Extension of the model to aggregated LHCII (without quenchers) Fig. 2 a. shows a typical laminar aggregate of LHCII, deposited on mica, as visualized by AFM [48]. By simply packing circles of the approximate diameter of an LHCII timer (\(5\,\mathrm{nm}\)) into the area we can construct a very approximate map. It shows that aggregates are small (10-100 trimers) and heterogeneous in terms of coordination number, \(n_{c}\), of each trimer. Instead of considering specific aggregates we consider ideal lattices of 100 trimers with \(n_{c}=2\), 3, 4, and 6 (see Fig. 2**b.**). In the absence of any quenchers the coupled equations of motion of the aggregate become, \[\begin{split}\frac{d}{dt}n_{i}&=\sigma J\left( \tilde{N}-n_{i}\right)-\sigma_{SE}J(t)n_{i}-k_{decay}n_{i}-\gamma n_{i}\left(n_ {i}-1\right)\\ &-\sum_{j=1}^{n_{c}}\left(K_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}n_{i}- K_{n_{i},n_{j}}^{n_{i}+1,n_{j}-1}n_{j}\right)\end{split} \tag{8}\] where \(K_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}\) and \(K_{n_{i},n_{j}}^{n_{i}+1,n_{j}-1}\) are the rate constants for the transfer of an excitation from trimer \(i\) to neighbouring trimer \(j\) and vice versa. They depend on the initial occupancies of the two trimers due to an _entropic repulsion_ effect in which the transfer of an excitation to an already crowded trimer is thermodynamically penalized. We define, \[K_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}=\begin{cases}\tau_{\text{hop}}^{-1}\quad \text{for}\quad\Delta S_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}\leqslant 0\\ \tau_{\text{hop}}^{-1}\exp\left(\frac{1}{k_{B}}\Delta S_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}\right)\quad\text{for}\quad\Delta S_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}>0 \end{cases} \tag{9}\] where \(\tau_{\text{hop}}\) is a phenomenological hopping time. Values of \(5<\tau_{\text{t}\text{u}\text{o}p}<25\,\mathrm{ps}\) have been variously assumed in course-grained models of the PSII antenna [55] and we use \(\tau_{\text{hop}}=25\,\mathrm{ps}\), since variations in this range have almost no effect on the simulated TCSPC kinetics other than a tiny re-scaling of the mean lifetime. \(\Delta S_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}\) is the entropy change of for the transfer of the excitation, \[\Delta S_{n_{i},n_{j}}^{n_{i}-1,n_{j}+1}=k_{B}\text{ln}\left[\frac{n_{i}\left( \tilde{N}-n_{j}\right)}{\left(n_{j}+1\right)\left(\tilde{N}-(n_{i}-1)\right)}\right] \tag{10}\] where this reduces to \(K_{1,0}^{0,1}=K_{1,0}^{0,1}\equiv\tau_{\text{hop}}^{-1}\) in the single excitation limit. A derivation of Eqn. (10) is included in the Supplementary Material. Fig. 2**c.** shows the (lack of) dependence on the TCSPC kinetics on aggregate geometry for low (\(n_{\mathrm{ex}}=0.5\)) and very high (\(n_{\mathrm{ex}}=5\)) fluences. We see that the effect of annihilation is enhanced with respect to the isolated trimers, since the excitations can now hop around the aggregate and encounter each other. Fig. 
2**d.** shows a typical re-convolution fit of these kinetics (shown is \(n_{\mathrm{ex}}\), the onset of annihilation) including the weighted residuals. From \(0<n_{\mathrm{ex}}<2\) the trace is mono-exponential with \(\tau_{\mathrm{amp}}=3.6\) ns (by construction). For \(n_{\mathrm{ex}}\geqslant 2\) a second component, \(\tau_{2}=40-140\) ps, is needed and \(\tau_{\mathrm{amp}}=1-2\) ns. This is shown in Fig. S1 of the Supplementary material. An excellent quality of fit and low residuals as shown in this example were achieved for all simulations. ### Extension of the model to include quenching mechanisms Experimentally, aggregation of LHCII is always accompanied by excitation quenching (\(\tau_{\mathrm{amp}}<2\) ns) and it is assumed that this is due to a sub-population of trimers that are in some quenched state. Rather than address particular published quenching models we adopt a generalized kinetic scheme which is sketched in Fig. 3**a.**. We assume that there is a quencher with a very short lifetime, \(\Gamma\), which we assume is \(\sim 10\) ps throughout. This is roughly the excitation lifetime of the various carotenoids proposed as quenchers [9, 10] or the decay time of the Chl-carotenoid CT states in other models [40] and measurements on LHCII aggregates imply that the quenching state is very short-lived [50]. Next, we have a _pre-quencher_ which is a sub-set, \(\tilde{\mathrm{N}}^{\mathrm{pq}}\), of the antenna states that the quencher can directly access. Transfer of energy from the pre-quencher to the quencher is assumed to be irreversible, since the quencher functions as an excitation trap. Lastly, we have the _antenna pool_, the rest of the antenna states, \(\tilde{\mathrm{N}}^{\mathrm{a}}-\tilde{\mathrm{N}}-\tilde{\mathrm{N}}^{\mathrm{ pq}}\), left unchanged by the formation of the quencher but are coupled to it via the pre-quencher. Energy transfer between the quencher and pre-quencher state is assumed to be bi-directional, depending largely on the balance of \(\tilde{\mathrm{N}}^{\mathrm{a}}\) and \(\tilde{\mathrm{N}}^{\mathrm{pq}}\). 
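To illustrate the scheme, the sketch below integrates a linear three-compartment version of the antenna-pool/pre-quencher/quencher model for a single quenched trimer in the low-excitation limit (no pumping, inter-trimer hopping or annihilation), using the 'Slow Localised' rates listed in Table 1 below. It is a sketch of the kinetic scheme only, not the kinetic Monte Carlo implementation used in this work.

```python
# Illustrative integration of the antenna-pool / pre-quencher / quencher scheme for
# one quenched trimer in the low-excitation limit; "Slow Localised" rates assumed.
import numpy as np
from scipy.integrate import solve_ivp

k_decay = 1.0 / 3600.0   # ps^-1, intrinsic (fluorescence + IC) decay
k_a_pq = 1.0 / 5.0       # antenna pool -> pre-quencher
k_pq_a = 1.0 / 1.0       # pre-quencher -> antenna pool
k_pq_q = 1.0 / 100.0     # pre-quencher -> quencher (irreversible)
Gamma = 10.0             # ps, quencher lifetime

def rhs(t, y):
    n_a, n_pq, n_q = y
    return [-(k_decay + k_a_pq) * n_a + k_pq_a * n_pq,
            k_a_pq * n_a - (k_decay + k_pq_a + k_pq_q) * n_pq,
            k_pq_q * n_pq - n_q / Gamma]

t_eval = np.linspace(0.0, 2000.0, 2001)
sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 0.0, 0.0], t_eval=t_eval)
emissive = sol.y[0] + sol.y[1]   # decays with an effective lifetime of roughly 0.5 ns here
print(f"emissive population remaining after 2 ns: {emissive[-1]:.3f}")
```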
The equations of motion are, \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}n_{i}^{\mathrm{a}}&=\sigma J\left(\tilde{N}^{\mathrm{a}}-n_{i}^{\mathrm{a}}\right)-\sigma_{SE}J(t)n_{i}^{\mathrm{a}}-k_{\mathrm{decay}}n_{i}^{\mathrm{a}}-\gamma n_{i}^{\mathrm{a}}\left(n_{i}^{\mathrm{a}}-1\right)\\ &-\sum_{j\neq i}\left(K_{n_{i}^{\mathrm{a}},n_{j}^{\mathrm{a}}}^{n_{i}^{\mathrm{a}}-1,n_{j}^{\mathrm{a}}+1}n_{i}^{\mathrm{a}}-K_{n_{i}^{\mathrm{a}},n_{j}^{\mathrm{a}}}^{n_{i}^{\mathrm{a}}+1,n_{j}^{\mathrm{a}}-1}n_{j}^{\mathrm{a}}\right)-k_{\mathrm{a,pq}}n_{i}^{\mathrm{a}}+k_{\mathrm{pq,a}}n_{i}^{\mathrm{pq}}\end{split} \tag{11}\] \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}n_{i}^{\mathrm{pq}}&=\sigma J\left(\tilde{N}^{\mathrm{pq}}-n_{i}^{\mathrm{pq}}\right)-\sigma_{SE}J(t)n_{i}^{\mathrm{pq}}-k_{\mathrm{decay}}n_{i}^{\mathrm{pq}}-\gamma n_{i}^{\mathrm{pq}}\left(n_{i}^{\mathrm{pq}}-1\right)\\ &-\left(k_{\mathrm{pq,a}}+k_{\mathrm{pq,q}}\right)n_{i}^{\mathrm{pq}}+k_{\mathrm{a,pq}}n_{i}^{\mathrm{a}}\end{split} \tag{12}\] \[\frac{\mathrm{d}}{\mathrm{d}t}n_{i}^{\mathrm{q}}=k_{\mathrm{pq,q}}n_{i}^{\mathrm{pq}}-\frac{1}{\Gamma}n_{i}^{\mathrm{q}} \tag{13}\] where \(n_{i}^{\mathrm{a}}(t)\leqslant\tilde{N}^{\mathrm{a}}\), \(n_{i}^{\mathrm{pq}}(t)\leqslant\tilde{N}^{\mathrm{pq}}\), and \(n_{i}^{\mathrm{q}}(t)\leqslant 1\) are the excitation occupancies of the antenna pool, pre-quencher and quencher respectively, \(k_{\mathrm{a,pq}}\) is the rate constant for energy transfer _from_ the antenna pool _to_ the pre-quencher (\(k_{\mathrm{pq,a}}\) is the reverse), and \(k_{\mathrm{pq,q}}\) is the transfer rate _from_ the pre-quencher _to_ the quencher. There is enough flexibility in the model to qualitatively represent most proposed quenching mechanisms, though here we are focusing on those mediated by a carotenoid-like (short-lived, optically-forbidden) state. We then consider _fast_, _moderate_, and _slow_ quenchers (as compared to \(\tau_{\text{hop}}\)) with \(k_{\mathrm{pq,q}}^{-1}=1\), 25, and 100 ps respectively. In a fast quencher an excitation is more likely to be quenched than it is to hop to a neighbouring LHCII trimer, and vice versa in a slow quencher. The \(<1\) ps energy transfer from Chl \(a\) to lutein observed in the 2D electronic spectra of LHCII in native membranes [27], or the Chl-carotenoid excitonic quenching mechanism based on two-photon excitation spectroscopy [31], would be examples of fast quenchers. By contrast, quenching by incoherent Chl-lutein EET (\(\sim 10-50\) ps) [9, 56], or by the formation of Chl-lutein CT states [29, 57], would be moderate to slow mechanisms.
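For concreteness, the scheme of Eqs. (11)-(13) can be illustrated in the single-excitation limit, i.e. one excitation on one trimer with the pulse, stimulated-emission, annihilation and inter-trimer hopping terms dropped. The Python sketch below is only a minimal illustration (it is not the simulation code used for the results that follow); the rate constants are the \(k_{\mathrm{decay}}^{-1}=3.6\) ns, \(\Gamma=10\) ps and Table 1 values quoted in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(k_a_pq, k_pq_a, k_pq_q, k_decay=1.0 / 3600.0, k_quench=1.0 / 10.0):
    """Single-trimer, single-excitation limit of Eqs. (11)-(13); rates in 1/ps."""
    def rhs(t, y):
        n_a, n_pq, n_q = y
        dn_a = -(k_decay + k_a_pq) * n_a + k_pq_a * n_pq
        dn_pq = -(k_decay + k_pq_a + k_pq_q) * n_pq + k_a_pq * n_a
        dn_q = k_pq_q * n_pq - k_quench * n_q
        return [dn_a, dn_pq, dn_q]
    return rhs

t = np.linspace(0.0, 5000.0, 2001)  # 0-5 ns, in ps
# 'Slow Localised' vs 'Fast Delocalised' rate constants from Table 1 (as 1/ps)
for label, (k_a_pq, k_pq_a, k_pq_q) in {"SL": (1 / 5, 1 / 1, 1 / 100),
                                        "FD": (1 / 1, 1 / 1, 1 / 1)}.items():
    sol = solve_ivp(make_rhs(k_a_pq, k_pq_a, k_pq_q), (0.0, 5000.0),
                    [1.0, 0.0, 0.0], t_eval=t)
    emissive = sol.y[0] + sol.y[1]  # antenna-pool + pre-quencher population
    print(label, "emissive population remaining at 1 ns:",
          round(float(emissive[np.argmin(np.abs(t - 1000.0))]), 3))
```

This reproduces only the isolated-trimer limit; in the full model the same rate constants enter Eq. (11) alongside the hopping and annihilation terms.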
If we assume the antenna pool and pre-quencher are iso-energetic then \(k_{\mathrm{a,pq}}/k_{\mathrm{pq,a}}\) is dictated by the relative sizes of the two domains (essentially entropy), \[\frac{k_{\mathrm{a,pq}}}{k_{\mathrm{pq,a}}}=\exp\left(\frac{1}{k_{B}}\Delta S_{n^{\mathrm{a}},n^{\mathrm{pq}}}^{n^{\mathrm{a}}-1,n^{\mathrm{pq}}+1}\right) \tag{14}\] If we assume that the base rate of energy relaxation amongst the Chls in a trimer is \(\tilde{k}^{-1}\sim 1\) ps then we can define two limiting cases. An _entropically-limited_ or _local_ quencher, \[k_{\mathrm{a,pq}}=\frac{1}{\tilde{N}-1}\tilde{k} \tag{15}\] \[k_{\mathrm{pq,a}}=\tilde{k} \tag{16}\] is one in which the quencher is connected only to a single Chl state, which results in an entropic 'bottleneck' between the antenna Chls and the quencher. Models that propose quenching by the Chl \(a612\)-Lutein620 heterodimer [9, 27, 39] are examples of local quenchers. We also have _delocalised_ quenchers in which the antenna pool and the pre-quencher are comparable in size, \[k_{\mathrm{a,pq}}=k_{\mathrm{pq,a}}=\tilde{k} \tag{17}\] Mechanisms in which the carotenoid is coupled to several Chls [40] are delocalised. Table 1 summarises the different parameter schemes. To implement quenching in our TCSPC simulation we assume that quenching in the aggregate arises from a random fraction, \(\rho_{q}=N_{q}/N_{LHCII}\), of quenched trimers. This random distribution is reinitialized before every excitation pulse, ensuring we average over a large number of possible configurations. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Quenching Model & \(k_{\mathrm{a,pq}}^{-1}\) (ps) & \(k_{\mathrm{pq,a}}^{-1}\) (ps) & \(k_{\mathrm{pq,q}}^{-1}\) (ps) \\ \hline Slow Localised (SL) & 5 & 1 & 100 \\ Slow Delocalised (SD) & 1 & 1 & 100 \\ Medium Localised (ML) & 5 & 1 & 25 \\ Medium Delocalised (MD) & 1 & 1 & 25 \\ Fast Localised (FL) & 5 & 1 & 1 \\ Fast Delocalised (FD) & 1 & 1 & 1 \\ \hline \end{tabular} \end{table} Table 1: Relevant parameters for the various quenching models used in our LHCII aggregate simulations. In all cases \(\Gamma=10\) ps. ## 3 Results ### Simulated TCSPC kinetics of different quenching models Initially we considered quenching in annihilation-free conditions, choosing \(n_{\mathrm{ex}}=0.05\) for an aggregate of 200 LHCII trimers, meaning that each pulse delivers an average of 10 excitations. We then took each quenching model listed in Table 1 and adjusted the fraction of quenched LHCII, \(\rho_{\mathrm{q}}\), until we (approximately) obtained a representative quenched lifetime of \(\tau_{\mathrm{amp}}\sim 500\) ps (although \(200-1200\) ps is generally considered 'quenched' [44]). Getting an exact match between models was quite difficult, as \(\tau_{\mathrm{amp}}\) is as sensitive to fit quality as it is to the underlying excitation dynamics. Regardless, it is only the qualitative differences in the kinetics that we are interested in, as these are probably all that are observable experimentally. The traces are shown in Fig. 3**b**. (on a logarithmic scale) and the quencher densities and re-convolution fit parameters are listed in Table 2. For SL and SD quencher concentrations of \(\rho_{\mathrm{q}}=0.9\) and \(0.85\), respectively, are required to reduce the lifetime to \(\tau_{\mathrm{amp}}\sim 380-530\) ps. Since the kinetics are marginally bi-exponential (particularly SL) we list both mono- and bi-exponential fits in Table 2, with the traces and fits plotted in Figs. S3 and S4 of the Supplementary Material.
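The re-convolution fits referred to here follow the standard TCSPC recipe: a multi-exponential decay is convolved with the instrument response and least-squares fitted to the histogram, after which Eqs. (6) and (7) give the weighted lifetimes. The sketch below is a generic illustration of that recipe against synthetic data, assuming a Gaussian IRF with the pulse and bin widths quoted in the text; it is not the authors' fitting script.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 50.0                              # detection bin width in ps
t = np.arange(0.0, 20000.0, dt)        # 20 ns detection window

def irf(t_axis, fwhm=200.0, t0=1000.0):
    """Gaussian instrument response (normalised on the grid)."""
    sigma = fwhm / 2.355
    g = np.exp(-0.5 * ((t_axis - t0) / sigma) ** 2)
    return g / g.sum()

def reconvolved(t_axis, a1, tau1, a2, tau2):
    """Bi-exponential decay numerically convolved with the IRF."""
    decay = a1 * np.exp(-t_axis / tau1) + a2 * np.exp(-t_axis / tau2)
    return np.convolve(irf(t_axis), decay)[: t_axis.size]

# Synthetic histogram: a slow ~5 ns component plus a fast ~380 ps component
rng = np.random.default_rng(1)
counts = rng.poisson(reconvolved(t, 50.0, 5000.0, 1700.0, 380.0))

popt, _ = curve_fit(reconvolved, t, counts,
                    p0=(10.0, 4000.0, 1000.0, 300.0), bounds=(0.0, np.inf))
a1, tau1, a2, tau2 = popt
tau_amp = (a1 * tau1 + a2 * tau2) / (a1 + a2)                        # Eq. (6)
tau_int = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)    # Eq. (7)
print(f"tau_amp = {tau_amp:.0f} ps, tau_int = {tau_int:.0f} ps")
```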
The bi-exponential fits are visually closer and yield \(\tau_{\mathrm{amp}}\sim 530\) and \(521\) ps for SL and SD respectively. However, the slow component, \(\tau_{1}\sim 4.3-5.7\) ns, is significantly slower than the free decay of an unquenched LHCII trimer (\(k_{\mathrm{decay}}^{-1}=3.6\) ns), probably indicating that neither mono- nor bi-exponential fits are perfect. The fast component is \(\tau_{2}\sim 264-383\) ps, indicating that the 'slow' mechanisms can significantly quench an isolated LHCII trimer (as previously noted [39]). For ML and MD a quencher density of \(\rho_{\mathrm{q}}=0.80\) yields \(\tau_{\mathrm{amp}}\sim 568\) and \(584\) ps respectively, and kinetics that are obviously bi-exponential (Fig. 3**b**.). The slow component, \(\tau_{1}=3.7-3.9\) ns, is essentially just free decay of unquenched LHCII while the fast component, \(\tau_{2}=167-264\) ps, is comparable to the timescale of the IRF. Interestingly, despite the ML and MD quenchers being much faster than SL and SD (\(k_{\mathrm{pq,q}}^{-1}=25\) ps compared to \(100\) ps), a high density of quenchers is still needed to quench the whole aggregate. This is due to the fact that, since \(\tau_{\mathrm{hop}}=k_{\mathrm{pq,q}}^{-1}=25\) ps, an excitation still has a fairly high probability of evading a quencher by hopping to a neighbouring LHCII trimer. The behaviour of FL and FD is complicated by the fact that the kinetics of the quencher are significantly faster than the 200 ps pulse-width (IRF) initially used in our model, with the decay kinetics having a seemingly counter-intuitive dependence on \(\rho_{\mathrm{q}}\). In Fig. 4**a**. (red line) we show the dependence of \(\tau_{\mathrm{amp}}\) on \(\rho_{\mathrm{q}}\) in the range \(0.01\leqslant\rho_{\mathrm{q}}\leqslant 0.8\), for the FD mechanism (the behaviour of FL is identical). The blue line shows the ratio of total excitations quenched, \(\Sigma_{\mathrm{q}}\), to the total excitations that decay emissively, \(\Sigma_{\gamma}\). For very low quencher density (\(\rho_{\mathrm{q}}\sim 0.01\)) the aggregate is essentially unquenched and \(\tau_{\mathrm{amp}}\approx k_{\mathrm{decay}}^{-1}\). For \(0.02\leqslant\rho_{\mathrm{q}}\leqslant 0.2\), \(\tau_{\mathrm{amp}}\) drops to and plateaus at \(\sim 2.2\) ns, indicating a small degree of overall quenching. At \(\rho_{\mathrm{q}}\sim 0.2\), \(\tau_{\mathrm{amp}}\) discontinuously jumps to \(3.6\) ns and remains at this level up to \(\rho_{\mathrm{q}}\sim 0.8\). Despite the fact that the amount of excitation quenching increases significantly (blue line, Fig. 4**a**.), it is not 'detected' in the emission histogram. Finally, in the region \(0.8\leqslant\rho_{\mathrm{q}}\leqslant 1.0\) the kinetics become sharply multi-exponential and the re-convolution fits become very unreliable. The interpretation of this is actually straightforward: at low quencher densities there is simply very little quenching. Although FL and FD are labelled 'fast' they are not irreversible traps and therefore excitations have a reasonable chance of evading the quencher. The level of quenching increases sharply at some critical point around \(\rho_{\rm q}\sim 0.2\) though it is effectively masked by the wide IRF. Excitations are delivered to the aggregate over 200 ps but are typically quenched within the first detection bin (\(\rm t<50\) ps). All that is detected is the small fraction of excitations that evade the traps and undergo free decay.
Essentially, the kinetics are bi-exponential, with the short component completely masked by the IRF. At \(\rho_{\rm q}=1.0\), all LHCII trimers are equivalent and deeply quenched and the TCSPC signal is simply the IRF. In the range \(0.8\leqslant\rho_{\rm q}<1.0\) there is a sharp transition between these two regimes. The only way to clearly resolve the FL and FD mechanisms is to reduce the pulse width to \(\rm FWHM=50\) ps and the bin width to \(\rm\Delta t=10\) ps (reducing the bin width alone does not significantly alter the kinetics, as shown in Fig. S2 of the Supplementary Material). This higher resolution is still experimentally feasible but requires high performance instrumentation. At a quencher density of \(\rho_{\rm q}\sim 0.8\) the FL and FD mechanisms result in sharply bi-exponential TCSPC kinetics with \(\tau_{\rm amp}\sim 652\) and 538 ps respectively. The slow component is, as with ML and MD, \(\tau_{1}\sim\rm k_{decay}^{-1}=3.6\) ns, while the fast component is \(\tau_{2}\sim 34\) and 25 ps. The fits are shown in Fig. S6 of the Supplementary Material and it is apparent that the longer lifetime of the FL model (\(\tau_{\rm amp}\sim 652\) ps) is the result of poor fit quality. Lastly, to confirm that we were truly comparing comparable quenching regimes across the different models, we compared \(\Sigma_{\rm q}\) at \(\tau_{\rm amp}\sim 500\) ps (see Fig. 4**c.**). In all cases there is significant quenching, with \(0.84\leqslant\Sigma_{\rm q}\leqslant 0.89\) for all models. This shows that while all mechanisms have the same functional consequence, quenching 80-90% of the energy absorbed by the aggregate, the experimental signatures are very different. ### Different quenching models at high excitation density Using the same quencher densities, \(\rho_{\rm q}\), as in Table 2, the excitation density was increased to \(n_{\rm ex}=5\) to introduce annihilation (traces shown in Fig. 3**c.**). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{\(n_{\rm ex}=0.05\)} \\ \hline Model & \(\rho_{\rm q}\) & \(\tau_{\rm amp}\) (ps) & \(A_{1}\) & \(\tau_{1}\) (ps) & \(A_{2}\) & \(\tau_{2}\) (ps) \\ \hline \(\rm SL_{mono}\) & 0.90 & \(434\pm 5\) & \(1.683\pm 0.006\) & \(434\pm 5\) & — & — \\ \(\rm SL_{bi}\) & 0.90 & \(530\pm 35\) & \(0.049\pm 0.002\) & \(5750\pm 1132\) & \(1.729\pm 0.005\) & \(383\pm 5\) \\ \(\rm SD_{mono}\) & 0.85 & \(386\pm 11\) & \(1.765\pm 0.016\) & \(386\pm 11\) & — & — \\ \(\rm SD_{bi}\) & 0.85 & \(521\pm 16\) & \(0.137\pm 0.000\) & \(4298\pm 219\) & \(2.008\pm 0.007\) & \(264\pm 3\) \\ ML & 0.80 & \(568\pm 18\) & \(0.284\pm 0.004\) & \(3932\pm 135\) & \(2.382\pm 0.024\) & \(167\pm 4\) \\ MD & 0.80 & \(584\pm 11\) & \(0.403\pm 0.003\) & \(3774\pm 60\) & \(2.806\pm 0.032\) & \(126\pm 3\) \\ FL & 0.80 & \(652\pm 4\) & \(0.436\pm 0.001\) & \(3645\pm 14\) & \(2.113\pm 0.012\) & \(34\pm 4\) \\ FD & 0.80 & \(538\pm 3\) & \(0.470\pm 0.001\) & \(3618\pm 10\) & \(2.824\pm 0.018\) & \(25\pm 3\) \\ \hline \end{tabular} \end{table} Table 2: Fit parameters for each quenching model, assuming \(n_{\rm ex}=0.05\) and tuning \(\rho_{\rm q}\) to achieve an amplitude-weighted lifetime, \(\tau_{\rm amp}\), of roughly 500 ps. We show SL and SD parameters for both mono- and bi-exponential fits and note that \(\tau_{\rm amp}\sim 650\) ps for the FL model is due to a poor fit rather than any significant differences in the trace itself.
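As a quick check that the fit components and the quoted \(\tau_{\rm amp}\) values in Table 2 are mutually consistent, Eq. (6) can be evaluated directly on any row; the \(\rm SL_{bi}\) entry is used below.

```python
# Amplitude-weighted lifetime, Eq. (6), from the SL_bi components of Table 2
A = [0.049, 1.729]      # amplitudes A_1, A_2
tau = [5750.0, 383.0]   # lifetimes tau_1, tau_2 in ps
tau_amp = sum(a * t for a, t in zip(A, tau)) / sum(A)
print(round(tau_amp))   # ~531 ps, matching the tabulated 530 +/- 35 ps
```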
Visually, the SL and SD models do not appear to have changed, the ML and MD traces seem to decay a little faster, and the FL and FD traces are noticeably altered. This is confirmed by the re-convolution fits, for which the parameters are listed in Table 3 and the plots shown in Figs. S7 - S10 in the Supplementary Material. The SL and SD mechanisms remain essentially unchanged by the presence of annihilation, though there is some slight difference for SD. The reason for this is the large difference in timescales between SL/SD quenching and annihilation, coupled with the fairly wide excitation pulse. Annihilation is fast and, with relatively slow excitation delivery, is largely over by the end of the pulse. SD/SL quenching occurs later and so the two processes do not compete in a way that is detectable in the signal. ML/MD and particularly FL/FD do compete with annihilation and so their kinetics are visibly dependent on excitation density. This can also be seen in Fig. 4**c**., which shows the decrease in \(\Sigma_{\mathrm{q}}\) due to annihilation; the decrease is most significant for the SL and SD quenchers. ### Measured fluorescence kinetics of LHCII aggregates TCSPC measurements on quenched LHCII aggregates have been reported in a variety of experimental conditions. Here we compare (annihilation-free) traces from LHCII crystals [44], LHCII trimers in gel [58], in proteoliposomes [45], aggregates in solution [43], and laminar aggregates on mica [48]. The published traces were digitised using the WebPlotDigitizer service [59] and are plotted (normalized) in Fig. 5**a**. Visually, they seem to resemble our SD and SL model traces, lacking the sharply and obviously bi-exponential kinetics of the faster quenching models. Still, the kinetics are clearly heterogeneous and they are generally reported with a bi-exponential fit consisting of a 'slow' (\(\sim 2\) ns) and a 'fast' (\(\sim 200-600\) ps) component. The fast component matches the \(\tau_{2}\sim 300-400\) ps component of our SL and SD models, which reflects migration to and then trapping by a very large distribution of slow quenchers. In all of our models the slow component, \(\tau_{1}\sim 3-5\) ns, essentially reflects the free decay of excitations in unquenched trimers. Using our simulations we attempted to fit some of these experimental traces. We focused on LHCII in proteoliposomes [45] and LHCII aggregates on a mica surface [48] since these are quasi-2D arrays that should resemble our model. LHCII crystals and aggregates in solution, despite being highly non-native structures, produce very similar TCSPC kinetics to LHCII in proteoliposomes and on mica respectively (but with a narrower IRF).
LHCII in gel is equally non-native, with gel-immobilization assumed to induce quenching in the absence of aggregation. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{\(n_{\mathrm{ex}}=5.0\)} \\ \hline Model & \(\rho_{\mathrm{q}}\) & \(\tau_{\mathrm{amp}}\) (ps) & \(A_{1}\) & \(\tau_{1}\) (ps) & \(A_{2}\) & \(\tau_{2}\) (ps) \\ \hline \(\rm SL_{mono}\) & 0.90 & \(433\pm 5\) & \(1.664\pm 0.008\) & \(433\pm 5\) & — & — \\ \(\rm SL_{bi}\) & 0.90 & \(524\pm 36\) & \(0.047\pm 0.003\) & \(5723\pm 1186\) & \(1.714\pm 0.009\) & \(381\pm 5\) \\ \(\rm SD_{mono}\) & 0.85 & \(377\pm 8\) & \(1.759\pm 0.008\) & \(377\pm 8\) & — & — \\ \(\rm SD_{bi}\) & 0.85 & \(516\pm 36\) & \(0.089\pm 0.004\) & \(5186\pm 702\) & \(1.900\pm 0.012\) & \(297\pm 6\) \\ ML & 0.80 & \(449\pm 31\) & \(0.143\pm 0.004\) & \(4699\pm 474\) & \(2.410\pm 0.022\) & \(197\pm 6\) \\ MD & 0.80 & \(428\pm 28\) & \(0.169\pm 0.005\) & \(4481\pm 402\) & \(2.674\pm 0.028\) & \(172\pm 6\) \\ FL & 0.80 & \(192\pm 9\) & \(0.086\pm 0.002\) & \(4275\pm 232\) & \(2.407\pm 0.012\) & \(46\pm 1\) \\ FD & 0.80 & \(193\pm 8\) & \(0.095\pm 0.002\) & \(4196\pm 211\) & \(2.520\pm 0.014\) & \(43\pm 1\) \\ \hline \end{tabular} \end{table} Table 3: Fit parameters for the same models (and model parameters) but for \(n_{\mathrm{ex}}=5.0\). The fits are plotted in Fig. 5**b**. Both experimental traces could be fit to an SL quenching model, assuming \(\rho_{q}=0.82\) and \(0.97\) respectively. The dotted lines show the best fits we could obtain with the constraint that \(k_{decay}^{-1}=3.6\) ns (taken from fitting LHCII in detergent in Fig. 1**b.**), and while the short-time kinetics are fit very well the longer times are not. The closest visual fits (solid lines in Fig. 5**b.**) are obtained by setting \(k_{decay}^{-1}=2.2\) ns, implying that, in addition to a large sub-population of slow, localized quenchers, there is some global reduction of the Chl excited state lifetime. A schematic diagram illustrating this _mixed quenching_ scenario is shown in Fig. 5**c.** ## 4 Discussion The essential aim of this work was to determine whether one could identify the nature of the NPQ quencher from a relatively 'simple' TCSPC measurement on an LHCII aggregate. It had been previously suggested that this may be the case if one considered excitation densities high enough to induce singlet-singlet annihilation [49]. We sought to extend this work in two ways. Firstly, we aimed to model, as closely as possible, the actual experiment, including finite instrument responses, limited temporal resolution and reconvolution fitting of the resulting signal, all factors that could completely mask any fine kinetic detail of the quenching mechanism. Secondly, various works combining quantum chemical modelling of isolated LHCII trimers with ultra-fast optical spectroscopies have provided us with a selection of plausible (if not definitive) quenching mechanisms, meaning we can focus our model on a narrower parameter space than previously considered. ### The TCSPC kinetics of different quenching mechanisms In previous work it was argued that two limiting models of quenching, fast trapping and slow quenching (F/S) and vice-versa (S/F), have identical fluorescence kinetics in the absence of annihilation but different dependencies on increasing excitation density [49]. While it is difficult to compare directly due to model differences, our results are rather different. Firstly, we see stark differences in the simulated signals between different quenching scenarios, even at very low excitation density.
Our SL and SD models, which are closest to the previous S/F model, result in fluorescence kinetics that are almost mono-exponential, apart from a small-amplitude, long-time tail. Conversely, the faster quenching mechanisms are sharply bi-exponential. For faster quenching models the difference in the lifetimes of quenched and unquenched LHCII trimers is larger and fewer LHCII trimers need to be quenched to obtain the same overall lifetime, \(\tau_{amp}\), for the aggregate. When we introduced annihilation the TCSPC kinetics of the SL and SD models barely changed, while those of the faster models showed a significant drop in \(\tau_{amp}\). This is contrary to a previous theoretical study in which slow-trapping (S/F) quenching models were suggested to be far more sensitive to excitation density than fast-trapping (F/S) models [49]. The rationale was that fast-trapping quenchers trap excitons before they have a chance to annihilate while slow-trapping quenchers don't. The reason we see the opposite in our more realistic simulations is as follows: * In a TCSPC measurement excitations are delivered over a time period defined by the pulse width, typically \(\sim 50-200\) ps. Since annihilation occurs on a timescale of \(\gamma^{-1}\sim 20\) ps, the majority of annihilation occurs inside the pulse. Add to this the contribution of stimulated emission and it is quite difficult to accumulate a large _residual_ exciton population outside of the pulse (remember \(n_{ex}\) is a crude estimate of excitation density). The majority of the annihilation is effectively masked by the IRF, unless more expensive/advanced lasers with ultra-narrow temporal width are used. * The remaining excitons can still diffuse about the aggregate but annihilation is hindered by their mutual _entropic repulsion_, which means they have a tendency to avoid occupying the same trimer. This is why annihilation kinetics have little dependence on aggregate topology (Fig. 2**c**.). * For SL and SD almost every trimer is a quencher site, meaning quenching is by far the most likely dissipation pathway regardless, up to a point, of the residual excitation density. * For ML and MD a slightly smaller population of quenchers is required to give the same overall level of quenching, meaning the likelihood of two residual excitons meeting each other and annihilating increases slightly. There is therefore a small dependence of \(\tau_{amp}\) on excitation density. * The very large decrease in lifetime for the FL and FD models at high excitation density is primarily the result of the narrower laser pulse (\(\sim 50\) ps) needed to resolve the quenching. Excitations are delivered over a shorter timescale, meaning that a higher transient population is achieved and more excitons are present after the pulse. This is also the reason annihilation is so obvious in TA measurements, which use sub-picosecond excitation pulses. This implies that annihilation may not be very useful in discriminating between different quenching mechanisms in real measurements. Different mechanisms should have different fluorescence kinetics even at low excitation densities and the annihilation kinetics are more sensitive to the excitation pulse width than to the quenching mechanism. ### The identity of the quencher in NPQ Surprisingly, TCSPC measurements, when analysed alongside our simulated measurements, may yield significant information about the mechanism responsible for quenching in various LHCII assemblies (Fig. 5**a**. and **b**.).
For LHCII aggregates on mica surfaces [48] and in proteoliposomes [45], the measured fluorescence kinetics are consistent with a very large sub-population (\(\rho_{q}\sim 80-90\%\) of trimers) of 'slow quenchers'. That is, some quenching configuration in which energy is delivered slowly (\(k_{\mathrm{pq,q}}^{-1}\sim 100\) ps) but irreversibly to a trap state with a short lifetime (\(\Gamma\sim 10\) ps). Interestingly, the only difference between the two appears to be the quencher density, \(\rho_{q}\). This seems in line with very recent time-resolved fluorescence measurements on LHCII in model membranes, which showed that \(\rho_{q}\) depends on the lipid-LHCII ratio [47]. However, whether this slow NPQ trap state is coupled only to a particular Chl (SL) or accessible from many (SD) is probably not resolvable with TCSPC, since the fluorescence kinetics for each scenario are _qualitatively_ identical. Of course we did not explore the entire parameter space of our model. For example, we did not consider a quencher with a fast trapping time (\(k_{\mathrm{pq,q}}^{-1}\sim 1\) ps) and a slow quenching time (\(\Gamma>100\) ps), mainly because this is not suggested by the majority of plausible models. Most models assume the quencher is a carotenoid [9, 27, 39, 40] or some short-lived Chl-carotenoid excitonic [31] or CT state [28, 29]. Models aside, it was observed in early TA measurements of LHCII aggregates [9] that the quencher did not accumulate a detectable population, implying it was both short-lived and slowly-populated (hence the initial hypothesis of a carotenoid as a quencher). Our models do imply that a fast-trapping (\(k_{\mathrm{pq,q}}^{-1}<10\) ps), fast-quenching (\(\Gamma\sim 10\) ps) mechanism would yield very sharply bi-exponential kinetics, something that doesn't appear to be reported despite a wide range of experimental conditions. Of course, one could argue that this behaviour could be masked by a combination of a wide excitation pulse and low detection resolution, but this seems unlikely given the number of independent measurements in vastly different conditions. However, two recent measurements have suggested that some form of ultra-fast transfer to a carotenoid or carotenoid-like state is involved in quenching. TA measurements on quenched LHCII immobilized in gel and in low detergent conditions showed the instantaneous (within resolution) appearance of a carotenoid-like signal upon excitation of Chl \(a\) or Chl \(b\) [36]. Similarly, 2D measurements of LHCII in lipid nano-disks showed \(<500\) fs Chl-carotenoid EET [27]. While one could argue that the gel immobilization represents a highly non-native set of conditions, it is harder to argue this for lipid nano-disks. The most interesting revelation is that at least two quenching mechanisms are needed to reproduce the observed fluorescence kinetics. The reduction of _both_ lifetime components has always been a feature of the measured fluorescence kinetics of quenching but it was never clear how this related to the actual internal processes of exciton migration, trapping and decay. The 'second' quenching pathway appears to be rather slow (\(\sim 2\) ns) and uniformly distributed within the aggregate. We modelled this as an _ad hoc_ reduction in \(k_{\mathrm{decay}}^{-1}\), but we are not necessarily proposing that some process radically alters the fundamental excited state lifetime of Chl.
We are not excluding this possibility either, given that it is well known that non-planar distortions to tetrapyrroles can significantly shorten their fluorescence lifetimes (though generally by enhancing triplet formation) [60, 61]. Either way, this shortening of the LHCII lifetime from \(\sim 4\) ns to \(\sim 2.2\) ns is also observed in the thylakoid membrane in the absence of quenching. It was initially thought to be due to radical pair recombination in the PSII reaction centre [62, 63]. However, the same lifetime reduction was observed in plants depleted of PSII due to being grown on the PSII repair inhibitor lincomycin, implying that the \(\sim 2.2\) ns lifetime comes from some decay process in the antenna [64]. Our previous energy relaxation simulation of LHCII suggested that while simple EET to the carotenoid lutein could not explain the level of quenching seen in NPQ, it may explain the 'background quenching' responsible for the \(\sim 2.2\) ns lifetime. ### Why focus on Time-Correlated Single Photon Counting? High resolution non-linear spectroscopies (TA, 2D, transient grating, etc.) are the state-of-the-art techniques for unravelling the ultra-fast and branching relaxation processes in isolated photosynthetic complexes, their utility vastly enhanced by an extremely well-developed theoretical framework [65, 66]. Still, they have yet to unambiguously reveal key functional aspects such as the underlying mechanism of NPQ. This is possibly due to a combination of a lack of fine structural information on the quenched and unquenched states, and the fact that these measurements generally have to be performed in highly non-native conditions. To understand the actual functional characteristics of NPQ one must consider its place within a larger network of complexes, the clusters of LHCII that form in the membrane and even the thylakoid grana as a whole. These length- and time-scales are better probed by techniques such as Fluorescence Lifetime Imaging (FLIM) which, when correlated with visualization techniques such as AFM, offer insight into the relationship between membrane organization and function [48]. FLIM is based on TCSPC and direct data analysis is generally limited to reconvolution fitting of the fluorescence decay kinetics. Models of the underlying processes have in the past been highly phenomenological or overlooked the technical limitations of the method. We have shown here that while factors such as pulse width and detection resolution can have a profound impact on the kinetics and the subsequent fits, this is not necessarily a problem, so long as they are carefully factored into any model as they have been here. In conclusion, we here present a first look at a new method for analysing the _measured_ fluorescence kinetics of large networks of photosynthetic complexes. The approach is general and easily extendable to include non-trivial structures and additional decay pathways such as photochemical quenching in the reaction centre. ## 5 Acknowledgements All authors acknowledge the support of BBSRC (joint grant BB/T000023/1). ## 6 Software details We use a mix of Fortran with OpenMPI for the TCSPC simulation and Python for the fitting and analysis. The pRNG used in the Fortran code is xoshiro1024 and the code itself is available at [https://github.com/QMUL-DuffyLab/aggregates](https://github.com/QMUL-DuffyLab/aggregates).
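As an orientation for readers unfamiliar with the technique, the toy loop below shows how a TCSPC histogram of the kind simulated above is assembled: each detected photon's arrival time is an excitation time drawn from the laser pulse profile plus a decay time, binned at the detector resolution. This is a deliberately minimal, hypothetical illustration (one photon per pulse, mono-exponential decay, no quenchers or annihilation) and is not taken from the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(0)
fwhm, tau, dt, n_pulses = 200.0, 3600.0, 50.0, 200_000  # all times in ps
sigma = fwhm / 2.355

# One detected photon per pulse: Gaussian excitation time + exponential decay
arrival = rng.normal(1000.0, sigma, n_pulses) + rng.exponential(tau, n_pulses)
hist, edges = np.histogram(arrival, bins=np.arange(0.0, 20000.0 + dt, dt))
print(hist[:5])  # counts in the first few 50 ps detection bins
```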
2302.00830
A note on connectedness of Blaschke products
Consider the space $\mathcal{F}$ of all inner functions on the unit open disk under the uniform topology, which is a metric topology induced by the $H^{\infty}$-norm. In the present paper, a class of Blaschke products, denoted by $\mathcal{H}_{SC}$, is introduced. We prove that for each $B\in\mathcal{H}_{SC}$, $B$ and $zB$ belong to the same path-connected component of $\mathcal{F}$. A key role is played by a method for selecting a fine subsequence of zeros. As a byproduct, we obtain that each Blaschke product in $\mathcal{H}_{SC}$ has an interpolating and one-component factor.
Yue Xin, Bingzhe Hou
2023-02-02T02:37:53Z
http://arxiv.org/abs/2302.00830v1
# A note on connectedness of Blaschke products ###### Abstract Consider the space \(\mathcal{F}\) of all inner functions on the unit open disk under the uniform topology, which is a metric topology induced by the \(H^{\infty}\)-norm. In the present paper, a class of Blaschke products, denoted by \(\mathcal{H}_{SC}\), is introduced. We prove that for each \(B\in\mathcal{H}_{SC}\), \(B\) and \(zB\) belong to the same path-connected component of \(\mathcal{F}\). It plays an important role of a method to select a fine subsequence of zeros. As a byproduct, we obtain that each Blaschke product in \(\mathcal{H}_{SC}\) has an interpolating and one-component factor. Blaschke products; path-connectedness; pseudo-hyperbolic distance; interpolating; one-component. Yue Xin and Bingzhe Hou Primary 30J05, 30J10; Secondary 54C35. ## 1 Introduction Let \(\mathbb{D}\) be the unit open disk in the complex plane \(\mathbb{C}\) and let \(\partial\mathbb{D}\) be the boundary of \(\mathbb{D}\), i.e., the unit circle. The pseudo-hyperbolic distance on the unit open disk \(\mathbb{D}\), denoted by \(\rho\), is given by \[\rho(z,w)=|\frac{z-w}{1-\overline{z}w}|,\quad\text{for any }z,w\in\mathbb{D}.\] Let \(H^{\infty}\) be the Banach algebra of bounded analytic functions on \(\mathbb{D}\) equipped with the norm \(\|f\|_{\infty}=\sup_{z\in\mathbb{D}}|f(z)|\). A bounded analytic function \(f\) on \(\mathbb{D}\) is called an inner function if it has unimodular radial limits almost everywhere on the boundary \(\partial\mathbb{D}\) of \(\mathbb{D}\). Furthermore, denote by \(\mathcal{F}\) the set of all inner functions. We are interested in the space \(\mathcal{F}\) under uniform topology, which is a metric topology induced by the \(H^{\infty}\)-norm. Notice that the uniform topology on \(\mathcal{F}\) is very complicated and interesting, see [6, 9, 10] for instance. A Blaschke product is an inner function of the form \[B(z)=\lambda z^{m}\prod_{n}\frac{|z_{n}|}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{ n}}z},\] where \(m\) is a nonnegative integer, \(\lambda\) is a complex number with \(|\lambda|=1\), and \(\{z_{n}\}\) is a sequence of points in \(\mathbb{D}\setminus\{0\}\) satisfying the Blaschke condition \(\sum_{n}(1-|z_{n}|)<\infty\). Moreover, if \(\lambda=1\), we say that \(B\) is normalized. If for every bounded sequence of complex numbers \(\{w_{n}\}_{n=1}^{\infty}\), there exists \(f\) in \(H^{\infty}\) satisfying \(f(z_{n})=w_{n}\) for every \(n\in\mathbb{N}\), then both the sequence \(\{z_{n}\}_{n=1}^{\infty}\) and the Blaschke product \(B(z)\) are called interpolating. Following from a celebrated result of Carleson [3], one can see that \(B(z)\) is an interpolating Blaschke product if and only if \(\{z_{n}\}\) is a uniformly separated sequence, i.e., \[\inf_{n\in\mathbb{N}}\prod_{k\neq n}|\frac{z_{k}-z_{n}}{1-\overline{z_{k}}z_{ n}}|>0.\] Moreover, if \[\lim_{n\to\infty}\prod_{k\neq n}|\frac{z_{k}-z_{n}}{1-\overline{z_{k}}z_{n}}| =1,\] both the sequence \(\{z_{n}\}_{n=1}^{\infty}\) and the Blaschke product \(B(z)\) are called thin. In addition, A Blaschke product is called Carleson-Newman if it is a product of finitely many interpolating Blaschke products. Interpolating Blaschke products and Carleson-Newman Blaschke products play an important role in the study of \(H^{\infty}\). As well-known, inner functions can be approximated uniformly by Blaschke products, and Carleson-Newman Blaschke products can be approximated uniformly by interpolating Blaschke products [8]. 
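To make these quantities concrete, the short script below (an illustration added here, not part of the paper) evaluates the pseudo-hyperbolic distance and the separation products \(\prod_{k\neq n}\rho(z_{k},z_{n})\) for a finite truncation of the radial sequence \(z_{n}=1-2^{-n}\), a standard example of a uniformly separated, hence interpolating, Blaschke sequence.

```python
import numpy as np

def rho(z, w):
    """Pseudo-hyperbolic distance |(z - w) / (1 - conj(z) w)|."""
    return abs((z - w) / (1 - np.conj(z) * w))

# Finite truncation of z_n = 1 - 2^{-n}
z = np.array([1.0 - 2.0 ** (-n) for n in range(1, 21)], dtype=complex)

# Blaschke condition and Carleson-type separation products over the truncation
blaschke_sum = float(np.sum(1.0 - np.abs(z)))
products = [np.prod([rho(zk, zn) for k, zk in enumerate(z) if k != n])
            for n, zn in enumerate(z)]
print("sum of (1 - |z_n|):", blaschke_sum)
print("min separation product over the truncation:", float(min(products)))
```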
However, there is still an open problem whether the set of all interpolating Blaschke products is dense in the inner function space \(\mathcal{F}\). Some related results have been obtained. For instance, Marshall [7] proved that finite linear combinations of Blaschke products are dense in \(H^{\infty}\); Nicolau and Suarez [11] characterized the connected components of the subset \(CN^{*}\) of \(H^{\infty}\) formed by the products \(bh\), where \(b\) is a Carleson-Newman Blaschke product and \(h\in H^{\infty}\) is an invertible function. In particular, a result of K. Tse [13] tells us that a sequence \(\{z_{n}\}_{n=1}^{\infty}\) of points contained in a Stolz domain \[\{z\in\mathbb{D}:|1-\overline{\xi}z|\leq C(1-|z|)\},\] where \(\xi\) is a constant with \(|\xi|=1\), is interpolating if and only if it is separated, i.e., \[\inf_{m\neq n}\rho(z_{m},z_{n})>0.\] Hence, if the zeros lie in a Stolz domain, we may say something more about the Blaschke products. For example, A. Reijonen [12] gave a sufficient condition for a Blaschke product with zeros in a Stolz domain to be a one-component inner function. An inner function \(u\) in \(H^{\infty}\) is said to be one-component if there is \(\eta\in(0,1)\) such that the level set \(\Omega_{u}(\eta):=\{z\in\mathbb{D}:|u(z)|<\eta\}\) is connected. For more details on one-component inner functions, we refer to [1, 4, 12]. In this paper, we focus on the path-connected components of the space \(\mathcal{F}\) under the uniform topology. Recall that the topology of the uniform convergence on the set \(\mathcal{F}\) is induced by the following metric \[d(f,g)=\|f-g\|_{\infty}=\sup_{z\in\mathbb{D}}|f(z)-g(z)|=\operatorname{ess\,sup}_{\theta\in\mathbb{R}}|f(e^{\mathbf{i}\theta})-g(e^{\mathbf{i}\theta})|.\] For any two inner functions \(f\) and \(g\), if they belong to the same path-connected component in the space \(\mathcal{F}\), we write \(f\sim g\). The path-connected components of inner functions have long been of great interest and are related to many important questions. D. Herrero [6] considered the path-connected components of the space \(\mathcal{F}\). He showed that a component of \(\mathcal{F}\) can contain nothing but Blaschke products with infinitely many zeros, exactly one (up to a constant factor) singular inner function or infinitely many pairwise coprime singular inner functions, which answered a problem of Douglas. Furthermore, V. Nestoridis studied the invariant and noninvariant connected components of the inner function space \(\mathcal{F}\). He proved that the inner functions \(d(z)=\exp\{(z+1)/(z-1)\}\) and \(zd\) belong to the same connected component [9], and gave a family of inner functions, denoted by \(H\), such that for every \(B\in H\), \(B\) and \(zB\) do not belong to the same component [10]. In particular, the family \(H\) contains only Blaschke products, and contains all thin Blaschke products. In addition, several authors have studied the connected components of inner functions in the context of model spaces and operator theory (see [2, 5] for instance). In the present paper, we aim to give a class of Blaschke products, denoted by \(\mathcal{H}_{SC}\), such that for each \(B\in\mathcal{H}_{SC}\), \(B\) and \(zB\) belong to the same connected component. Firstly, let us define a class of subsets of the unit open disk, named strip cones and denoted by \(SC(\xi,\theta,T_{1},T_{2})\). In this paper, we study the Blaschke products with zeros lying in a strip cone.
**Definition 1.1**.: Let \(\theta\in(0,\pi)\), \(\xi\in\partial\mathbb{D}\), \(T_{1}\) and \(T_{2}\) be two nonzero real numbers. Denote by \(J_{i}\) the arc on the circle \[|z-(1-T_{i}e^{\mathbf{i}\theta})\xi|=|T_{i}|\] in the unit open disk, for \(i=1,2\). Write \(\xi_{i}\) as the intersection point of \(J_{i}\) and \(\partial\mathbb{D}\) other than \(\xi\), \(i=1,2\). We define \(SC(\xi,\theta,T_{1},T_{2})\) to be the region bounded by \(J_{1}\), \(J_{2}\) and \(\widehat{\xi_{1}\xi_{2}}\), the arc on the unit circle \(\partial\mathbb{D}\) from \(\xi_{1}\) to \(\xi_{2}\) not containing \(\xi\), and call it a strip cone. If \(T_{1}=T_{2}\in\mathbb{R}\setminus\{0\}\), then \(SC(\xi,\theta,T_{1},T_{2})\) is just the arc \(J_{1}=J_{2}\). In particular, if \(T_{1}\) and \(T_{2}\) are infinity, then \(SC(\xi,\theta,T_{1},T_{2})\) is just the segment \(J_{1}=J_{2}=(-1,1)\). One can see some examples of strip cones in Figure 1. Next, we explain why we name it a strip cone. **Definition 1.2**.: Let \(\theta\in(-\frac{\pi}{2},\frac{\pi}{2})\), and \(L_{1}\) and \(L_{2}\) be two parallel straight lines with an angle of \(\theta\) to the real axis. Denote by \(SL(\theta,L_{1},L_{2})\) the strip region between \(L_{1}\) and \(L_{2}\) in the right half plane. Given any strip cone \(SC(\xi,\theta,T_{1},T_{2})\), consider the fractional linear transformation \(\varphi_{\xi}(z)=(\xi+z)/(\xi-z)\), where \(|\xi|=1\). It is easy to see that \(\varphi_{\xi}\) maps the unit open disk onto the right half plane, and maps the arcs \(J_{1}\) and \(J_{2}\) in Definition 1.1 to some parallel straight lines \(L_{1}\) and \(L_{2}\) with the angle of \(\theta\) to the imaginary axis in the right half plane. Then, \(\varphi_{\xi}\) is an analytic bijection from \(SC(\xi,\theta,T_{1},T_{2})\) to \(SL(\frac{\pi}{2}-\theta,L_{1},L_{2})\) (see Figure 2 for instance). This is the reason we call the subset \(SC(\xi,\theta,T_{1},T_{2})\) a strip cone. Without loss of generality, we may assume that \(\xi=1\), because there is no essential difference between the case \(\xi=1\) and general \(\xi\) with \(|\xi|=1\). Moreover, we always denote \(\varphi(z)=(1+z)/(1-z)\) throughout this paper. **Definition 1.3**.: Denote by \(\mathcal{H}_{SC}\) the family of all Blaschke products \(B\) satisfying the following conditions, 1. the zeros \(\{z_{n}\}_{n=1}^{\infty}\) of \(B\) lie in some strip cone \(SC(\xi,\theta,T_{1},T_{2})\); 2. \(|\xi-z_{n}|\) non-increasingly tends to \(0\); 3. there exists a positive number \(\delta<1\), such that \(\rho(z_{n},z_{n+1})\leq\delta\) for any \(n\in\mathbb{N}\). Now we show our main result as the following theorem. **Main Theorem** For any \(B\in\mathcal{H}_{SC}\), \(B\) and \(zB\) belong to the same path-connected component of the inner functions space \(\mathcal{F}\) under the uniform topology. In the next section, we will introduce a method to select a "fine" factor of a Blaschke product in \(\mathcal{H}_{SC}\), which plays an important role in the proof of the main theorem. As a byproduct, we obtain that each Blaschke product in \(\mathcal{H}_{SC}\) has an interpolating and one-component factor. Then, we will complete the proof of the main theorem in the last section. ## 2 Preliminaries First of all, let us start from the following simple result. **Lemma 2.1**.: _Let \(f=\varphi_{1}\cdot\varphi_{2}\) and \(g=\psi_{1}\cdot\psi_{2}\), where \(f,g,\varphi_{1},\varphi_{2},\psi_{1},\psi_{2}\in\mathcal{F}\). If \(\varphi_{1}\sim\psi_{1}\) and \(\varphi_{2}\sim\psi_{2}\), then \(f\sim g\).
In particular, if \(\varphi_{1}\sim z\varphi_{1}\), then \(f\sim zf\)._ To prove \(B\sim zB\), it suffices to prove \(\widetilde{B}\sim z\widetilde{B}\), if \(\widetilde{B}\) is a factor of \(B\). In this section, it will be shown that we can select a factor \(\widetilde{B}\) of \(B\) such that \(\widetilde{B}\) satisfies more conditions than \(B\). The main task is how to choose a subsequence of the zero sequence of \(B\). Recall that for a Blaschke product \(B\in\mathcal{H}_{SC}\) with zeros \(\{z_{n}\}_{n=1}^{\infty}\), there exists a positive number \(\delta<1\), such that \(\rho(z_{n},z_{n+1})\leq\delta\) for any \(n\in\mathbb{N}\). **Lemma 2.2**.: _Let \(a\), \(a^{\prime}\), \(b\), \(b^{\prime}\) be real numbers satisfying_ \[0\leq a\leq a^{\prime}<1\ \text{ and }\ 0\leq b\leq b^{\prime}<1.\] _Then,_ \[\frac{a+b}{1+ab}\leq\frac{a^{\prime}+b^{\prime}}{1+a^{\prime}b^{\prime}}\] Proof.: \[\frac{a^{\prime}+b^{\prime}}{1+a^{\prime}b^{\prime}}-\frac{a+b}{1+ab} =\frac{(a^{\prime}+b^{\prime}+a^{\prime}ab+abb^{\prime})-(a+b+aa^{\prime}b^{\prime}+a^{\prime}bb^{\prime})}{(1+ab)(1+a^{\prime}b^{\prime})}\] \[=\frac{(a^{\prime}-a)(1-bb^{\prime})+(b^{\prime}-b)(1-a^{\prime}a)}{(1+ab)(1+a^{\prime}b^{\prime})}\] \[\geq 0.\] **Lemma 2.3**.: _Let \(\{z_{n}\}_{n=1}^{\infty}\) be a sequence of complex numbers in the unit open disk, satisfying that_ 1. _for any_ \(m\in\mathbb{N}\)_,_ \(\rho(z_{m},z_{n})\to 1\) _as_ \(n\to\infty\)_;_ 2. _there exists a positive number_ \(0<\delta<1\)_, such that_ \(\rho(z_{n},z_{n+1})\leq\delta\) _for any_ \(n\in\mathbb{N}\)_._ _Then, for any \(0<\varepsilon<1\), we can choose a subsequence \(\{z_{n_{k}}\}_{k=1}^{\infty}\) of \(\{z_{n}\}_{n=1}^{\infty}\) such that_ \[0<\varepsilon\leq\rho(z_{n_{k}},z_{n_{k+1}})\leq\frac{\varepsilon+\delta}{1+\varepsilon\delta}<1.\] Proof.: Given any \(0<\varepsilon<1\). Put \(z_{n_{1}}=z_{1}\). Since \(\rho(z_{n_{1}},z_{i})\to 1\) as \(i\to\infty\), we can choose \[n_{2}=\min\{i;\ \rho(z_{n_{1}},z_{i})\geq\varepsilon\}.\] It is obvious that \(\rho(z_{n_{1}},z_{n_{2}})\geq\varepsilon\). With the same method, we choose \(n_{k+1}\) by \[n_{k+1}=\min\{i;\ i>n_{k},\ \rho(z_{n_{k}},z_{i})\geq\varepsilon\}.\] Then, \(\rho(z_{n_{k}},z_{n_{k+1}})\geq\varepsilon\). Moreover, for each \(k=1,2,\ldots\), \[0\leq\rho(z_{n_{k}},z_{n_{k+1}-1})<\varepsilon<1\ \ \mbox{and}\ \ 0\leq\rho(z_{n_{k+1}-1},z_{n_{k+1}})\leq\delta<1.\] By Lemma 2.2, we have \[\frac{\rho(z_{n_{k}},z_{n_{k+1}-1})+\rho(z_{n_{k+1}-1},z_{n_{k+1}})}{1+\rho(z_{n_{k}},z_{n_{k+1}-1})\rho(z_{n_{k+1}-1},z_{n_{k+1}})}\leq\frac{\varepsilon+\delta}{1+\varepsilon\delta}.\] Therefore, \[0<\varepsilon\leq\rho(z_{n_{k}},z_{n_{k+1}})\leq\frac{\rho(z_{n_{k}},z_{n_{k+1}-1})+\rho(z_{n_{k+1}-1},z_{n_{k+1}})}{1+\rho(z_{n_{k}},z_{n_{k+1}-1})\rho(z_{n_{k+1}-1},z_{n_{k+1}})}\leq\frac{\varepsilon+\delta}{1+\varepsilon\delta}<1.\] **Lemma 2.4**.: _Let \(\alpha\) and \(\beta\) be two complex numbers in the unit open disk with \(|1-\alpha|<|1-\beta|\). Denote \(\alpha=s_{1}+\mathbf{i}(1-s_{1})\cot\theta_{1}\) and \(\beta=s_{2}+\mathbf{i}(1-s_{2})\cot\theta_{2}\), where \(\theta_{1},\theta_{2}\in(0,\pi)\). For the pair of \(\alpha\) and \(\beta\), let the positive numbers \(\varepsilon\), \(\delta\), \(\theta_{0}\), \(C\), \(\tau\) and \(\eta\) satisfy the following conditions,_ 1. \(0<\varepsilon\leq\rho(\alpha,\beta)\leq\delta<1\)_;_ 2. \(\theta_{0}\in(0,\pi)\) _and_ \(C=|1-\mathbf{i}\cot\theta_{0}|\)_;_ 3. 
\(|\cot\theta_{i}-\cot\theta_{0}|<\tau<\sqrt{\frac{3C^{2}\varepsilon^{2}}{16C^{ 2}(1-\varepsilon^{2})+3\varepsilon^{2}}}\)_, for_ \(i=1,2\)_;_ 4. \(3\leq 4-2\eta\leq(1+s_{1}-(1-s_{1})\cot^{2}\theta_{1})(1+s_{2}-(1-s_{2})\cot ^{2}\theta_{2})\leq 4\)_._ _Then_ \[0<C_{1}\leq\big{|}\frac{1-\alpha}{1-\beta}\big{|}\leq C_{2}<1.\] _where_ \[C_{1} =\frac{C-\tau}{C+\tau}\left(\frac{C-\tau}{C+\tau}+\frac{2\delta ^{2}-2\sqrt{\delta^{4}+\delta^{2}(C^{2}-\tau^{2})(1-\delta^{2})}}{(C+\tau)^{ 2}(1-\delta^{2})}\right),\] \[C_{2} =\frac{C+\tau}{C-\tau}\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta) \varepsilon^{2}-\sqrt{(2-\eta)^{2}\varepsilon^{4}+2\varepsilon^{2}(2-\eta)( C^{2}-\tau^{2})(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})}\right).\] Proof.: For \(i=1,2\), it follows from condition (3) that \[|(1-\mathbf{i}\cot\theta_{i})-(1-\mathbf{i}\cot\theta_{0})|<\tau,\] and consequently, \[0<C-\tau<|1-\mathbf{i}\cot\theta_{i}|<C+\tau.\] Denote \[K=\frac{1-s_{1}}{1-s_{2}}.\] Notice that \[0<(1-s_{1})(C-\tau)\leq|1-\alpha|=(1-s_{1})|1-{\bf i}\cot\theta_{1}| \leq(1-s_{1})(C+\tau),\] \[0<(1-s_{2})(C-\tau)\leq|1-\beta|=(1-s_{2})|1-{\bf i}\cot\theta_{2} |\leq(1-s_{2})(C+\tau).\] Then \[\frac{K(C-\tau)}{C+\tau}\leq\big{|}\frac{1-\alpha}{1-\beta}\big{|}\leq\frac{K( C+\tau)}{C-\tau}. \tag{2.1}\] Since \[\rho(\alpha,\beta)^{2}=\frac{|\alpha-\beta|^{2}}{(1-\overline{\alpha}\beta)( 1-\overline{\beta}\alpha)}=\frac{1}{\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{| \alpha-\beta|^{2}}+1},\] it follows from condition (1) that \[0<\frac{1}{\delta^{2}}-1\leq\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha- \beta|^{2}}\leq\frac{1}{\varepsilon^{2}}-1. \tag{2.2}\] Consider \(\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\). We have \[\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\] \[= \frac{(1-(s_{1}^{2}+(1-s_{1})^{2}\cot^{2}\theta_{1}))\cdot(1-(s_{ 2}^{2}+(1-s_{2})^{2}\cot^{2}\theta_{2}))}{|(1-\beta)-(1-\alpha)|^{2}}\] \[= \frac{(1-s_{1})(1-s_{2})(1+s_{1}-(1-s_{1})\cot^{2}\theta_{1})(1+s _{2}-(1-s_{2})\cot^{2}\theta_{2}))}{|(1-s_{1})(1-{\bf i}\cot\theta_{1})-(1-s_{ 2})(1-{\bf i}\cot\theta_{2})|^{2}}.\] Denote \[W_{1}=1+s_{1}-(1-s_{1})\cot^{2}\theta_{1}\ \ \mbox{and}\ \ W_{2}=1+s_{2}-(1-s_{2}) \cot^{2}\theta_{2}.\] Then, \[3\leq 4-2\eta\leq W_{1}\cdot W_{2}\leq 4.\] Now we give the lower bound and upper bound of \(K\), respectively. Then, by inequality (2.1), we can complete the proof. 
**Lower bound of \(K\).** If \(K\leq\frac{C-\tau}{C+\tau}\), we have \[\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}= \frac{(1-s_{1})(1-s_{2})W_{1}W_{2}}{|(1-s_{1})(1-{\bf i}\cot\theta _{1})-(1-s_{2})(1-{\bf i}\cot\theta_{2})|^{2}}\] \[= \frac{KW_{1}W_{2}}{|K(1-{\bf i}\cot\theta_{1})-(1-{\bf i}\cot \theta_{2})|^{2}}\] \[\leq \frac{4K}{(|(1-{\bf i}\cot\theta_{2})|-|K(1-{\bf i}\cot\theta_{1} )|)^{2}}\] \[\leq \frac{4K}{((C-\tau)-K(C+\tau))^{2}}.\] It follows from inequality (2.2) that \[\frac{4K}{((C-\tau)-K(C+\tau))^{2}}\geq\frac{(1-|\alpha|^{2})(1-|\beta|^{2})} {|\alpha-\beta|^{2}}\geq\frac{1}{\delta^{2}}-1.\] Then, \[K^{2}-2\left(\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}}{(C+\tau)^{2}(1-\delta^{2})} \right)K+\frac{(C-\tau)^{2}}{(C+\tau)^{2}}\leq 0,\] Let \(a_{1}\) and \(a_{2}\) be the two roots of the above quadratic polynomial of \(K\), \[a_{1}=\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}-2\sqrt{\delta^{4}+\delta^{2}(C^{ 2}-\tau^{2})(1-\delta^{2})}}{(C+\tau)^{2}(1-\delta^{2})},\] \[a_{2}=\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}+2\sqrt{\delta^{4}+\delta^{2}(C^{ 2}-\tau^{2})(1-\delta^{2})}}{(C+\tau)^{2}(1-\delta^{2})}.\] One can see that \[0<a_{1}<\frac{C-\tau}{C+\tau}<a_{2}.\] Then, we always have \[K\geq a_{1}>0.\] Moreover, put \(C_{1}=\frac{C-\tau}{C+\tau}\cdot a_{1}\), \[\big{|}\frac{1-\alpha}{1-\beta}\big{|}\geq\frac{K(C-\tau)}{C+\tau}\geq C_{1}>0.\] **Upper bound of \(K\).** Notice that \[\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\] \[= \frac{KW_{1}W_{2}}{|K(1-\mathbf{i}\cot\theta_{1})-(1-\mathbf{i} \cot\theta_{2})|^{2}}\] \[= \frac{KW_{1}W_{2}}{\big{|}iK(\cot\theta_{0}-\cot\theta_{1})-i( \cot\theta_{0}-\cot\theta_{2})+(K-1)(1-\mathbf{i}\cot\theta_{0})\big{|}^{2}}\] \[\geq \frac{(4-2\eta)K}{\big{(}\big{|}K(\cot\theta_{0}-\cot\theta_{1}) \big{|}+\big{|}\cot\theta_{0}-\cot\theta_{2}\big{|}+\big{|}(K-1)C\big{|}\big{)} ^{2}}\] \[\geq \frac{(4-2\eta)K}{(|(K-1)|C+(K+1)\tau)^{2}}.\] Firstly, let us prove that \(K\) must be less than \(1\). 
If \(K\geq 1\), we have \[\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\geq\frac{(4-2\eta) K}{((1-K)C+(K+1)\tau)^{2}}.\] It follows from inequality (2.2) that \[\frac{(4-2\eta)K}{((K-1)C+(K+1)\tau)^{2}}\leq\frac{(1-|\alpha|^{2})(1-|\beta| ^{2})}{|\alpha-\beta|^{2}}\leq\frac{1}{\varepsilon^{2}}-1.\] Then, \[K^{2}-2\left(\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1- \varepsilon^{2})(C+\tau)^{2}}\right)K+\frac{(C-\tau)^{2}}{(C+\tau)^{2}}\geq 0,\] Let \(\lambda_{1}\) and \(\lambda_{2}\) be the two roots of the above quadratic polynomial of \(K\), \[\lambda_{1} =\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}-\sqrt{(2-\eta )^{2}\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^{2}-\tau^{2})(1-\varepsilon^{ 2})}}{(C+\tau)^{2}(1-\varepsilon^{2})},\] \[\lambda_{2} =\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}+\sqrt{(2- \eta)^{2}\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^{2}-\tau^{2})(1- \varepsilon^{2})}}{(C+\tau)^{2}(1-\varepsilon^{2})}.\] Since \[\left(\frac{C+\tau}{C-\tau}\right)^{2}-2\left(\frac{C-\tau}{C+\tau }+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C+\tau)^{2}}\right)\left( \frac{C+\tau}{C-\tau}\right)+\frac{(C-\tau)^{2}}{(C+\tau)^{2}}\] \[= \left(\frac{C+\tau}{C-\tau}-\frac{C-\tau}{C+\tau}\right)^{2}- \frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C+\tau)^{2}}\cdot\left( \frac{C+\tau}{C-\tau}\right)\] \[= \frac{16C^{2}\tau^{2}}{(C^{2}-\tau^{2})^{2}}-\frac{(4-2\eta) \varepsilon^{2}}{(1-\varepsilon^{2})(C^{2}-\tau^{2})}\] \[= \frac{16C^{2}}{(C^{2}-\tau^{2})^{2}}\left(\tau^{2}-\frac{3 \varepsilon^{2}(C^{2}-\tau^{2})}{16C^{2}(1-\varepsilon^{2})}\right)\] \[< 0,\] one can see that \[0<\lambda_{1}<\frac{C-\tau}{C+\tau}<1<\frac{C+\tau}{C-\tau}<\lambda_{2}.\] Then, we have \[K\geq\lambda_{2}>\frac{C+\tau}{C-\tau}>1.\] However, by inequality (2.1), \[\big{|}\frac{1-\alpha}{1-\beta}\big{|}\geq\frac{K(C-\tau)}{C+\tau}\geq\lambda _{2}\cdot\frac{C-\tau}{C+\tau}>1.\] It is a contradiction to \(|1-\alpha|<|1-\beta|\). 
We now know that \(K<1\). Then, \[\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\geq\frac{(4-2\eta)K}{((1-K)C+(K+1)\tau)^{2}}.\] It follows from inequality (2.2) that \[\frac{(4-2\eta)K}{((1-K)C+(K+1)\tau)^{2}}\leq\frac{(1-|\alpha|^{2})(1-|\beta|^{2})}{|\alpha-\beta|^{2}}\leq\frac{1}{\varepsilon^{2}}-1.\] Then, \[K^{2}-2\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\right)K+\frac{(C+\tau)^{2}}{(C-\tau)^{2}}\geq 0.\] Let \(A_{1}\) and \(A_{2}\) be the two roots of the above quadratic polynomial of \(K\), \[A_{1}=\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}-\sqrt{(2-\eta)^{2}\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^{2}-\tau^{2})(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})},\] \[A_{2}=\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}+\sqrt{(2-\eta)^{2}\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^{2}-\tau^{2})(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})}.\] Since \[\left(\frac{C-\tau}{C+\tau}\right)^{2}-2\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\right)\left(\frac{C-\tau}{C+\tau}\right)+\frac{(C+\tau)^{2}}{(C-\tau)^{2}}\] \[= \left(\frac{C-\tau}{C+\tau}-\frac{C+\tau}{C-\tau}\right)^{2}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\cdot\left(\frac{C-\tau}{C+\tau}\right)\] \[= \frac{16C^{2}\tau^{2}}{(C^{2}-\tau^{2})^{2}}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C^{2}-\tau^{2})}\] \[= \frac{16C^{2}}{(C^{2}-\tau^{2})^{2}}\left(\tau^{2}-\frac{3\varepsilon^{2}(C^{2}-\tau^{2})}{16C^{2}(1-\varepsilon^{2})}\right)\] \[< 0,\] one can see that \[0<A_{1}<\frac{C-\tau}{C+\tau}<1<A_{2}.\] Then, we always have \[K\leq A_{1}<\frac{C-\tau}{C+\tau}<1.\] Moreover, put \(C_{2}=\frac{C+\tau}{C-\tau}\cdot A_{1}\), \[|\frac{1-\alpha}{1-\beta}|\leq\frac{K(C+\tau)}{C-\tau}\leq C_{2}<1.\] **Lemma 2.5**.: _Let \(\{z_{n}\}_{n=1}^{\infty}\) be a sequence of complex numbers in a strip cone \(SC(\xi,\theta_{0},T_{1},T_{2})\), \(\theta_{0}\in(0,\pi)\), satisfying that_ 1. \(|1-z_{n}|\) _non-increasingly tends to_ \(0\)_, as_ \(n\to\infty\)_;_ 2. _there exist two positive numbers_ \(0<\varepsilon\leq\delta<1\)_, such that for any_ \(n\in\mathbb{N}\)_,_ \[0<\varepsilon\leq\rho(z_{n},z_{n+1})\leq\delta<1.\] _Then there exists a positive integer \(N\) and two positive constants \(C_{1}\) and \(C_{2}\), such that for any \(n\geq N\),_ \[0<C_{1}\leq\big{|}\frac{1-z_{n+1}}{1-z_{n}}\big{|}\leq C_{2}<1.\] _Furthermore, this implies \(\sum_{n=1}^{\infty}|1-z_{n}|<\infty\)._ Proof.: Without loss of generality, we may assume that \(\{z_{n}\}_{n=1}^{\infty}\) lies in a strip cone \(SC(1,\theta_{0},T_{1},T_{2})\) (see Figure 3). Denote \[z_{n}=x_{n}+\mathbf{i}(1-x_{n})\cot\theta_{n},\quad\text{ for }n=1,2,\ldots,\] and \[C=|1-\mathbf{i}\cot\theta_{0}|.\] Following from \(z_{n}\in SC(1,\theta_{0},T_{1},T_{2})\), we could also denote \[z_{n}=(1-t_{n}\mathrm{e}^{\mathbf{i}\theta_{0}})+t_{n}\mathrm{e}^{\mathbf{i}\zeta_{n}},\] where \(t_{n}\in[T_{1},T_{2}]\) if \(T_{1}T_{2}>0\), and \(t_{n}\in(-\infty,T_{1}]\cup[T_{2},+\infty)\) if \(T_{1}T_{2}<0\). Furthermore, \[\lim_{n\to\infty}(1-z_{n})=\lim_{n\to\infty}t_{n}(\mathrm{e}^{\mathbf{i}\theta_{0}}-\mathrm{e}^{\mathbf{i}\zeta_{n}})=0=\lim_{n\to\infty}(1-x_{n}).\] Notice that \(t_{n}\) is uniformly far away from \(0\) whenever \(T_{1}T_{2}>0\) or \(T_{1}T_{2}<0\). Then, \(\zeta_{n}\to\theta_{0}\), as \(n\to\infty\). 
Consequently, \[\lim_{n\to\infty}\cot\theta_{n}=\lim_{n\to\infty}\frac{\mathrm{Im}\,z_{n}}{1-\mathrm{Re}\,z_{n}}=\lim_{n\to\infty}\frac{t_{n}(\sin\zeta_{n}-\sin\theta_{0})}{t_{n}(\cos\theta_{0}-\cos\zeta_{n})}=\cot\theta_{0}.\] Hence, for any two positive numbers \[0<\eta\leq\frac{1}{2}\quad\text{and}\quad 0<\tau\leq\sqrt{\frac{3C^{2}\varepsilon^{2}}{16C^{2}(1-\varepsilon^{2})+3\varepsilon^{2}}},\] there exists \(N\in\mathbb{N}\) such that, for all \(n\geq N\), \[3\leq 4-2\eta\leq(1+x_{n}-(1-x_{n})\cot^{2}\theta_{n})(1+x_{n+1}-(1-x_{n+1})\cot^{2}\theta_{n+1})\leq 4\] and \[|\cot\theta_{n}-\cot\theta_{0}|<\tau.\] Therefore, by Lemma 2.4, there exist two positive constants \(C_{1}\) and \(C_{2}\), such that for any \(n\geq N\), \[0<C_{1}\leq\big{|}\frac{1-z_{n+1}}{1-z_{n}}\big{|}\leq C_{2}<1.\] Furthermore, this implies \[\sum_{n=N}^{\infty}|1-z_{n}|\leq\sum_{n=N}^{\infty}|1-z_{N}|C_{2}^{n-N}=\frac{|1-z_{N}|}{1-C_{2}}<\infty,\] and hence \(\sum_{n=1}^{\infty}|1-z_{n}|<\infty\). Now, we show that one can select a "fine" subsequence of zeros of each Blaschke product \(B\in\mathcal{H}_{SC}\). **Definition 2.6**.: A sequence \(\{z_{n}\}_{n=1}^{\infty}\) is said to be fine, if it satisfies the following conditions. 1. The sequence \(\{z_{n}\}_{n=1}^{\infty}\) lies in some strip cone \(SC(\xi,\theta,T_{1},T_{2})\). 2. \(|\xi-z_{n}|\) non-increasingly tends to \(0\), as \(n\to\infty\); 3. There exist two positive numbers \(0<\varepsilon\leq\delta<1\) such that for any \(n\in\mathbb{N}\), \[0<\varepsilon\leq\rho(z_{n},z_{n+1})\leq\delta.\] Moreover, there exists a positive integer \(N\) and two positive constants \(C_{1}\) and \(C_{2}\), such that for any \(n\geq N\), \[0<C_{1}\leq\big{|}\frac{\xi-z_{n+1}}{\xi-z_{n}}\big{|}\leq C_{2}<1.\] In particular, \(\sum\limits_{n=1}^{\infty}|\xi-z_{n}|<\infty\). 4. \(\mathrm{Re}\varphi_{\xi}(z_{n})\) monotonically tends to \(+\infty\), where \(\varphi_{\xi}(z)=(\xi+z)/(\xi-z)\). Moreover, there are two positive numbers \(\widetilde{C_{1}}\) and \(\widetilde{C_{2}}\) such that \[0<\widetilde{C_{1}}\leq\frac{\mathrm{Re}\varphi_{\xi}(z_{n})}{\mathrm{Re}\varphi_{\xi}(z_{n+1})}\leq\widetilde{C_{2}}<1.\] Furthermore, a Blaschke product is said to be fine if its zero sequence \(\{z_{n}\}_{n=1}^{\infty}\) is fine. Denote by \(\widetilde{\mathcal{H}}_{SC}\) the family of all fine Blaschke products. **Lemma 2.7**.: _Let \(B\) be a Blaschke product in \(\mathcal{H}_{SC}\) with zeros \(\{z_{n}\}_{n=1}^{\infty}\). Then \(B\) has a factor in \(\widetilde{\mathcal{H}}_{SC}\), i.e., the sequence \(\{z_{n}\}_{n=1}^{\infty}\) has a fine subsequence._ Proof.: Without loss of generality, assume \(\{z_{n}\}_{n=1}^{\infty}\) lies in some strip cone \(SC(1,\theta_{0},T_{1},T_{2})\). Recall that \(\{z_{n}\}_{n=1}^{\infty}\) satisfies the following conditions. 1. \(|1-z_{n}|\) non-increasingly tends to \(0\), as \(n\to\infty\). 2. There exists a positive number \(0<\delta<1\), such that \(\rho(z_{n},z_{n+1})\leq\delta\) for any \(n\in\mathbb{N}\). Denote by \(SL(\frac{\pi}{2}-\theta_{0},L_{1},L_{2})\) the image of the strip cone \(SC(1,\theta_{0},T_{1},T_{2})\) under \(\varphi\). 
We may write \[L_{1}:y-\tan(\frac{\pi}{2}-\theta_{0})\cdot x-c_{1}=0\quad\text{and}\quad L_{2 }:y-\tan(\frac{\pi}{2}-\theta_{0})\cdot x-c_{2}=0.\] Since \(|1-z_{n+1}|\leq|1-z_{n}|\) for each \(n\in\mathbb{N}\), we have \[|\varphi(z_{n+1})|= \big{|}\frac{1+z_{n+1}}{1-z_{n+1}}\big{|}\] \[= \big{|}1-\frac{2}{1-z_{n+1}}\big{|}\] \[\geq \big{|}\frac{2}{1-z_{n+1}}\big{|}-1\] \[\geq \big{|}\frac{2}{1-z_{n}}-1\big{|}-2\] \[= |\varphi(z_{n})|-2.\] By Lemma 2.3, there exists a subsequence \(\{z_{n_{k}}\}_{k=1}^{\infty}\) of \(\{z_{n}\}_{n=1}^{\infty}\) and positive numbers \(\varepsilon\) and \(\delta\), such that \[0<\varepsilon\leq\rho(z_{n_{k}},z_{n_{k+1}})\leq\delta<1.\] Consequently, by Lemma 2.5, there are positive numbers \(C_{1}\) and \(C_{2}\) such that \[0<C_{1}\leq\big{|}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}}\big{|}\leq C_{2}<1.\] Since \(z_{n_{k}}\to 1\) as \(k\to\infty\) and \(\varphi(z_{n_{k}})\in SL(\frac{\pi}{2}-\theta_{0},L_{1},L_{2})\), we have \[\lim_{k\to\infty}|\varphi(z_{n_{k}})|=\lim_{k\to\infty}\big{|}\frac{1+z_{n_{k} }}{1-z_{n_{k}}}\big{|}=+\infty,\] \[\lim_{k\to\infty}\frac{\mathrm{Re}\varphi(z_{n_{k}})}{|\varphi(z_{n_{k}})|}= \cos(\frac{\pi}{2}-\theta_{0})>0\] and \[\lim_{k\to\infty}\big{|}\frac{\varphi(z_{n_{k}})}{\varphi(z_{n_{k+1}})}\big{|} \bigg{/}\big{|}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}}\big{|}=\lim_{k\to\infty}\big{|} \frac{1+z_{n_{k}}}{1+z_{n_{k+1}}}\big{|}=1.\] Furthermore, \[\lim_{k\to\infty}\left(\frac{\mathrm{Re}\varphi(z_{n_{k}})}{\mathrm{Re} \varphi(z_{n_{k+1}})}\right)\bigg{/}\big{|}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}} \big{|}=1.\] Then, there exists a positive integer \(k_{0}\) such that for any \(k\geq k_{0}\), \[0<\frac{C_{1}}{2}\leq\frac{\mathrm{Re}\varphi(z_{n_{k}})}{\mathrm{Re} \varphi(z_{n_{k+1}})}\leq\frac{1+C_{2}}{2}<1.\] Therefore, the subsequence \(\{z_{n_{k}}\}_{k=k_{0}}^{\infty}\) is as required. In particular, \[\mathrm{Re}\varphi(z_{n_{k+1}})>\mathrm{Re}\varphi(z_{n_{k}}).\] Then, to prove our main theorem, it suffices to consider the Blaschke products in \(\widetilde{\mathcal{H}}_{SC}\). By the way, we could also find that fine sequences implies some other properties of Blaschke products. More precisely, we could obtain that each \(B\in\mathcal{H}_{SC}\) has a factor being an interpolating and one-component Blaschke product. **Lemma 2.8** (Corollary 2.5 in [4]).: _Let \(B\) be a Blaschke product whose zeros \(z_{n}\) are contained in a Stolz domain and are separated. Suppose that \(\rho(z_{n},z_{n+1})\leq\eta<1\). Then \(B\) is a one-component inner function._ **Theorem 2.9**.: _Each \(B\in\mathcal{H}_{SC}\) has an interpolating and one-component Blaschke product factor._ Proof.: By Lemma 2.7, it suffices to prove that each \(B\in\widetilde{\mathcal{H}}_{SC}\) has an interpolating and one-component Blaschke product factor. Obviously, any fine sequence has a tail contained in some certain Stolz domain, and then a Blaschke product in \(\widetilde{\mathcal{H}}_{SC}\) is interpolating if and only if its zeros sequence is separated, i.e., \[\inf_{m\neq n}\rho(z_{n},z_{m})>0.\] Furthermore, together with Lemma 2.8, we only need to prove any fine sequence has a separated subsequence. Without loss of generality, assume \(\{z_{n}\}_{n=1}^{\infty}\) is a fine sequence in a strip cone \(SC(1,\theta,T_{1},T_{2})\). 
Since \(\varphi(z)=(1+z)/(1-z)\), we have \(z=(\varphi(z)-1)/(\varphi(z)+1)\), and then \[|z_{m}-z_{n}| = |\frac{\varphi(z_{m})-1}{\varphi(z_{m})+1}-\frac{\varphi(z_{n})- 1}{\varphi(z_{n})+1}|\] \[= |\frac{2(\varphi(z_{m})-\varphi(z_{n}))}{(\varphi(z_{m})+1)( \varphi(z_{n})+1)}|\] and \[|1-\overline{z_{m}}z_{n}| = |1-\frac{\overline{\varphi(z_{m})}-1}{\varphi(z_{m})+1}\cdot \frac{\varphi(z_{n})-1}{\varphi(z_{n})+1}|\] \[= |\frac{2(\overline{\varphi(z_{m})}+\varphi(z_{n}))}{(\varphi(z_{ m})+1)(\varphi(z_{n})+1)}|\] \[= |\frac{2[\overline{\varphi(z_{m})}+\varphi(z_{n})]}{(\varphi(z_{ m})+1)(\varphi(z_{n})+1)}|.\] Therefore, \[\rho(z_{m},z_{n}) =\big{|}\frac{z_{m}-z_{n}}{1-\overline{z_{m}}z_{n}}\big{|}\] \[=\big{|}\frac{\varphi(z_{m})-\varphi(z_{n})}{\varphi(z_{m})+\varphi( z_{n})}\big{|}\] \[\geq\big{|}\frac{|\varphi(z_{m})|-|\varphi(z_{n})|}{|\varphi(z_{m} )|+|\varphi(z_{n})|}\big{|}\] \[=\big{|}\frac{1-|\frac{\varphi(z_{n})}{\varphi(z_{m})}|}{1+|\frac {\varphi(z_{n})}{\varphi(z_{m})}|}\big{|}.\] In addition, there are positive numbers \(C_{1}\) and \(C_{2}\) such that \[0<C_{1}\leq\big{|}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}}\big{|}\leq C_{2}<1.\] Since \(|1-z_{n}|\) non-increasingly tends to \(1\), as \(n\to\infty\), there exists a positive integer \(N\in\mathbb{N}\) such that for any \(n\geq N\), \[\big{|}\frac{1+z_{n}}{1+z_{n+1}}\big{|}\leq\frac{1+C_{2}}{2C_{2}}.\] Then, we have \[\big{|}\frac{\varphi(z_{n})}{\varphi(z_{n+1})}\big{|} =|\frac{1+z_{n}}{1-z_{n}}\cdot\frac{1-z_{n+1}}{1+z_{n+1}}\big{|}\] \[=\big{|}\frac{1+z_{n}}{1+z_{n+1}}\big{|}\big{|}\frac{1-z_{n+1}}{1 -z_{n}}\big{|}\] \[\leq\frac{C_{2}+1}{2}\] \[<1,\] Consequently, for any \(m>n\), \[\big{|}\frac{\varphi(z_{n})}{\varphi(z_{m})}\big{|}\leq \bigg{(}\frac{C_{2}+1}{2}\bigg{)}^{m-n}\] \[\leq \frac{C_{2}+1}{2}\] \[\leq 1.\] Then, \[\rho(z_{m},z_{n}) \geq|\frac{1-|\frac{\varphi(z_{n})}{\varphi(z_{m})}|}{1+|\frac{ \varphi(z_{n})}{\varphi(z_{m})}|}|\] \[=1-\frac{2}{\frac{1}{\left|\frac{\varphi(z_{n})}{\varphi(z_{m})} \right|}+1}\] \[\geq 1-\frac{2}{\frac{1}{\frac{C_{2}+1}{2}}+1}\] \[=\frac{1-C_{2}}{3+C_{2}}.\] Therefore, we have \[\inf_{m\neq n}\rho(z_{n},z_{m})\geq\frac{1-C_{2}}{3+C_{2}}>0.\] This finishes the proof. ## 3 Proof of the main theorem In this section, we would prove our main theorem. Consider a Blaschke product \[B=\lambda z^{m}\prod_{n=1}^{\infty}\frac{\overline{z_{n}}}{\mid z_{n}\mid} \cdot\frac{z_{n}-z}{1-\overline{z_{n}}z}.\] where \(|\lambda|=1\). It is well known that the finite Blaschke products with the same order are in the same component. Then, \(B\sim zB\) if and only if for some certain \(N\in\mathbb{N}\), \[\prod_{n=N}^{\infty}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}- z}{1-\overline{z_{n}}z}\ \sim\ \prod_{n=N+1}^{\infty}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}- z}{1-\overline{z_{n}}z}. \tag{3.1}\] Recall that \(\varphi(z)=(1+z)/(1-z)\) is the fractional linear transformation from the unit open disk to the right half plane. Let \(\alpha_{n}(t)\) be the unique point in \(\mathbb{D}\) such that \[\varphi(\alpha_{n}(t))=(1-t)\varphi(z_{n})+t\varphi(z_{n+1}),\] for \(t\in[0,1]\) and \(n=1,2,\cdots\). Furthermore, following from the idea of Nestoridis [9], define the map \(B_{t}\) from \([0,1]\) to \(\mathcal{F}\) by \[t\longmapsto B_{t}=\prod_{n=N}^{\infty}\frac{\overline{\alpha_{n}(t)}}{\mid \alpha_{n}(t)\mid}\cdot\frac{\alpha_{n}(t)-z}{1-\overline{\alpha_{n}(t)}z}. \tag{3.2}\] If the above map is continuous, then the relation (3.1) holds and consequently \(B\sim zB\). 
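For clarity, we note that the endpoints of the path \(B_{t}\) are exactly the two products in (3.1): since \(\varphi\) is injective, \(\varphi(\alpha_{n}(0))=\varphi(z_{n})\) and \(\varphi(\alpha_{n}(1))=\varphi(z_{n+1})\) give \[\alpha_{n}(0)=z_{n}\quad\text{and}\quad\alpha_{n}(1)=z_{n+1},\] and hence \[B_{0}=\prod_{n=N}^{\infty}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}-z}{1-\overline{z_{n}}z}\quad\text{and}\quad B_{1}=\prod_{n=N}^{\infty}\frac{\overline{z_{n+1}}}{\mid z_{n+1}\mid}\cdot\frac{z_{n+1}-z}{1-\overline{z_{n+1}}z}=\prod_{n=N+1}^{\infty}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}-z}{1-\overline{z_{n}}z}.\]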
To prove the continuity of \(B_{t}\), the following theorem plays an important role. **Lemma 3.1** (Lemma 1 in [9]).: _Let_ \[K_{1}=\prod_{n=1}^{\infty}\frac{\overline{\alpha_{n}}}{|\alpha_{n}|}\frac{ \alpha_{n}-z}{1-\overline{\alpha_{n}}z}\quad\text{and}\quad K_{2}=\prod_{n=1}^ {\infty}\frac{\overline{\beta_{n}}}{|\beta_{n}|}\frac{\beta_{n}-z}{1- \overline{\beta_{n}}z}\] _be two infinite Blaschke products such that \(K_{1}(0)>0\) and \(K_{2}(0)>0\), then we have the following inequality,_ \[\begin{split}\|K_{1}-K_{2}\|_{\infty}\leq&\sum_{n} \big{|}\text{arg}\frac{\alpha_{n}}{\beta_{n}}\big{|}+2\sum_{n}\big{|}\text{ arg}\frac{1-\alpha_{n}}{1-\beta_{n}}\big{|}\\ &+2\sup_{y\in\mathbb{R}}\operatorname{ess}\sum_{n}\big{|}\text{ arg}\frac{\varphi(\alpha_{n})-\mathbf{i}y}{\varphi(\beta_{n})-\mathbf{i}y}\big{|}.\end{split} \tag{3.3}\] Now, we would apply Lemma 3.1 to verify the continuity of the path \(B_{t}\). More precisely, \[\begin{split}\|B_{t}-B_{t+\Delta t}\|_{\infty}\leq& \sum_{n}\big{|}\text{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t )}\big{|}+2\sum_{n}\big{|}\text{arg}\frac{1-\alpha_{n}(t)}{1-\alpha_{n}(t+ \Delta t)}\big{|}\\ &+2\sup_{y\in\mathbb{R}}\operatorname{ess}\sum_{n}\big{|}\text{ arg}\frac{\varphi(\alpha_{n}(t))-\mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))- \mathbf{i}y}\big{|}.\end{split} \tag{3.4}\] Then, to prove \(\lim_{\Delta t\to 0}\|B_{t}-B_{t+\Delta t}\|=0\), it suffices to prove the three items in the right of the inequality (3.4) tend to \(0\) as \(\Delta t\to 0\). **Proposition 3.2**.: _For any \(B\in\widetilde{\mathcal{H}}_{SC}\), there exists a positive integer \(N_{1}\in\mathbb{N}\) and a positive number \(K_{1}\) such that for any \(t\in[0,1]\) and small positive number \(\Delta t\),_ \[\sum_{n=N_{1}}^{\infty}\big{|}\text{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+ \Delta t)}\big{|}\leq K_{1}\Delta t.\] Proof.: Without loss of generality, suppose that the zeros sequence of \(B\in\widetilde{\mathcal{H}}_{SC}\) is fine in a strip cone \(SC(1,\theta,T_{1},T_{2})\). By the definition of \(\alpha_{n}(t)\), we have \[\begin{split} 1-\alpha_{n}(t)=& 1-\varphi^{-1}( \varphi(\alpha_{n}(t)))\\ =& 1-\frac{\varphi(\alpha_{n}(t))-1}{\varphi(\alpha_{n}( t))+1}\\ =&\frac{2}{\varphi(\alpha_{n}(t))+1}\end{split}\] Furthermore, \[\begin{split}&\alpha_{n}(t+\Delta t)-\alpha_{n}(t)\\ =&(1-\alpha_{n}(t))-(1-\alpha_{n}(t+\Delta t))\\ =&\frac{2\Delta t(\varphi(z_{n+1})-\varphi(z_{n}))}{ (\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+\Delta t))+1)}\\ =&\frac{4\Delta t(z_{n+1}-z_{n})}{(1-z_{n})(1-z_{n+ 1})}\cdot\frac{1}{(\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+\Delta t))+ 1)}.\end{split}\] Since \(\mathrm{Re}\varphi(z_{n})\) increasingly tends to \(+\infty\) and \[\lim_{k\to\infty}\frac{\mathrm{Re}\varphi(z_{n})}{|\varphi(z_{n})|}=\cos(\frac{ \pi}{2}-\theta)>0,\] there is a positive number \(R_{1}>0\) such that \[|\varphi(\alpha_{n}(t))+1|\geq|\mathrm{Re}(\varphi(\alpha_{n}(t))+1)|\geq| \mathrm{Re}(\varphi(z_{n})+1)|\geq\frac{|\varphi(z_{n})+1|}{R_{1}}.\] In addition, the followings hold. 1. The sequence \(|1-z_{n}|\) non-increasingly tends to \(1\), as \(n\to\infty\). 2. 
by Lemma 2.5, there exists a positive constants \(C_{1}\) such that \[0<C_{1}\leq\big{|}\frac{1-z_{n+1}}{1-z_{n}}\big{|}.\] Then, we have \[|\alpha_{n}(t+\Delta t)-\alpha_{n}(t)|\] \[= \big{|}\frac{4\Delta t(z_{n+1}-z_{n})}{(1-z_{n})(1-z_{n+1})} \big{|}\cdot\big{|}\frac{1}{(\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+ \Delta t))+1)}\big{|}\] \[\leq \frac{4\Delta t(|1-z_{n+1}|+|1-z_{n}|)}{|1-z_{n}||1-z_{n+1}|} \cdot\frac{R_{1}^{2}}{|\varphi(z_{n})+1|^{2}}\] \[\leq \frac{8\Delta t|1-z_{n}|}{|1-z_{n}||1-z_{n+1}|}\cdot\frac{R_{1}^{ 2}}{|\frac{2}{1-z_{n}}|^{2}}\] \[= 2R_{1}^{2}\cdot\Delta t\cdot\frac{|1-z_{n}|}{|1-z_{n+1}|}\cdot| 1-z_{n}|\] \[\leq \frac{2R_{1}^{2}}{C_{1}}\cdot|\Delta t|\cdot|1-z_{n}|.\] Given a positive number \(0<r<1\), it follows from \(z_{n}\to 1\) that there exists a positive integer \(N_{1}\in\mathbb{N}\) such that, for any \(n\geq N_{1}\) and any \(t\in[0,1]\), \[|\alpha_{n}(t)|\geq r.\] Thus, considering the area of the triangle with vertices \(0\), \(\alpha_{n}(t+\Delta t)\) and \(\alpha_{n}(t)\), we have \[\big{|}\mathrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)} \big{|}\] \[\leq \frac{\pi}{2}\cdot\sin\big{|}\mathrm{arg}\frac{\alpha_{n}(t)}{ \alpha_{n}(t+\Delta t)}\big{|}\] \[\leq \frac{\pi}{2}\cdot\frac{|\alpha_{n}(t+\Delta t)-\alpha_{n}(t)| \cdot 1}{|\alpha_{n}(t+\Delta t)|\cdot|\alpha_{n}(t)|}\] \[\leq \frac{R_{1}^{2}\pi}{r^{2}C_{1}}\cdot\Delta t\cdot|1-z_{n}|.\] Consequently, by \(\sum_{n}|1-z_{n}|<\infty\), for any \(t\in[0,1]\), \[\sum_{n=N_{1}}^{\infty}\left|\text{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t )}\right|\leq\Delta t\cdot\frac{R_{1}^{2}\pi}{r^{2}C_{1}}\sum_{n=N_{1}}^{ \infty}|1-z_{n}|.\] Moreover, write \[K_{1}=\frac{R_{1}^{2}\pi}{r^{2}C_{1}}\sum_{n=1}^{\infty}|1-z_{n}|\] as required. **Proposition 3.3**.: _For any \(B\in\widetilde{\mathcal{H}}_{SC}\), there exists a positive integer \(N_{2}\in\mathbb{N}\) and a positive number \(K_{2}\) such that for any \(t\in[0,1]\) and small positive number \(\Delta t\),_ \[\sup_{y\in\mathbb{R}}\text{ess}\sum_{n=N_{2}}^{\infty}\left|\text{arg}\frac{ \varphi(\alpha_{n}(t))-\mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i }y}\right|\leq K_{2}\Delta t.\] Proof.: Without loss of generality, suppose that the zeros sequence of \(B\in\widetilde{\mathcal{H}}_{SC}\) is fine in a strip cone \(SC(1,\theta,T_{1},T_{2}).\) Let the strip \(SL(\frac{\pi}{2}-\theta,L_{1},L_{2})\) be the image of the strip cone \(SC(1,\theta,T_{1},T_{2})\) under the map \(\varphi(z)\), where \[L_{1}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_{1}=0\quad\text{and}\quad L_{2}:y -\tan(\frac{\pi}{2}-\theta)\cdot x-c_{2}=0.\] Denote by \(L\) the straight line passing \(\varphi(z_{n})\) and parallel \(L_{1}\), and denote by \(\omega_{n}\) the angle between \(L\) and the straight line passing through \(\varphi(z_{n})\) and \(\varphi(z_{n+1}).\) Then \[\sin\omega_{n}\leq\frac{|c_{1}-c_{2}|}{|\varphi(z_{n+1})-\varphi(z_{n})|}.\] Since there are positive numbers \(C_{1}\) and \(C_{2}\) such that \[0<C_{1}\leq\big{|}\frac{1-z_{n+1}}{1-z_{n}}\big{|}\leq C_{2}<1,\] and \[\lim_{n\to\infty}\big{|}\frac{\varphi(z_{n})}{\varphi(z_{n+1})}\big{|}\Big{/} \Big{|}\frac{1-z_{n+1}}{1-z_{n}}\big{|}=\lim_{k\to\infty}\big{|}\frac{1+z_{n}} {1+z_{n+1}}\big{|}=1,\] one can see that \[\lim_{n\to\infty}|\varphi(z_{n+1})-\varphi(z_{n})| =\lim_{n\to\infty}|\varphi(z_{n})|\left(\frac{|\varphi(z_{n+1})|} {|\varphi(z_{n})|}-1\right)\] \[\leq\lim_{n\to\infty}\left(\frac{1}{C_{2}}-1\right)|\varphi(z_{n})|\] \[= \infty.\] Consequently, 
\[\lim_{n\to\infty}\omega_{n}\leq\frac{\pi}{2}\lim_{n\to\infty}\sin\omega_{n} \leq\frac{\pi}{2}\lim_{n\to\infty}\frac{|c_{1}-c_{2}|}{|\varphi(z_{n+1})- \varphi(z_{n})|}=0,\] that is \[\lim_{n\to\infty}\text{arg}(\varphi(z_{n+1})-\varphi(z_{n}))=\frac{\pi}{2}-\theta.\] Denote by \(\vartheta_{n}\) the angle between the imaginary axis and the straight line passing through \(\varphi(z_{n})\) and \(\varphi(z_{n+1})\). Then, for any \(\epsilon>0\), there exists a positive integer \(N_{2}\in\mathbb{N}\) such that for every \(n\geq N_{2}\) \[(1-\epsilon)\sin\theta\leq\sin\vartheta_{n}\leq(1+\epsilon)\sin\theta\quad\text {and}\quad\text{Re}z_{n}>0.\] Denote by \(h_{n}\) the distance from \(\mathbf{i}y\) to the straight line passing through \(\varphi(\alpha_{n}(t))\) and \(\varphi(\alpha_{n}(t+\Delta t))\). Consider the area of the triangle with vertices \(\mathbf{i}y\), \(\varphi(\alpha_{n}(t))\) and \(\varphi(\alpha_{n}(t+\Delta t))\), one can see that \[\sin\left(\arg\frac{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y}{ \varphi(\alpha_{n}(t))-\mathbf{i}y}\right)\] \[= \frac{|\varphi(\alpha_{n}(t+\Delta t))-\varphi(\alpha_{n}(t))| \cdot h_{n}}{|\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y|\cdot|\varphi( \alpha_{n}(t))-\mathbf{i}y|}\] \[\leq \frac{\frac{\text{Re}\varphi(\alpha_{n}(t+\Delta t))-\text{Re} \varphi(\alpha_{n}(t))}{(1-\epsilon)\sin\theta}\cdot\max\{|y-c_{1}|,|y-c_{2}| \}(1+\epsilon)\sin\theta}{|\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y|\cdot| \varphi(\alpha_{n}(t))-\mathbf{i}y|}\] \[= \Delta t\cdot\frac{1+\epsilon}{1-\epsilon}\cdot\frac{(\text{Re} \varphi(z_{n+1})-\text{Re}\varphi(z_{n}))\cdot\max\{|y-c_{1}|,|y-c_{2}|\}}{| \varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y|\cdot|\varphi(\alpha_{n}(t))- \mathbf{i}y|}.\] Since \(\{\text{Re}\varphi(z_{n})\}\) is an increasing sequence of positive numbers tending to \(+\infty\), we have \[\left|\arg\frac{\varphi(\alpha_{n}(t))-\mathbf{i}y}{\varphi(\alpha _{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \frac{\pi}{2}\sin\left(\arg\frac{\varphi(\alpha_{n}(t+\Delta t))- \mathbf{i}y}{\varphi(\alpha_{n}(t))-\mathbf{i}y}\right)\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot\frac{( \text{Re}\varphi(z_{n+1})-\text{Re}\varphi(z_{n}))\cdot\max\{|y-c_{1}|,|y-c_{2 }|\}}{|\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y|\cdot|\varphi(\alpha_{n}(t ))-\mathbf{i}y|}.\] Moreover, \[\left|\arg\frac{\varphi(\alpha_{n}(t))-\mathbf{i}y}{\varphi(\alpha _{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot\frac{( \text{Re}\varphi(z_{n+1})-\text{Re}\varphi(z_{n}))\cdot\max\{|y-c_{1}|,|y-c_{2 }|\}}{|\text{Re}\varphi(z_{n})|\cdot|\text{Re}\varphi(z_{n})|}\] \[= \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot\left( \frac{\text{Re}\varphi(z_{n+1})}{\text{Re}\varphi(z_{n})}-1\right)\cdot\frac{ \max\{|y-c_{1}|,|y-c_{2}|\}}{\text{Re}\varphi(z_{n})}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})}{2 \widetilde{C_{1}}(1-\epsilon)}\cdot\frac{\max\{|y-c_{1}|,|y-c_{2}|\}}{\text{ Re}\varphi(z_{n})}.\] 1. Suppose that \(y\) satisfies \(\min\{|y-c_{1}|,|y-c_{2}|\}\leq|c_{1}-c_{2}|\). 
In this case, \[\max\{|y-c_{1}|,|y-c_{2}|\}\leq\min\{|y-c_{1}|,|y-c_{2}|\}+|c_{1}-c_{2}|\leq 2 |c_{1}-c_{2}|.\] Then, \[\sum_{n=N_{2}}^{\infty}\left|\arg\frac{\varphi(\alpha_{n}(t))- \mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \sum_{n=N_{2}}^{\infty}\Delta t\cdot\frac{\pi(1+\epsilon)(1- \widetilde{C_{1}})}{2\widetilde{C_{1}}(1-\epsilon)}\cdot\frac{2|c_{1}-c_{2}|}{ \text{Re}\varphi(z_{n})}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})|c_{1}-c_{ 2}|}{\widetilde{C_{1}}(1-\epsilon)}\cdot\sum_{n=N_{2}}^{\infty}\frac{1}{| \varphi(z_{n})|(1-\epsilon)\sin\theta}\] \[= \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})|c_{1}-c_ {2}|}{\widetilde{C_{1}}(1-\epsilon)^{2}\sin\theta}\cdot\sum_{n=N_{2}}^{\infty }\frac{|1-z_{n}|}{|1+z_{n}|}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})|c_{1}-c_ {2}|}{\widetilde{C_{1}}(1-\epsilon)^{2}\sin\theta}\cdot\sum_{n=N_{2}}^{\infty }|1-z_{n}|.\] 2. Suppose that \(y\) satisfies \(\min\{|y-c_{1}|,|y-c_{2}|\}\geq|c_{1}-c_{2}|\). Let \(N\geq N_{2}\) be the first positive such that \[\text{Re}\varphi(z_{N})\geq\max\{|y-c_{1}|,|y-c_{2}|\}.\] Then, \[\sum_{n=N}^{\infty}\left|\arg\frac{\varphi(\alpha_{n}(t))- \mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \sum_{n=N}^{\infty}\Delta t\cdot\frac{\pi(1+\epsilon)(1- \widetilde{C_{1}})}{\widetilde{C_{1}}(1-\epsilon)}\cdot\frac{\max\{|y-c_{1}|, |y-c_{2}|\}}{\text{Re}\varphi(z_{n})}\] \[= \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})}{ \widetilde{C_{1}}(1-\epsilon)}\cdot\frac{\max\{|y-c_{1}|,|y-c_{2}|\}}{\text{ Re}\varphi(z_{N})}\cdot\sum_{n=N}^{\infty}\frac{\text{Re}\varphi(z_{n})}{\text{ Re}\varphi(z_{N})}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})}{ \widetilde{C_{1}}(1-\epsilon)}\cdot\sum_{k=0}^{\infty}\widetilde{C_{2}}^{k}\] \[= \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})}{ \widetilde{C_{1}}(1-\epsilon)(1-\widetilde{C_{2}})}.\] Notice that for \(t\in[0,1]\) \[|\varphi(\alpha_{n}(t))-\mathbf{i}y|\geq\min\{|y-c_{1}|,|y-c_{2}|\}\sin\theta \geq|c_{1}-c_{2}|\sin\theta.\] It easy to see that \[\frac{\max\{|y-c_{1}|,|y-c_{2}|\}}{|\varphi(\alpha_{n}(t))- \mathbf{i}y|}\leq \frac{\min\{|y-c_{1}|,|y-c_{2}|\}+|c_{1}-c_{2}|}{\min\{|y-c_{1}|,|y-c _{2}|\}\sin\theta}\] \[= \left(1+\frac{|c_{1}-c_{2}|}{\min\{|y-c_{1}|,|y-c_{2}|\}}\right) \cdot\frac{1}{\sin\theta}\] \[\leq \frac{2}{\sin\theta}.\] Then, for any \(N_{2}\leq n\leq N-1\), \[\sum\limits_{n=N_{2}}^{N-1}\left|\arg\frac{\varphi(\alpha_{n}(t))- \mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \sum\limits_{n=N_{2}}^{N-1}\Delta t\cdot\frac{\pi(1+\epsilon)}{2( 1-\epsilon)}\cdot\frac{(\mathrm{Re}\varphi(z_{n+1})-\mathrm{Re}\varphi(z_{n}) )\cdot\max\{|y-c_{1}|,|y-c_{2}|\}}{|\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i} y|\cdot|\varphi(\alpha_{n}(t))-\mathbf{i}y|}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot\sum \limits_{n=N_{2}}^{N-1}\frac{(\mathrm{Re}\varphi(z_{n+1})-\mathrm{Re}\varphi(z _{n}))\cdot\max\{|y-c_{1}|,|y-c_{2}|\}}{\min\{|y-c_{1}|,|y-c_{2}|\}\sin\theta \cdot|\varphi(\alpha_{n}(t))-\mathbf{i}y|}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\min\{|y-c_{1}|, |y-c_{2}|\}\sin^{2}\theta}\cdot\sum\limits_{n=N_{2}}^{N-1}(\mathrm{Re}\varphi (z_{n+1})-\mathrm{Re}\varphi(z_{n}))\] \[= \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^{2}\theta} \cdot\frac{\mathrm{Re}\varphi(z_{N})-\mathrm{Re}\varphi(z_{N_{2}})}{\min\{|y- c_{1}|,|y-c_{2}|\}}\] \[\leq \Delta 
t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^{2}\theta} \cdot\frac{\frac{\mathrm{Re}\varphi(z_{N})}{\mathrm{Re}\varphi(z_{N-1})}\cdot \mathrm{Re}\varphi(z_{N-1})}{\min\{|y-c_{1}|,|y-c_{2}|\}}\] \[\leq \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^{2}\theta} \cdot\frac{\frac{1}{C_{1}}\max\{|y-c_{1}|,|y-c_{2}|\}}{\min\{|y-c_{1}|,|y-c_{2 }|\}}\] \[= \Delta t\cdot\frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde{C_{1} }\sin^{2}\theta}.\] Thus, in this case, \[\sum\limits_{n=N_{2}}^{\infty}\left|\arg\frac{\varphi(\alpha_{n}(t ))-\mathbf{i}y}{\varphi(\alpha_{n}(t+\Delta t))-\mathbf{i}y}\right|\] \[\leq \Delta t\cdot\left(\frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde {C_{1}}\sin^{2}\theta}+\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})}{\widetilde {C_{1}}(1-\epsilon)(1-\widetilde{C_{2}})}\right).\] Therefore, we could write \[K_{2}=\left\{\begin{array}{c}\frac{\pi(1+\epsilon)(1-\widetilde{C_{1}})|c_{ 1}-c_{2}|}{\widetilde{C_{1}}(1-\epsilon)^{2}\sin\theta}\cdot\sum\limits_{n=N_{ 2}}^{\infty}|1-z_{n}|,\\ \frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde{C_{1}}\sin^{2}\theta}+\frac{ \pi(1+\epsilon)(1-\widetilde{C_{1}})}{\widetilde{C_{1}}(1-\epsilon)(1-\widetilde {C_{2}})}\end{array}\right\}\] as required. **Proposition 3.4**.: _For any \(B\in\widetilde{\mathcal{H}}_{SC}\), there exists a positive integer \(N_{3}\in\mathbb{N}\) and a positive number \(K_{3}\) such that for any \(t\in[0,1]\) and small positive number \(\Delta t\),_ \[\sum\limits_{n}\left|\arg\frac{1-\alpha_{n}(t)}{1-\alpha_{n}(t+\Delta t)} \right|\leq K_{3}\Delta t.\] Proof.: Without loss of generality, suppose that the zeros sequence of \(B\in\widetilde{\mathcal{H}}_{SC}\) is fine in a strip cone \(SC(1,\theta,T_{1},T_{2})\). Let the strip \(SL(\frac{\pi}{2}-\theta,L_{1},L_{2})\) be the image of \(SC(1,\theta,T_{1},T_{2})\) under the map \(\varphi(z)\), where \[L_{1}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_{1}=0\quad\text{and}\quad L_{2}:y- \tan(\frac{\pi}{2}-\theta)\cdot x-c_{2}=0.\] For convenience, assume the line \(L_{2}\) is on the right of the line \(L_{1}\). By translating \(L_{2}\) one unit to the right, we obtain a new line \(\widehat{L_{2}}\), more precisely, \[\widehat{L_{2}}:\ y-\tan(\frac{\pi}{2}-\theta)\cdot(x-1)-c_{2}=0.\] Let \[\varphi(\widehat{z_{n}})=\varphi(z_{n})+1\ \ \text{and}\ \ \varphi(\widehat{ \alpha}_{n}(t))=\varphi(\alpha_{n}(t))+1.\] It is not difficult to see that \(\{\varphi(\widehat{z_{n}})\}_{n=1}^{\infty}\) also satisfies the following conditions as well as a fine sequence. 1. The sequence \(\{\varphi(\widehat{z_{n}})\}_{n=1}^{\infty}\) lies in the strip \(SL(\frac{\pi}{2}-\theta,L_{1},\widehat{L_{2}})\). 2. \(\lim\limits_{n\to\infty}\arg(\varphi(z_{n+1})-\varphi(z_{n}))=\frac{\pi}{2}-\theta\). 3. \(\operatorname{Re}\varphi(\widehat{z_{n}})\) monotonically tends to \(+\infty\), and there are two positive numbers \(\widetilde{D_{1}}\) and \(\widetilde{D_{2}}\) such that \[0<\widetilde{D_{1}}\leq\frac{\operatorname{Re}\varphi(\widehat{z_{n}})}{ \operatorname{Re}\varphi(\widehat{z_{n+1}})}\leq\widetilde{D_{2}}<1.\] Then, the Proposition 3.3 also holds for the sequence \(\{\varphi(\widehat{z_{n}})\}_{n=1}^{\infty}\). 
That is, there exists a positive integer \(N_{3}\in\mathbb{N}\) and a positive number \(K_{3}\) such that for any \(t\in[0,1]\) and small positive number \(\Delta t\), \[\sup\limits_{y\in\mathbb{R}}\operatorname{ess}\sum\limits_{n=N_{3}}^{\infty} \big{|}\arg\frac{\varphi(\widehat{\alpha}_{n}(t))-\mathbf{i}y}{\varphi( \widehat{\alpha}_{n}(t+\Delta t))-\mathbf{i}y}\big{|}\leq K_{3}\Delta t.\] In particular, the above inequality holds for \(y=0\), and hence \[K_{3}\Delta t \geq \sum\limits_{n=N_{3}}^{\infty}\big{|}\arg\frac{\varphi(\widehat{ \alpha}_{n}(t))}{\varphi(\widehat{\alpha}_{n}(t+\Delta t))}\big{|}\] \[= \sum\limits_{n=N_{3}}^{\infty}\big{|}\arg\frac{\varphi(\alpha_{n} (t))+1}{\varphi(\alpha_{n}(t+\Delta t))+1}\big{|}\] \[= \sum\limits_{n=N_{3}}^{\infty}\big{|}\arg\frac{\frac{1+\alpha_{n }(t)}{1-\alpha_{n}(t)}+1}{\frac{1+\alpha_{n}(t+\Delta t)}{1-\alpha_{n}(t+ \Delta t)}+1}\big{|}\] \[= \sum\limits_{n=N_{3}}^{\infty}\big{|}\arg\frac{1-\alpha_{n}(t+ \Delta t)}{1-\alpha_{n}(t)}\big{|}.\] This completes the proof. \(\square\) **Proof of Main Theorem :** Given any \(B\in\mathcal{H}_{SC}\). Without loss of generality, we may assume that its zeros lie in a strip cone \(SC(1,\theta,T_{1},T_{2})\), \(\theta\in(0,\pi)\). By Lemma 2.7, it has a factor \(\widetilde{B}\in\widetilde{\mathcal{H}}_{\mathcal{SC}}\), denoted by \[\widetilde{B}(z)=\prod\limits_{n=1}^{\infty}\frac{|z_{n}|}{z_{n}}\frac{z_{n}- z}{1-\overline{z_{n}}z}.\] Following from Proposition 3.2, Proposition 3.3, Proposition 3.4 and Lemma 3.1, we could choose \(N\geq\max\{N_{1},N_{2},N_{3}\}\) and then there is a continuous path from \[\prod_{n=N}^{\infty}\frac{|z_{n}|}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{n}z}} \quad\text{to}\quad\prod_{n=N+1}^{\infty}\frac{|z_{n}|}{z_{n}}\frac{z_{n}-z}{1 -\overline{z_{n}z}}.\] Therefore, by the path-connectedness of Mobius transformations and Lemma 2.1, we have \(B\sim zB\). ### Declarations * Ethics approval Not applicable. * Competing interests The author declares that there are no conflict of interest or competing interests. * Authors' contributions All authors reviewed this paper. * Funding There is no funding source for this manuscript. * Availability of data and materials Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
2303.09283
Exploring Resiliency to Natural Image Corruptions in Deep Learning using Design Diversity
In this paper, we investigate the relationship between diversity metrics, accuracy, and resiliency to natural image corruptions of Deep Learning (DL) image classifier ensembles. We investigate the potential of an attribution-based diversity metric to improve the known accuracy-diversity trade-off of the typical prediction-based diversity. Our motivation is based on analytical studies of design diversity that have shown that a reduction of common failure modes is possible if diversity of design choices is achieved. Using ResNet50 as a comparison baseline, we evaluate the resiliency of multiple individual DL model architectures against dataset distribution shifts corresponding to natural image corruptions. We compare ensembles created with diverse model architectures trained either independently or through a Neural Architecture Search technique and evaluate the correlation of prediction-based and attribution-based diversity to the final ensemble accuracy. We evaluate a set of diversity enforcement heuristics based on negative correlation learning to assess the final ensemble resilience to natural image corruptions and inspect the resulting prediction, activation, and attribution diversity. Our key observations are: 1) model architecture is more important for resiliency than model size or model accuracy, 2) attribution-based diversity is less negatively correlated to the ensemble accuracy than prediction-based diversity, 3) a balanced loss function of individual and ensemble accuracy creates more resilient ensembles for image natural corruptions, 4) architecture diversity produces more diversity in all explored diversity metrics: predictions, attributions, and activations.
Rafael Rosales, Pablo Munoz, Michael Paulitsch
2023-03-15T08:54:10Z
http://arxiv.org/abs/2303.09283v1
# Exploring Resiliency to Natural Image Corruptions in Deep Learning using Design Diversity ###### Abstract In this paper, we investigate the relationship between diversity metrics, accuracy, and resiliency to natural image corruptions of Deep Learning (DL) image classifier ensembles. We investigate the potential of an attribution-based diversity metric to improve the known accuracy-diversity trade-off of the typical prediction-based diversity. Our motivation is based on analytical studies of design diversity that have shown that a reduction of common failure modes is possible if diversity of design choices is achieved. Using ResNet50 as a comparison baseline, we evaluate the resiliency of multiple individual DL model architectures against dataset distribution shifts corresponding to natural image corruptions. We compare ensembles created with diverse model architectures trained either independently or through a Neural Architecture Search technique and evaluate the correlation of prediction-based and attribution-based diversity to the final ensemble accuracy. We evaluate a set of diversity enforcement heuristics based on negative correlation learning to assess the final ensemble resilience to natural image corruptions and inspect the resulting prediction, activation, and attribution diversity. Our key observations are: 1) model architecture is more important for resiliency than model size or model accuracy, 2) attribution-based diversity is less negatively correlated to the ensemble accuracy than prediction-based diversity, 3) a balanced loss function of individual and ensemble accuracy creates more resilient ensembles for image natural corruptions, 4) architecture diversity produces more diversity in all explored diversity metrics: predictions, attributions, and activations. ## 1 Introduction In the context of Deep Learning (DL), it has been empirically discovered that the use of ensembles can improve the model's accuracy in tasks such as regression and classification. It has been speculated [13] that the main reason behind these improvements is the implicit diversity in the solutions found that when aggregated as an ensemble obtain better predictions. In this work, we evaluate the resiliency of diverse deep learning classifiers and the role that the different kinds of diversity play in improving them. ### The case for design diversity Design diversity [23] is a technique to increase the resilience of safety critical systems. It is established as a best practice in standards such as in vehicle functional safety [18] to prevent dependent failures, safety of the intended functionality [19] to address system limitations of machine-learning-based components, and avionic software [12]. A common pitfall is to misunderstand _independent development_ as _design diversity_. In [47] the designers of a safety-critical system preferred to let multiple teams collaborate, although the purpose of having multiple teams is to produce multiple designs of a single specification. This was justified with the claim that specification problems can be better mitigated with such collaboration but at the cost of the sought _independence_. The key problem is, that independent development can (and will) produce designs with common failures mainly due to the fact that independent designs do not enforce diverse design choices. In fact, it has been statistically proven that independently developed software results in dependent failure behavior on randomly selected inputs [11]. 
In [22], it has been shown that what is needed to reduce dependent failure behavior is diversity in design _choices_. If the choices are made satisfying certain properties, it can be expected (in the average case) to obtain negatively correlated failure behavior, i.e., better than independent. In DL, however, design choices are not made by the human designers explicitly but are a result of the architecture, data, and optimization approach. Furthermore, existing diversity metrics are not directly related to the DL model's _design choices_ and are known to have a diversity-accuracy trade-off [21]. We investigate if a diversity metric closer to the model _design choices_ can improve the model resilience compared to existing metrics. ### Main research questions We aim to answer the following research questions: **RQ1:** Is model accuracy, size, or architecture the main explanatory factor of resilience against natural image corruptions? **RQ2:** Can a diversity metric closer to design choices improve the known accuracy-diversity trade-off? **RQ3:** Which diversity enforcement heuristic produces the most resilient models? **RQ4:** How diverse are the predictions, activations, and input feature attributions of models created with a diversity enforcement learning approach? The rest of the paper is structured as follows: Section 2 provides a brief overview of the current state-of-the-art in diversity enforcement and measurement. Next, the methodology is stated in Section 3. Our experiments are then presented in Section 4, followed by a discussion of the outcomes and conclusions in Section 5. ## 2 Related work ### Ensemble creation techniques The most relevant techniques for ensemble creation are: a) Ensembles of independently trained models where diversity originates from the training process randomness, e.g., seed. Each ensemble member loss is a function notated as: \[l(h^{i},y) \tag{1}\] where \(y\) is the ground-truth label, and \(h^{i}\) is the output of the \(i\)th single ensemble member. [6] presents an analysis of the resilience of independently trained ensembles. b) Bagging [1], reduces the variance of multiple models by averaging the outcomes of models created with different training data subsets. c) Boosting [37] sequentially trains models to reduce bias by sampling incorrectly classified inputs more often in the next model. d) Negative Correlation Learning (NCL) [24] trains models _in parallel_ with a shared penalty term in their loss function to enforce prediction diversity. Generalized NCL (GNCL) [3] proposes two extensions for NCL: i) a generalized loss function for each member: \[\sum_{i=1}^{M}l(h^{i},y)-\frac{\lambda}{2M}\sum_{i=1}^{M}{d_{i}}^{T}\mathfrak{ D}d_{i} \tag{2}\] where \(M\) is the total number of members in the ensemble, \(d\) is the difference \(h^{i}-f\) with \(f\) as the ensemble prediction, \(\mathfrak{D}\) is the 2nd derivative of the loss function, and \(\lambda\) is a weighting hyper-parameter. ii) An implicit enforcement of diversity by balancing the ensemble and the individual loss: \[\frac{1}{M}\sum_{i=1}^{M}\big{(}\lambda l(f,y)+\frac{1-\lambda}{M}\sum_{i=1}^ {M}l(h^{i},y)\big{)} \tag{3}\] ### Diversity metrics in DL There are many proposed metrics for diversity. [2] presents a survey and taxonomy for diversity metrics and [14] presents a survey of diversity for ML. In this work, we focus on behavior diversity metrics of a DL model. Input data diversity such as different modalities or implementation aspects such as the number of layers is not considered. 
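For concreteness, the following is a minimal PyTorch-style sketch of the independent loss of Equation 1 and of the balanced individual/ensemble loss of Equation 3, assuming a classification setting with cross-entropy and with logit averaging as the ensemble prediction \(f\); the names and the averaging choice are illustrative assumptions, not the exact implementation of the cited works.

```python
import torch
import torch.nn.functional as F

def independent_losses(member_logits, y):
    # Eq. (1): each ensemble member is trained only on its own loss.
    return [F.cross_entropy(h_i, y) for h_i in member_logits]

def balanced_ensemble_loss(member_logits, y, lam=0.5):
    # Eq. (3): implicit diversity enforcement by balancing the ensemble loss
    # and the average member loss (the nested sums collapse to this average).
    f = torch.stack(member_logits).mean(dim=0)          # ensemble prediction
    ensemble_term = F.cross_entropy(f, y)
    member_term = torch.stack(independent_losses(member_logits, y)).mean()
    return lam * ensemble_term + (1.0 - lam) * member_term
```

With \(\lambda=0\) the loss reduces to purely independent training, while \(\lambda=1\) optimizes only the joint ensemble prediction.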
**Prediction (output, failures) diversity** Multiple prediction-based diversity metrics have been proposed. [21] presents a comprehensive evaluation of this class of metrics. Pair-wise measures based on the correct and incorrect statistics of two models include the Q-statistics, correlation coefficient \(\rho\), and the disagreement measure: \[D_{p,q}=\frac{N^{01}+N^{10}}{N^{11}+N^{10}+N^{01}+N^{00}} \tag{4}\] where the indexes \(a,b\) of the binary vectors \(N^{ab}\) indicate the correctness of the classifiers, i.e., \((h^{i=p}=y)\Rightarrow(a=1)\) and \((h^{i=p}\neq y)\Rightarrow(a=0)\). Non-pairwise measures that evaluate non-binary diversity include entropy, coincident failure diversity, cosine similarity, Kullback-Leibler divergence [10, 31], and the Shannon equitability index [32, 6]: \[E=-\sum_{i=1}^{S}p_{i}\ln{p_{i}}/\ln{S} \tag{5}\] where \(S\) is the total number of prediction species/classes and \(p_{i}\) is the proportion of observed species \(i\).

**Representation (activation) diversity** The intermediate representations (IR) can also be used to measure diversity. [13] compares the diversity in representation space from independently created ensembles and ensembles from variational approaches. Measuring IR diversity is challenging due to space size and semantic ambiguity, i.e., the same semantic concept can be represented in many different ways. A naive use of any diversity metric such as cosine similarity could give semantically irrelevant diversity scores. In [20], the Centered Kernel Alignment (CKA) metric is proposed to obtain a statistical measure across a dataset on the similarity of any two layers of a DL model: \[\text{CKA}(K,L)=\frac{\text{HSIC}(K,L)}{\sqrt{\text{HSIC}(K,K)\text{HSIC}(L,L)}} \tag{6}\] where \(K\) and \(L\) are similarity matrices of the two feature maps being compared and HSIC is the Hilbert-Schmidt Independence Criterion, which measures statistical independence. The feature maps may be layer activations or attention maps such as Saliency, Integrated Gradients, and Grad-CAM [40, 39, 41]. [34] proposed the use of the pull-away loss term from generative adversarial networks to induce diversity of such activations. Self-attention [45] (not related to attention maps) is one of the key techniques in the transformer architecture. In [33] the embeddings used to feed attention heads are masked in such a way as to enforce diversity of activations. In zero-shot learning, the _attribute_ concept is used to enable training models that can, later on, predict unseen labels. These attributes can be considered for IR diversity as well [48]. Closely related to NCL, the self-supervised approach of contrastive learning [38, 7] trains two models to produce latent features that are diverse for false positive cases and similar for true positives, through a loss function such as the triplet loss, which enforces the models to learn the similarity metric.

**Input feature attribution diversity** The importance of input features can be used to measure behavior diversity in a way that, to the best of our knowledge, has not been explored. Figure 1 shows the relationship of an attribution-based metric w.r.t. prediction and representation diversity. Note that attribution is not the same as attention or attributes in the context of zero-shot learning.

Figure 1: Model behavior diversity. a) Invariant decision boundary & diverse sample representation. b) Diverse decision boundary as measured by prediction errors. c) Diverse decision boundary as measured by feature relevance.
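As a concrete reference, a short NumPy sketch of the two prediction-based metrics above, the pairwise disagreement measure (Eq. 4) and the Shannon equitability index (Eq. 5); the array layouts and names are illustrative assumptions.

```python
import numpy as np

def disagreement(pred_p, pred_q, y):
    """Eq. (4): fraction of samples on which exactly one of the two
    classifiers is correct (pred_p, pred_q, y are 1-D arrays of class ids)."""
    correct_p = pred_p == y
    correct_q = pred_q == y
    n01 = np.sum(~correct_p & correct_q)
    n10 = np.sum(correct_p & ~correct_q)
    return (n01 + n10) / len(y)

def shannon_equitability(preds, num_species=None):
    """Eq. (5): normalized entropy of the predicted classes of an ensemble.
    preds is a 1-D array of class ids; num_species defaults to the number of
    distinct predictions observed (the paper's S, total prediction species)."""
    _, counts = np.unique(preds, return_counts=True)
    p = counts / counts.sum()
    s = num_species if num_species is not None else len(counts)
    if s <= 1:
        return 0.0  # a single species carries no diversity
    return float(-np.sum(p * np.log(p)) / np.log(s))
```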
Attention maps, such as those obtained from the activation of intermediate layers of CNNs, reflect the excitation of a network given an input. This activation however is not necessarily correlated with the final prediction, e.g., it could be an inhibitory factor. Attribution, on the other hand, indicates the importance of a feature to the final decision. See Figure 2 where _Saliency_ is used to display the original pixels masked by the attribution scores from each model. A change to a pixel with high attribution (brighter) will have a stronger influence on the model prediction than a change to a pixel with low attribution. ### Other diversity-based resilience approaches Augmenting the training data by applying affine transformations such as rotations and scaling, geometric distortions such as blurring, and texture transfer help DL models to generalize better with a limited training data [27]. Adversarial training increases the robustness to intended attacks with adversarial samples to limit the model vulnerability to input perturbations [15]. Such training data approaches are effective and complementary to the design diversity approaches of this study that address the model diversity. Modality and point of view diversity [35] is an approach to address the failure modes of sensors such as cameras, radar, and lidar. The design diversity of DL models explored in this study is orthogonal to this approach, as model diversity can be applied to every single modality. ## 3 Methodology We propose three sets of experiments: 1. Evaluate resiliency of diverse architectures and training approaches. 2. Measure diversity of prediction and diversity of attribution from independently created models of diverse architectures and evaluate robustness correlation. 3. Enforce diversity with NCL, evaluate the resulting robustness, and inspect three kinds of diversity: prediction, representation, and attribution. In addition to addressing the main research questions, we put the following hypothesis to test: that attribution-based diversity (Equation 7) can be positively correlated with ensemble resiliency if a better accuracy trade-off is achieved compared to prediction-based diversity. \[A=\sum_{c=1}^{C}\sum_{p=1}^{P}Var(a_{c,p}) \tag{7}\] where \(a\) is the input attribution score of a model at color channel \(c\) and pixel coordinate \(p\). The computation of the input attribution scores \(a\) is performed with an attribution method, such as Saliency. This hypothesis is inspired by the theoretical result of the Littlewood and Miller (LM) model [22] that diverse design choices can produce less common failures. Diverse attribution maps of correct classifications imply that the models make predictions based on independent factors, which is not the case in prediction diversity. Figure 3: Relation between diverse design methodologies and the difficulty function in the LM Model [22] Figure 2: Attribution map diversity: Two models may predict the same outcome but based on different _evidence_. ### Probability model for design diversity (LM model) The Littlewood and Miller (LM) model [22] defines a probabilistic framework to analyze the impact of methodological diversity in the expected failure behavior. The model defines: 1) an input space \(\mathcal{X}=\{x_{1},x_{2},...\}\), representing all possible inputs \(x\) to a program and 2) a program space \(\mathcal{P}=\{(\pi_{1},\pi_{2})\}\), for all possible programs \(\pi\) that could implement a program specification. 
A given design methodology will determine the probability to come up with a program \(\pi\) and is denoted as \(S_{A}(\pi)\). Another design methodology \(S_{B}(\pi)\) will assign a different probability to the same program. The model uses the concept of a difficulty function \(\theta_{M}(x)\) that measures the probability that a randomly chosen program \(\pi\) from a given methodology distribution \(S_{M}(\pi)\) will fail on a particular input \(x\in\mathcal{X}\). The key insight consists in noticing that \(\theta_{A}(x)\) can be different for a different methodology \(\theta_{B}(x)\), i.e., for some methodology, a certain input may be difficult, but for another, it may be easy. See Figure 3 for a visual representation of these spaces. An analysis of this model concludes that if the design methodologies produce different difficulty functions \(\theta\), then the expected failure behavior on a random input will be negatively correlated due to the fact that the covariance of the \(\theta\)'s can be negative. With this model, it is finally shown that a design methodology with diverse design choices that satisfy the following three properties will result in less common failures: 1) logically unrelated (one decision is independent of the other), 2) common failures of a decision are due to different factors, and 3) indifference to the selection of each methodology (no methodology is superior). ### Loss function to enforce attribution diversity We perform a first attempt to enforce attribution diversity with the following loss: \[\frac{1}{M}\sum_{i=1}^{M}l(h^{i},y)-\lambda A \tag{8}\] This loss computes attribution scores variance in an ensemble and uses it as a penalty term weighted by \(\lambda\). ### Failures addressed In this study, we evaluate resilience to covariate dataset distribution shifts, i.e., when the distribution of input features of the test dataset does not match the distribution of the training dataset. We use four natural image perturbations from the ImageNet-\(\overline{\text{C}}\) dataset [28] that are sensible to occur in vision application domains, such as obstructions or liquid contaminants. Our scope is not to evaluate robustness against adversarial attacks, label shift variations, or resiliency to noise variations such as Gaussian, brown, etc. ## 4 Experimental results ### ML resiliency to data corruptions To understand the relationship between accuracy, size, and resiliency to natural corruptions of DL models we evaluate a set of architectures (convolutional NNs, transformers, and subnetworks from neural architecture search (NAS)) and training approaches (supervised, self-supervised, and knowledge distillation) on both the ImageNet validation dataset and on the corrupted version ImageNet-\(\overline{\text{C}}\) "Lines" (strength of 1.6). See Table 1. **Observations to Table 1:** Although the model size is highly correlated with the final accuracy and resilience in the corrupted dataset, the architecture seems to play a more determining factor. The smallest transformer with only 28M parameters is superior to other CNNs with 2 or 3x more parameters. Self-supervision slightly decreases both metrics as appreciated in the comparison of ResNet50 and ViT models using supervised learning. SwinV2 is an exception but this model introduced more architectural innovations too. Knowledge distillation from a CNN teacher shows a slight improvement over supervised ViTs. 
To understand the effect of other corruptions, we evaluate a ResNet50 model1 on six different data sets: ImageNet [8], ImageNetv2 [36], and four corruptions on a fixed perturbation strength from the ImageNet-\(\overline{\text{C}}\) dataset [28]: Plasma (4.0), Checkerboard(4.0), Waterdrop(7.0) and Lines (1.6). See Figure 4. The first two are in-distribution, i.e., the covariates (input features) and labels of the validation set follow a similar distribution to the training data set. The last four are out-of-distribution, as the model has never seen such corruptions of the input images during training. Footnote 1: In the remainder of this paper, we select ResNet50 as the baseline architecture due to its common use as a reference. **Observations to Figure 4:** Different corruptions have different effects on the model performance, and a typical "good" classifier with accuracy close to 80% can have a tremendous performance decrease in situations where a human would probably not. \begin{table} \begin{tabular}{|l l|c|c|c|c|c|} \hline Training & Arch. & DL model & \# & ImageNet & \multirow{2}{*}{ImageNet-\(\overline{\text{C}}\)} \\ approach & class & & & & & Lines1.6 \\ \hline Self-sup. & CNN & DINO ResNet50 [5] & 25.6M & 75.30 & 92.61 & 40.80 & 46.65 \\ Super. & CNN ResNet50 [16] & 25.6M & 75.85 & 92.88 & 34.95 & 55.88 \\ Superv. & CNN ResNet50 [32,34][46] & 25.0M & 77.49 & 93.57 & 99.46 & 60.61 \\ Superv. & CNN ResNet52 [16] & 60.2M & 78.25 & 93.96 & 40.71 & 62.09 \\ Superv. & NN & Efficient50 [34] & 66.3M & 74.22 & 92.13 & 51.11 & 73.81 \\ Superv. & CNN ResNet50 [4][46] & 53.3M & 82.90 & 95.02 & 51.89 & 72.46 \\ Superv. & VIT & SwinV2 [6] & 23.8M & 81.34 & 95.62 & 55.79 & 78.45 \\ Self-sup. & VIT & DINO WV2 [8] & 87.3M & 80.06 & 95.02 & 55.75 & 78.37 \\ Superv. & VIT & SwinV2 [6] & 87.84M & 83.08 & 96.46 & 58.71 & 81.05 \\ Superv. & VIT & Tbase [6] & 86.6M & 80.88 & 95.28 & 61.28 & 82.26 \\ K. distill & VIT & DeTbase [44] & 87.3M & 83.34 & 96.50 & 64.07 & 84.66 \\ Self-sup. & VIT & SwinV2 [25] & 87.9M & 83.34 & 96.44 & 59.976 & 81.70 \\ \hline \end{tabular} \end{table} Table 1: Resiliency of architectures to natural image corruption (ImageNet-\(\overline{\text{C}}\) Lines(1.6 strength)) ### Diversity of ensembles from heterogeneous architectures To understand the diversity/accuracy trade-off of the attribution-based metric in comparison to the established prediction-based diversity approach we perform two different experiments: First, we create multiple ensembles of independently trained models with a wide diversity in architecture. Second, we create multiple ensembles using models discovered in a weight-sharing super-network [4], i.e., models whose architecture has been found using neural architecture search (NAS) and not by manual design. The architectures explored here are CNNs (ResNext [46] & SqueezeNet [17]), Vision transformers (DeiT [44]) and NAS (MNASNET [42] & BootstrapNAS [29]) using supervised or self-supervised training2. In total 14 models were trained with different hyperparameters to create three-member ensembles of all possible combinations. Footnote 2: Details on the architecture and optimization hyper-parameters can be found in Section A of the supplementary material. Figure 5 shows the ensemble performance of all 364 ensembles created from these 14 models using an averaging consensus mechanism, i.e., the logit output of all ensemble members is averaged first and then the highest score is used to make the prediction. 
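To make the two consensus mechanisms and the improvement measure of Figures 5 and 6 concrete, a small NumPy sketch follows, assuming each member output is an \((N,C)\) array of logits; names are illustrative and ties in voting are resolved at random as described below.

```python
import numpy as np

def predict_averaging(member_logits):
    """Averaging consensus: mean of member logits, then argmax per sample."""
    return np.mean(member_logits, axis=0).argmax(axis=1)

def predict_voting(member_logits, rng=None):
    """Majority-vote consensus; draws are resolved at random."""
    rng = rng if rng is not None else np.random.default_rng(0)
    votes = np.stack([m.argmax(axis=1) for m in member_logits])   # (M, N)
    preds = np.empty(votes.shape[1], dtype=int)
    for n in range(votes.shape[1]):
        classes, counts = np.unique(votes[:, n], return_counts=True)
        preds[n] = rng.choice(classes[counts == counts.max()])
    return preds

def ensemble_improvement(member_logits, y, consensus=predict_averaging):
    """Y-axis of Figures 5/6: ensemble accuracy minus its best member's accuracy."""
    member_acc = [np.mean(m.argmax(axis=1) == y) for m in member_logits]
    ensemble_acc = np.mean(consensus(member_logits) == y)
    return ensemble_acc - max(member_acc)
```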
The left-hand side shows on the X-axis the attribution-based diversity metric (Eq. 7). The right-hand side shows the disagreement prediction diversity metric (Eq. 4). Each point is an ensemble evaluated on the entire validation dataset of ImageNet. The color indicates the final average accuracy of the ensemble. The Y-axis indicates the average benefit of creating an ensemble: \(Y=A_{ens}-A_{top}\), where \(A_{ens}\) is the ensemble accuracy and \(A_{top}\) is the accuracy of its most accurate member, i.e., how much accuracy improvement was obtained in comparison to a single model (the most accurate in the ensemble). In this way, it can be appreciated when an ensemble makes sense: it has to lay above the zero line (dashed). The ensemble cost is measured by the number of parameters which has a direct influence on the memory and the number of operations required. The ideal ensemble is one with the brightest color, smallest radius, and residing above zero. **Observations to Figure 5:** The figure shows how attribution diversity is positively correlated with ensemble improvement, while it corroborates the known fact that prediction-based diversity is negatively correlated [21]. Figure 6 shows the same ensemble combinations, but this time using a majority voting consensus mechanism, i.e., the prediction with the highest number of votes wins. Draws are randomly resolved. **Observations to Figure 6:** The same correlation trends can be observed with majority voting. However, the most interesting aspect is that the vast majority of the ensembles here reside under the zero line. This means that majority voting with three ensembles tends on average to produce less accurate models. This corroborates the findings of [21]. We evaluate the same ensembles on five more validation datasets and verify that the observed trend in the validation Figure 4: Top-1 accuracy of ResNet50 on in-distribution data sets (ImageNet and ImageNetv2[36]) and out-of-distribution datasets (ImageNet-\(\overline{\text{C}}\)). The resilience of ResNet50 drops significantly against natural corruptions. Figure 5: Evaluation on ImageNet validation dataset of 364 three-member ensembles from heterogeneous architectures using **averaging as consensus mechanism**. Y-axis: Improvement of the ensemble against its own top ensemble member. X-axis: Normalized diversity metric. Color: absolute ensemble accuracy. Bubble size: Model parameter size. The attribution diversity metric is not negatively correlated with the ensemble improvement as disagreement diversity is. Figure 6: Evaluation on ImageNet validation dataset of the same ensembles as in Figure 5 but using **voting as consensus mechanism**. In contrast to averaging, voting produces mostly ensembles that decrease the final performance instead of improving it. dataset applies to natural corruptions. In addition, we compare the two diversity metrics to a simple validation accuracy metric. See Figure 7. **Observations to Figure 7:** Attribution-based diversity is better correlated as well. These results serve as evidence to confirm that the diversity-accuracy trade-off is better for attribution than for prediction diversity. However, the metric of averaging the individual accuracies of the ensemble members is more strongly correlated with the ensemble improvement in corruptions. Next, Figure 8 presents the results of the second experiment on architectures created with NAS. We used the open-source framework BootstrapNAS [29, 30] to create a weight-sharing super-network. 
The super-network is trained from an initial ResNet50 model. We then sample 11 subnetworks with different configurations but similar complexity by varying the width and depth of the CNN. **Observations to Figure 8:** Although the correlations seem strong for all metrics, the actual ensemble improvement is very low, i.e., less than 0.04%. To identify the effect of the complexity of the chosen attribution method, we evaluated the pair-wise diversity on the entire validation set on six subnetworks using _Saliency_ and _Integrated Gradients_ attribution methods using 1, 2, 10, and 50 backpropagation passes. See Figure 9. The average correlation coefficient of the normalized diversity scores of all methods is 0.998. Using Saliency-based attribution is then justified as it provides the lowest performance penalty as the computational overhead grows linearly with the number of backpropagation passes. ### Enforcing diversity in homogeneous ensembles We perform a set of training experiments to enforce diversity into the ensembles through the loss function via the Negative Correlation Learning paradigm. We use ResNet50 for all ensemble members, and evaluate different heuristics: a) Independently trained members using cross-entropy as loss in Equation 1. Four different consensus approaches in GNCL using Equation 2: b) average, c) median, d) geometric mean, e) majority vote, f) GNCL and averaging consensus but masking the penalty term for incorrect classifications, i.e., \((h^{i}\neq y)\Rightarrow(\lambda=0)\), and g) Balancing a loss function between the team and individual members (Equation 3). The optimization method in all cases was AdaBelief [49] for 100 epochs with a learning rate of 1e-3 decaying 10% every 30 epochs, epsilon of 1e-8, betas: (0.9,0.999), batch size of 64 and a \(\lambda\) factor of 0.2. In ImageNet classification, we empirically observed that bigger \(\lambda\) values in Equation 2 fail to learn. Results for the six heuristics are presented in Figure 10. Figure 8: Comparison of 165 heterogeneous ensembles of **architectures automatically created with weight-sharing neural architecture search (NAS)**. 11 models with different architectures were selected to be very close in complexity, i.e., number of parameters. The correlation between the two diversity metrics is highly positive but the increment in performance is very small. Figure 7: Comparison of **trend lines**, i.e., correlation of improvement to three different metrics on five validation datasets (consensus: averaging). Columns: Datasets. Rows: Metrics. The attribution metric (a to e) has a much lower accuracy trade-off compared to the disagreement metric (f to j) but is higher than the average accuracy metric (k to o). Bigger plots to appreciate individual ensembles are presented in Section A.2 of the supplemental material. Figure 9: Comparison of Saliency and Integrated Gradients with 2, 10 & 50 samples. The X-axis represents different pairs of models created with BootstrapNAS. The Y-axis is the normalized diversity score of each method. **Observations to Figure 10:** Explicit enforcement of prediction diversity does not result in improved resilience. However the balanced loss (Eq. 3) provides a significant advantage in 3 out of 4 natural corruptions. #### 4.3.1 First attempt at enforcing attribution diversity We perform a first attempt to enforce attribution diversity using the loss of Equation 8 and the same optimization parameters used in GNCL. 
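For reference, a minimal PyTorch sketch of Saliency attributions (plain input gradients of the predicted-class logit) and of the attribution-diversity score \(A\) of Eq. 7 together with the penalty loss of Eq. 8; this is an illustrative sketch under these assumptions, not the exact implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    """Saliency: gradient of the predicted-class logit w.r.t. the input pixels."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad                                   # shape (N, C, H, W)

def attribution_diversity(models, x, create_graph=False):
    """Eq. (7): variance of the attribution scores across ensemble members,
    summed over color channels and pixels (and averaged over the batch)."""
    a = torch.stack([saliency(m, x, create_graph) for m in models])  # (M, N, C, H, W)
    return a.var(dim=0).sum(dim=(1, 2, 3)).mean()

def attribution_diversity_loss(models, x, y, lam=0.001):
    """Eq. (8): average member cross-entropy minus the weighted diversity term.
    create_graph=True is needed so the penalty itself can be backpropagated."""
    ce = torch.stack([F.cross_entropy(m(x), y) for m in models]).mean()
    return ce - lam * attribution_diversity(models, x, create_graph=True)
```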
The computational overhead to calculate the attributions is 2x using the Saliency method. Empirically, we tried five different lambda weights values: {10, 1,0.1,0.01,0.001} but found training instabilities. The smallest \(\lambda\) value resulted in convergence up to epoch 21 for 63.7% top1 accuracy. We believe that the penalty term of Eq. 8 is in conflict with the original loss and it would be more appropriate to investigate a better penalty term than to optimize this hyper-parameter in future work. #### 4.3.2 Diversity of NCL ensembles Figures 11, 12 together with Table 2 show three types of diversity (attribution, prediction, and intermediate representation) for three models created through independently created heterogeneous architectures, prediction diversity enforcement and attribution diversity enforcement with the following top1 accuracies on the ImageNet validation dataset: 78.2%, 76.1%, and 63.7%. **Prediction diversity.** In Table 2, the Shannon equitability index metric (Eq. 5) is shown for correctly and incorrectly classified samples for three ensembles: attribution diversity (Eq. 8), prediction diversity (Eq. 4) and heterogeneous architectures on all six datasets. The heterogeneous ensemble produces more diverse predictions in general. **Attribution diversity.** We present a few resulting attribution maps in Figure 11 for the NCL-based prediction-diversity enforcement, attribution-diversity enforcement (at epoch 21), and independently trained architectures. **Observations to Figure 11:** Independently trained heterogeneous architectures and attribution-diversity enforcement produce more diverse attribution maps than homogeneous models trained to have diverse prediction outcomes. **Representation diversity.** In Figure 12, we investigate the resulting diversity/similarity of the internal layers via CKA (Equation 6) of two ensemble members for three different diversity enforcing techniques. **Observations to Figure 12:** The CKA visualization reflects that the enforcement of attribution diversity produces \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{\(H_{corr}\)} & \multicolumn{3}{c|}{\(H_{corr}\)} \\ & IN & I2 & WD & L1 & PL & CB & IN & I2 & WD & L1 & PL & CB \\ \hline Ai. dir. & 0.13 & 0.16 & 0.29 & 0.32 & 0.23 & 0.26 & 0.46 & 0.49 & 0.42 & 0.66 & 0.57 & 0.58 \\ End. dir. & 0.10 & 0.13 & 0.29 & 0.31 & 0.21 & 0.23 & 0.46 & 0.49 & 0.48 & 0.69 & 0.59 & 0.63 \\ Human. & 0.12 & 0.17 & 0.35 & 0.37 & 0.25 & 0.29 & 0.51 & 0.35 & 0.24 & 0.77 & 0.67 & 0.71 \\ \hline \end{tabular} \end{table} Table 2: **Diversity of predictions** from all members of three ensembles as measured by the Shanon equitability index \(H\). \(H_{corr}\) and \(H_{inco}\) indicate the metric computed on all samples that were correctly or incorrectly classified. The six subcolumns correspond to the six validation datasets. Figure 11: **Attribution map diversity** of different diversity-inducing techniques on 8 ImageNet val. dataset samples. Figure 10: The resulting accuracy and **resiliency to natural corruptions of diversity enforcement** based on Negative Correlation Learning with different consensus mechanisms. Each ensemble has three members with a ResNet50 architecture. The heuristic of balancing the loss of the ensemble and the loss of the individual member produced the most resilient ensemble to corruptions. less similarity in the layers than output diversity enforcement or by independent heterogeneous architectures. 
## 5 Discussion and conclusions In this section, we discuss and interpret the results of Section 4 and summarize the research questions' answers. ### Answers to research questions **RQ1**: In our experiments, it is observed that model architecture is more important to resiliency than model accuracy or size. On **RQ2**, we consistently observed that attribution-based diversity is more positively correlated with accuracy than prediction-based disagreement diversity. Answering **RQ3**, balancing the loss of the individual members and the ensemble provided a significant advantage in 3 out of 4 natural corruptions when compared to the prediction diversity enforcement variants. **RQ4**: Prediction diversity was higher for heterogeneous architectures trained independently than NCL on prediction or attribution diversity. Attribution diversity is significantly lower when enforcing prediction diversity compared to heterogeneous architectures trained independently. Activation diversity is low at the last layers for both prediction and attribution diversity enforcement, while for heterogeneous architectures trained independently, the middle layers showed less diversity. ### Results discussions **Advantage of Transformers vs CNNs:** The superiority of the transformer architecture in terms of resilience against natural corruptions could be attributed to their capability to pay attention globally instead of locally as CNNs do, and thus they may be able to construct more useful intermediate representations that suffer less from perturbations. **NAS ensembles in our experiments are not diverse enough:** Only small improvements are obtained from ensembles created from sub-networks. These are jointly trained as part of a single weight-sharing super-network and thus have little diversity. Larger search/design spaces should be explored in future work. **Balancing member vs ensemble accuracy:** Explicitly enforcing prediction diversity is outperformed by implicit enforcement through the balance of ensemble and individual accuracy (Equation 3). This could be due to the use of two less conflicting objectives than prediction diversity. **Diversity-accuracy trade-off improvement:** In contrast to the disagreement metric, attribution diversity does not require models to deviate from a correct prediction. The correlation is however not very strong as models tend to find similar features for prediction, and implicit attribution deviations may be the product of imperfect learning. The enforcement of attribution diversity with the proposed loss proved however to be insufficient and better heuristics need to be explored. **Diverse architectures produced more diversity than NCL-based methods** An ensemble selected from a combination of independently trained heterogeneous architectures and training approaches resulted in higher levels of diversity in prediction, attribution, and intermediate layers. However, mixing different architectures does not consistently produce good ensembles as observed in the many ensembles under the zero line in Figure 7, as a good model may not benefit from ensembling with a less good one, as their common modes are not negatively correlated. ### Conclusions and next steps In this study, we explored different approaches to measure and enforce diversity in ensembles and evaluated their impact on natural data corruption resiliency. 
The key take-aways are: 1) model architecture is more important for resiliency than model size or model accuracy, 2) attribution-based diversity is less negatively correlated to the ensemble accuracy than prediction-based diversity, 3) a balanced loss function of individual and ensemble accuracy creates more resilient ensembles for image natural corruptions, and 4) architecture diversity produces more diversity in all explored diversity metrics: predictions, attributions, and activations. In addition, other valuable findings are: a) Saliency attribution can be sufficient to measure input attribution diversity, b) Ensembles created from models of similar complexity that were discovered by weight-sharing Neural Architecture Search for our experiments barely provide any accuracy improvement, and c) Enforcing attribution-based diversity during training through a variance-based penalty term is not stable and needs further research. In future work, several experiments could be done to understand the complexity-resiliency trade-off, e.g., through knowledge distillation, improved heuristics to enforce attribution diversity, and compare diversity approaches in tasks beyond image classification. Figure 12: Centered Kernel Alignment maps to visualize the resulting **similarity of model layers** on different diversity enforcing techniques. Acknowledgements This work was partially funded by the Federal Ministry for Economic Affairs and Climate Action of Germany, as part of the research project SafeWahr (Grant Number: 19A21026C). We would also like to thank Professors Lorenzo Strigini and Peter Popov for the fruitful conversations and feedback on this work.
2303.01813
anafi_ros: from Off-the-Shelf Drones to Research Platforms
Off-the-shelf drones are aerial systems that are simple to operate and easy to maintain. However, due to proprietary flight software, these drones usually do not provide any open-source interface which can enable them for autonomous flight in research or teaching. This work introduces a package for ROS1 and ROS2 for straightforward interfacing with off-the-shelf drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all four supported drone models. This framework can connect with the same ease to a single drone or a team of drones from the same ground station. The developed package was intensively tested at the limits of the drones' capabilities and thoughtfully documented to facilitate its use by other research groups worldwide.
Andriy Sarabakha
2023-03-03T09:40:02Z
http://arxiv.org/abs/2303.01813v1
# anafi_ros: from Off-the-Shelf Drones to Research Platforms ###### Abstract The off-the-shelf drones are simple to operate and easy to maintain aerial systems. However, due to proprietary flight software, these drones usually do not provide any open-source interface which can enable them for autonomous flight in research or teaching. This work introduces a package for ROS1 and ROS2 for straightforward interfacing with off-the-shelf drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all four supported drone models. This framework can connect with the same ease to a single drone or a team of drones from the same ground station. The developed package was intensively tested at the limits of the drones' capabilities and thoughtfully documented to facilitate its use by other research groups worldwide. ## I Introduction As one of the fastest-growing fields in the aerospace industry, unmanned aerial vehicles (UAVs) can provide a cost-efficient solution to many time-consuming tasks, such as subterranean exploration [1], power-line inspection [2] and additive manufacturing [3]. Different applications require drones equipped with appropriate sensors, for example, a thermal camera for wildfire monitoring [4], a camera with a high optical zoom for aerial observation [5], or a stereo camera for visual odometry [6]. Custom-built drones can be designed and manufactured to meet specific needs and requirements, allowing for more flexibility in their functionality [7]. However, assembling and configuring custom-made drones require technical expertise and labour time. Unlike custom-made drones, off-the-shelf drones are readily available and can be easily purchased from many retailers. This makes them an attractive option for individuals and organisations that want to use drones for a variety of purposes, such as teaching and research. Another advantage of using off-the-shelf drones is their reliability and durability since they are built to operate in a wide range of conditions. While there are many advantages of using off-the-shelf drones, their main limitation is the possibility of autonomous deployments, making them less suitable for certain applications. To partially overcome this issue, proprietary software development kits (SDKs) were released by some drone manufacturers, such as DJI [8], Ryze [9], Parrot [10] and Bitcraze [11]. Still, only discontinued Parrot Bebop [12], and tiny-size Ryze Tello [13] and Bitcraze Crazyflie [14] have the interface to bridge them with the robot operating system (ROS). Nowadays, ROS has become a standard development environment for modern roboticists. The main advantage of ROS is the possibility for the modularization of software, making it easy to reuse and modify individual components. Besides, ROS provides a standard set of libraries, tools and conventions for communication between different parts of a robotic system. Moreover, ROS has a large and active community with many available resources. A huge variety of ROS packages is available for building the navigation stack for aerial robots, like visual-inertial localisation [15], environment perception [16], motion planning [17], model-based [18] and model-free [19] control. This work introduces a bridge which allows a straightforward connection between the drones of the Parrot1 ANAFI family (illustrated in Fig. 1) and both ROS22. 
Parrot ANAFI drones were chosen because each model in the family offers unique features making them suitable for various applications. A comprehensive comparison of the characteristics of Parrot ANAFI drones is provided. The developed ROS package is hardware agnostic, allowing connecting seamlessly to all supported drones. This framework also allows connecting to single or multiple drones from the same ground station. The developed package was intensively tested at the limits of the drones and thoughtfully documented to facilitate its use by other researchers. Footnote 1: The author has no relations or conflicts of interest with Parrot SA. This work is organised as follows. First, the technical details of the experimental platforms are provided in Section II. Next, Section III describes the structure of the developed framework. Then, Section IV provides experimental validation of the developed framework. Finally, Section V summarises this work with conclusions and future work. Fig. 1: Parrot ANAFI family drones. ## II Experimental Platforms Let the world fixed frame be \(\mathcal{F}_{W}=\{\vec{\mathbf{x}}_{W},\vec{\mathbf{y}}_{W},\vec{\mathbf{z}}_{W}\}\), and the drone body frame be \(\mathcal{F}_{D}=\{\vec{\mathbf{x}}_{D},\vec{\mathbf{y}}_{D},\vec{\mathbf{z}}_{D}\}\). The origin of the body frame is located at the centre of mass (COM) of the UAV. The configuration with the corresponding reference frames is illustrated in Fig. 2. The absolute position of UAV \(\mathbf{p}_{D}^{W}=\begin{bmatrix}x&y&z\end{bmatrix}^{T}\) is described by three Cartesian coordinates of its COM in \(\mathcal{F}_{W}\). While the attitude of UAV \(\boldsymbol{\theta}_{D}^{W}=\begin{bmatrix}\phi&\theta&\psi\end{bmatrix}^{T}\) is described by three Euler's angles: roll \(\phi\), pitch \(\theta\) and yaw \(\psi\). The time derivative of the position (\(x\), \(y\), \(z\)) gives the linear velocity of the UAV's COM expressed in \(\mathcal{F}_{W}\): \[\mathbf{v}=\begin{bmatrix}\dot{x}&\dot{y}&\dot{z}\end{bmatrix}^{T}, \tag{1}\] and the velocity expressed in \(\mathcal{F}_{D}\) is \[\mathbf{v}_{D}=\begin{bmatrix}v_{x}&v_{y}&v_{z}\end{bmatrix}^{T}. \tag{2}\] The relation between \(\mathbf{v}\) and \(\mathbf{v}_{B}\) is given by \[\mathbf{v}=\mathbf{R}(\phi,\theta,\psi)\mathbf{v}_{D}, \tag{3}\] in which \(\mathbf{R}(\phi,\theta,\psi)\in\mathsf{SO}(3)\) is the rotation matrix from \(\mathcal{F}_{B}\) to \(\mathcal{F}_{W}\): \[\mathbf{R}(\phi,\theta,\psi)=\begin{bmatrix}\mathsf{c}_{\psi}\mathsf{c}_{ \theta}&\mathsf{c}_{\psi}\mathsf{s}_{\phi}\mathsf{s}_{\theta}-\mathsf{c}_{ \phi}\mathsf{s}_{\psi}&\mathsf{s}_{\phi}\mathsf{s}_{\psi}+\mathsf{c}_{\phi} \mathsf{c}_{\psi}\mathsf{s}_{\theta}\\ \mathsf{c}_{\theta}\mathsf{s}_{\psi}&\mathsf{c}_{\phi}\mathsf{c}_{\psi}+ \mathsf{s}_{\phi}\mathsf{s}_{\psi}\mathsf{s}_{\theta}&\mathsf{c}_{\phi} \mathsf{s}_{\psi}\mathsf{s}_{\theta}-\mathsf{c}_{\psi}\mathsf{s}_{\phi} \mathsf{s}_{\phi}\\ -\mathsf{s}_{\theta}&\mathsf{c}_{\theta}\mathsf{s}_{\phi}&\mathsf{c}_{\phi} \mathsf{c}_{\theta}\end{bmatrix}, \tag{4}\] in which \(\mathsf{c}_{\star}\) and \(\mathsf{s}_{\star}\) are \(\cos(\star)\) and \(\sin(\star)\), respectively. 
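For readers implementing these conventions, a minimal NumPy sketch of Eqs. (3)-(4) is given below; it only illustrates the frame transformation (it is not part of anafi_ros), and the numerical values of the attitude and of the body-frame velocity are arbitrary.

```python
import numpy as np

def R(phi, theta, psi):
    """Rotation matrix from the body frame F_D to the world frame F_W, Eq. (4)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(psi) * c(theta), c(psi) * s(phi) * s(theta) - c(phi) * s(psi), s(phi) * s(psi) + c(phi) * c(psi) * s(theta)],
        [c(theta) * s(psi), c(phi) * c(psi) + s(phi) * s(psi) * s(theta), c(phi) * s(psi) * s(theta) - c(psi) * s(phi)],
        [-s(theta), c(theta) * s(phi), c(phi) * c(theta)]])

# Illustrative attitude (rad) and velocity measured in the body frame (m/s)
phi, theta, psi = 0.05, -0.10, 1.57
v_D = np.array([3.0, 0.0, 0.5])

# Eq. (3): the same velocity expressed in the world frame
v = R(phi, theta, psi) @ v_D
print(v)
```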
The time derivative of the attitude \((\phi,\theta,\psi)\) gives the angular velocity expressed in \(\mathcal{F}_{W}\): \[\boldsymbol{\omega}=\begin{bmatrix}\dot{\phi}&\dot{\theta}&\dot{\psi}\end{bmatrix} ^{T}, \tag{5}\] and the angular velocity expressed in \(\mathcal{F}_{D}\) is \[\boldsymbol{\omega}_{D}=\begin{bmatrix}\omega_{\phi}&\omega_{\theta}&\omega_ {\psi}\end{bmatrix}^{T}. \tag{6}\] The relation between \(\boldsymbol{\omega}\) and \(\boldsymbol{\omega}_{D}\) is given by \[\boldsymbol{\omega}=\mathbf{T}\boldsymbol{\omega}_{D}, \tag{7}\] in which \(\mathbf{T}\) is the transformation matrix: \[\mathbf{T}=\begin{bmatrix}1&\sin\phi\tan\theta&\cos\phi\tan\theta\\ 0&\cos\phi&-\sin\phi\\ 0&\sin\phi\sec\theta&\cos\phi\sec\theta\end{bmatrix}. \tag{8}\] The vector of control inputs \(\mathbf{u}\) is considered as in [20]: \[\mathbf{u}=\begin{bmatrix}T^{*}&\tau_{\phi}^{*}&\tau_{\theta}^{*}&\tau_{\psi} ^{*}\end{bmatrix}^{T}, \tag{9}\] where \(T^{*}\) is the reference thrust acting along \(\vec{\mathbf{z}}_{D}\) axis, whereas \(\tau_{\phi}^{*}\), \(\tau_{\theta}^{*}\) and \(\tau_{\psi}^{*}\) are the reference moments acting around \(\vec{\mathbf{x}}_{D}\), \(\vec{\mathbf{y}}_{D}\) and \(\vec{\mathbf{z}}_{D}\) axes, respectively. For mobile robots equipped with a camera mounted on a gimbal, there is also a gimbal reference frame \(\mathcal{F}_{G}=\{\vec{\mathbf{x}}_{G},\vec{\mathbf{y}}_{G},\vec{\mathbf{z}}_{ G}\}\). The origin of the gimbal frame is located in front of the UAV's COM, The position of the camera in \(\mathcal{F}_{D}\) is \(\mathbf{p}_{G}^{D}=\begin{bmatrix}x_{G}&y_{G}&z_{G}\end{bmatrix}^{T}\), while the attitude of the gimbal in \(\mathcal{F}_{D}\) is \(\boldsymbol{\theta}_{G}^{D}=\begin{bmatrix}\phi_{G}&\theta_{G}&\psi_{G}\end{bmatrix} ^{T}\). The rotation matrix of the gimbal in \(\mathcal{F}_{W}\) can be calculated with \[\mathbf{R}_{G}^{W}=\mathbf{R}(\phi,\theta,\psi)\mathbf{R}(\phi_{G},\theta_{G}, \psi_{G}), \tag{10}\] while the position of the camera in \(\mathcal{F}_{W}\) can be obtained with \[\mathbf{p}_{G}^{W}=\mathbf{p}_{D}^{W}+\mathbf{R}(\phi,\theta,\psi)\mathbf{p}_{G }^{D}. \tag{11}\] ### _NAAFI Drones_ Parrot ANAAFI drones are small, lightweight UAVs mainly designed for aerial photography and videography. They are equipped with high-resolution high-dynamic-range (HDR) cameras mounted on a \(2\)-axis gimbal, allowing to capture smooth and stable footage from the air. These drones have a unique folding design, making them portable and easily deployable: it takes less than \(1\,\min\) to unfold the drone, turn it on, connect to the remote controller and take off. Moreover, ANAAFI drones have reasonably long flight duration thanks to the lithium polymer (LiPo) battery, which has a built-in USB-C port for hassle-free charging. The Parrot ANAAFI family has four drone models: basic _AAAFI 4K_, _ANAAFI Thermal_ with a thermal camera, water- and dust-resistant _ANAAFI USA_ and _ANAAFI Ai_ with onboard computing and obstacle avoidance capabilities. These drones are illustrated in Fig. 1, while Table I summarises the properties of each model. **Remark 1**: _Since low-level stabilisation controllers as in [21] are included in ANAFI's autopilot as illustrated in Fig. 3, the virtual control inputs in (9) can be considered as:_ \[\mathbf{u}=\begin{bmatrix}v_{z}^{*}&\phi^{*}&\theta^{*}&\omega_{\psi}^{*} \end{bmatrix}^{T}. \tag{12}\] Fig. 3: Control scheme of Parrot ANAAFI UAV. Fig. 2: Configuration of Parrot ANAAFI with its reference frames. 
#### Iii-A1 ANAFI 4K This model (shown in Fig. 1a) is the first and basic drone of the ANAFI family. ANAFI 4K is one of the quietest drones in its class with a noise level of \(64\ \mathrm{dB}\) at \(1\ \mathrm{m}\). With \(25\ \mathrm{min}\) of flight time, the battery can be recharged via a USB-C cable in \(90\ \mathrm{min}\). The model of ANAFI 4K is available in Parrot's software-in-the-loop simulation environment - Sphinx. #### Iii-A2 ANAFI Thermal This model (shown in Fig. 1b) is the upgrade of ANAFI 4K with a thermal camera. The optical unit of ANAFI Thermal combines the electro-optics with an infrared sensor, making it possible to identify temperatures between \(-10\ ^{\circ}\mathrm{C}\) and \(+400\ ^{\circ}\mathrm{C}\). Thanks to the FLIR Lepton radiometric sensor, the absolute temperature of each pixel can be determined. The RGB image can be blended with thermal images. This enables to detection of hot spots with the thermal camera, while the RGB camera allows the viewing of important details. Despite the thermal camera, ANAFI Thermal is the smallest and lightest model of the family. #### Iii-A3 Anafi Usa This model (shown in Fig. (c)c) is the rescue-grade drone, featuring \(32\)x zoom and thermal imaging capabilities to meet the demands of first responders and search-and-rescue teams. To achieve this, ANAFI USA is equipped with three front-mounted cameras: a thermal camera, \(21\)\(\mathrm{Mpx}\) RGB wide-angle camera (for \(1\)x to \(5\)x zoom) and \(21\)\(\mathrm{Mpx}\) RGB telephoto camera (for \(5\)x to \(32\)x zoom), which guarantees a continuous zoom. The \(32\)x zoom allows seeing details as small as \(1\)\(\mathrm{cm}\) from a distance of \(50\)\(\mathrm{m}\). The image stabilisation system of ANAFI USA ensures high-quality footage even at \(15\)\(\mathrm{m/s}\) wind gust. Despite its compact design, ANAFI USA boasts a \(32\)\(\mathrm{min}\) of flight time. ANAFI USA has IPS5\({}^{h}\) ingress protection, offering water and dust resistance and making it suitable to fly in rainy conditions. ANAFI USA has a service ceiling of \(6\)\(\mathrm{km}\) and can operate in temperatures between \(-35\)\(\mathrm{\SIUnitSymbolCelsius}\) and \(+43\)\(\mathrm{\SIUnitSymbolCelsius}\). The body of ANAFI USA is mainly made of polyamide, reinforced with carbon fibre and streamlined using hollow glass beads. The data stored on ANAFI USA or sent through the networks are encrypted, and the drone is protected against malicious software modification attempts. Footnote 5: protected from limited dust ingress #### Iii-A4 Anafi Ai This model (shown in Fig. (d)d) is the biggest and heaviest but most advanced of the ANAFI family. Anafi Ai is the first drone to use the 4G cellular network connection, in addition to Wi-Fi, as an alternative encrypted data link between the drone and the remote controller, theoretically enabling control at any distance. Besides a high-resolution \(48\)\(\mathrm{Mpx}\) RGB camera with an ISO range of \(50-6400\), ANAFI Ai is also equipped with a pair of multi-directional stereo cameras, which allow the computation of the occupancy grid to avoid obstacles automatically. Anafi Ai has a 3-axis gimbal differently from other Anafi models with 2-axis gimbal. The maximum horizontal speed of Anafi Ai is \(16\)\(\mathrm{m/s}\), thanks to the optimized aerodynamic performance of the vehicle. Anafi's \(6800\)\(\mathrm{mAh}\) battery allows \(32\)\(\mathrm{min}\) of flight time and can be recharged in \(150\)\(\mathrm{min}\). 
ANAFI Ai has IPX3\({}^{c}\) ingress protection, offering water resistance and making it suitable to fly in rainy conditions. ANAFI Ai can execute custom C++ and Python code onboard, thanks to Parrot's Air SDK. Air SDK allows loading and running code directly on ANAFI Ai and accessing all sensors, connectivity interfaces and autopilot features. The model of ANAFI Ai is available in the Sphinx simulation environment. ### _Remote Controllers_ Parrot ANAFI drones come with a handheld remote radio controller called Skycontroller, allowing the user to control the drone and access its various features. Skycontrollers feature a light and compact design with a conventional layout of sticks and buttons. Moreover, they have a built-in battery rechargeable through the USB-C port. The Parrot ANAFI family has two Skycontroller models: _Skycontroller 3_ for ANAFI 4K, ANAFI Thermal and ANAFI USA, and _Skycontroller 4_ for ANAFI Ai. The two versions of Skycontrollers are illustrated in Fig. 4, while Table II summarises the properties of each model. #### Iii-B1 Skycontroller 3 This model (shown in Fig. (a)a) is a remote radio controller designed for Parrot ANAFI 4K, Thermal and USA drones. Skycontroller 3 has a maximum range of up to 4 \(\mathrm{km}\). #### Iii-B2 Skycontroller 4 This model (shown in Fig. (b)b) is a remote 4G controller designed for Parrot ANAFI Ai. Skycontroller 4 has IP5X\({}^{f}\) ingress protection, offering dust resistance. Fig. 4: Parrot Skycontroller series remote controllers. ## III Developed Framework The developed framework - anafi_ros - is a python-based ROS package which enables interfacing with all available Parrot ANAFI quadrocopters. Besides being compatible with all physical drones, anafi_ros can connect to virtual drones in Parrot's simulation environment - Sphinx. The developed anafi_ros is built on top of Parrot's official python SDK - Olympe - which provides a programming interface for Parrot ANAFI drones. The communication flow in anafi_ros allows connecting directly to the drones via Wi-Fi interfaces or through Skycontrollers via USB ports, which is highly recommended. The developed framework makes connecting multiple drones to the same ground station easy by automatically assigning a different virtual IP address to each connected Skycontroller and managing port forwarding. The main functionalities of anafi_ros include drone piloting, feedback of flight parameters from onboard sensors, gimbal control, drone state monitoring, video streaming from onboard cameras, picture capturing, video recording, file transferring between onboard storage and ground station, drone calibration and flight plan management. **Remark 2**: _For the complete list of subscribed and published topics, available services and parameters, please refer to Appendix._ The developed package is organised with several subelements to facilitate the development, as depicted in Fig. 5. In other words, each physical component, such as the drone itself, gimbal, camera, battery, connection link, storage device and remote controller, has a respective software element. #### Iii-A1 Drone The drone element is a core part of the package, which manages the connection to the drone, enables the control of the drone and provides feedback from the drone. 
This element allows piloting the drone in three modes: directly commanding values in (12), commanding relative displacements \(\left[\Delta x^{*}\quad\Delta y^{*}\quad\Delta z^{*}\quad\Delta\psi^{*}\right]^ {T}\) or commanding world references \(\left[\lambda_{x}^{*}\quad\lambda_{y}^{*}\quad z^{*}\quad\psi^{*}\right]^{T}\), where \(\lambda_{x}^{*}\) and \(\lambda_{y}^{*}\) are the desired latitude and longitude, respectively. The drone element retrieves and publishes real-time information, such as the drone's attitude, altitude, speed and GPS location. It forwards to the drone the requests for arming, taking-off and landing. Drone class also allows bounding the altitude, distance, horizontal and vertical speed, pitch and roll angles, and attitude rates. #### Iii-A2 Gimbal The gimbal element provides control and feedback on the camera's gimbal. This element controls the desired pitch \(\phi_{G}^{*}\) and roll \(\theta_{G}^{*}\) of the camera. It also provides the actual attitude \(\mathbf{\theta}_{G}^{D}\) of the gimbal and allows for setting the maximum rotational speed of the gimbal. #### Iii-A3 Camera The camera element provides essential capabilities for the camera, such as changing the zoom level, capturing pictures and recording videos. This element also publishes real-time video stream, camera calibration matrix and actual zoom level. It also allows setting the camera mode, image style and streaming mode and enabling HDR mode. #### Iii-A4 Battery The battery element provides the battery status, such as the battery's level, health and voltage. #### Iii-A5 Link The link element provides information on the connection to the drone, such as the link quality, signal strength and connection throughput. #### Iii-A6 Storage The storage element provides the available memory on the microSD cards if installed. This element also allows downloading media (photos and videos) from the storage device and formatting it. #### Iii-A7 Controller The controller element provides an alternative to connect to the drone via the remote controller. This element reads and publishes the state of the sticks (gaz/yaw and pitch/roll), triggers (gimbal tilt and camera zoom) and buttons (return to home, centre camera and reset zoom). The remote controller also streams its real-time attitude. ### _Complementary Packages_ A complimentary ROS package - anafi_autonomy1 - was developed on top of anafi_ros to enable safe navigation of ANAFI drones by adding some high-level capabilities, like position and velocity control. Besides, other open-source ROS packages are available for building the navigation stack for aerial robots, like visual-inertial localisation [15]ii, environment perception [16]iii, motion planning [17]iv, model-based [18]v and model-free [19]vi control. Fig. 5: UML diagram of the structure of anafi_ros. ## IV Experimental Validation To verify the declared characteristics and validate the developed package, we tested the drone's flight characteristics, gimbal response and camera capabilities separately. ### _Drone_ The drones were pushed to their limits by commanding the maximum control inputs to verify the drones' capabilities and validate the developed package. The tests were performed in an open field on a windless day. For the pitch tracking response, first, the commanded pitch was set to \(40^{\circ}\) for \(2\) s, then, it was reversed to \(-40^{\circ}\) for \(2\) s and, finally, set to \(0^{\circ}\). As can be observed from Fig. 
5(a), all drones were able to achieve the desired pitch while reaching speeds above \(8\)\(\mathrm{m/s}\) and stabilise at around \(0^{\circ}\) in the end. ANAFI 4K and Thermal had smoother behaviour, while ANAFI USA had a more aggressive response. Similarly, for the roll tracking response, first, the commanded roll was set to \(40^{\circ}\) for \(2\) s, then, it was reversed to \(-40^{\circ}\) for \(2\) s and, finally, set to \(0^{\circ}\). As can be observed from Fig. 5(b), all drones were able to achieve the desired roll while reaching speeds of above \(10\)\(\mathrm{m/s}\) and stabilise at around \(0^{\circ}\) in the end. ANAFI 4K and Thermal still had a stable response, while ANAFI USA and Ai had more twitching behaviour. For the vertical velocity tracking response, first, the commanded velocity was set to \(4\)\(\mathrm{m/s}\) for \(2\) s, then, it was reversed to \(-4\)\(\mathrm{m/s}\) for \(2\) s and, finally, set to \(0\)\(\mathrm{m/s}\). As can be observed from Fig. 5(c), all drones have an initial delay between \(100\)\(\mathrm{m}\)s and \(200\)\(\mathrm{m}\)s but later were able to achieve the desired climbing and descend speeds, reaching the altitude of almost \(7\)\(\mathrm{m}\) in \(2\) s, and stabilise at around \(0\)\(\mathrm{m/s}\) in the end. All drones had smooth and stable behaviour. For the yaw rate tracking response, first, the commanded yaw rate was set to \(200^{\circ}/\mathrm{s}\) for \(2\) s, then, it was reversed to \(-200^{\circ}/\mathrm{s}\) for \(2\) s and, finally, set to \(0^{\circ}/\mathrm{s}\). As shown in Fig. 5(d), all drones were able to achieve the desired yaw rate, making a \(360^{\circ}\) turn in less than \(2.5\) s, and stabilise at around \(0^{\circ}/\mathrm{s}\) in the end. Similarly, to the velocity tracking, ANAFI 4K, Thermal and USA have a response delay of approximately \(100\)\(\mathrm{m}\)s; while, for ANAFI Ai, it is approximately \(200\)\(\mathrm{m}\)s. After the transient phase, all drones had stable spins at the desired yaw rate. Fig. 6: Piloting response. ### _Gimbal_ All ANAFI drones have an active gimbal on which the main cameras are mounted. The gimbal can adjust its roll and pitch in orientation or angular velocity modes. Fig. 7 shows the responses of the gimbal in two modes for two controlled axis between the gimbal's operational limits. It is possible to observe that ANAFI Ai has the fastest response but a limited range compared to other ANAFI drones. **Remark 3**: ANAFI 4K, Thermal and USA are equipped with similar gimbals, so their response is almost identical. ### _Camera_ The main difference between ANAFI drones is the set of cameras they are equipped with, as summarized in Fig. 8. The developed package allows switching online between streams from all available cameras. ANAFI 4K has one RGB front-mounted camera, which can stream live \(1280\times 720~{}\mathrm{px}\) images, shown in Fig. (a)a. ANAFI Thermal, besides the same RGB camera as ANAFI 4K, also has a thermal camera, which can stream live \(960\times 720~{}\mathrm{px}\) images, shown in Fig. (b)b. ANAFI USA has three front-mounted cameras: a thermal camera and two RGB wide-angle and telephoto cameras, which can stream high-details images, shown in Fig. (c)c, where the road sign highlighted in red in Fig. (a)a is zoomed in. 
ANAFI Ai, besides a high-resolution RGB camera, which can stream live \(1920\times 1080~{}\mathrm{px}\) images, also has a pair of frontal stereo cameras, which allow the computation of 3D environment information, like the \(176\times 90~{}\mathrm{px}\) disparity map, shown in Fig. (d)d, where the palm leaves in the proximity are detected. Besides streaming live the video feed, all drones can shoot pictures, record videos and store them at maximum resolution on the memory card. In addition, anafi_ros allows downloading the stored media from the drone. **Remark 4**: _All ANAFI drones also have a down-facing grey-scale global shutter \(320\times 240~{}\mathrm{px}\) camera for optical flow. However, this video stream is not accessible yet._ Fig. 8: Camera features of different Anafi drones. Fig. 7: Gimbal response. ## V Conclusions This work introduces a ROS1 and ROS2 package - anafi_ros - for simple interfacing with the drones from the Parrot ANAFI family. The developed ROS package is hardware agnostic, allowing connecting seamlessly to four supported models. The developed package was intensively tested on the drones at maximum roll and pitch angles of \(\pm 40^{\circ}\), corresponding to the horizontal speeds above \(\pm 10~{}\mathrm{m/s}\), maximum vertical speed of \(\pm 4~{}\mathrm{m/s}\) and maximum yaw rate of \(\pm 200^{\circ}\mathrm{/s}\). All drone models demonstrated satisfactory performance and stable response. We hope the developed framework will provide new opportunities for further applications of aerial robots. ### _Subscribed topics_ <topic name> (<message type>): <topic description> * **camera/command** (CameraCommand): camera zoom commands * **drone/command** (PilotingCommand): drone piloting commands * **drone/moveby** (MoveByCommand): move the drone by the given displacement and rotate by the given angle * **drone/moveto** (MoveToCommand): move the drone to the specified location * **gimbal/command** (GimbalCommand): gimbal attitude commands ### _Published topics_ <topic name> (<message type>, \(<\) frequency>) \(\in\) {<set of values>} / [<range of values>]: <topic description> [<measurement units>] * **battery/health** (UInt8, 1 Hz) \(\in\) [0: bad, 100: good]; battery health [%] * **battery/percentage** (UInt8, 30 Hz) \(\in\) [0: empty, 100: full]: battery level [%] * **battery/voltage** (Float32, 1 Hz): battery voltage [V] * **camera/awb_b_gain** (Float32, 30 Hz): camera automatic white balance (AWB) blue gain * **camera/awb_r_gain** (Float32, 30 Hz): camera automatic white balance (AWB) red gain * **camera/camera_info** (CameraInfo, 30 Hz): main camera's info * **camera/exposure_time** (Float32, 30 Hz): exposure time of the main camera [s] * **camera/image** (Image, 30 Hz): image from the main front camera * **camera/hfo** (Float32, 30 Hz): camera's horizontal field of view [\({}^{\circ}\)] * **camera/iso_gain** (UInt16, 30 Hz): camera's sensitivity gain * **camera/vfo** (Float32, 30 Hz): camera's vertical field of view [\({}^{\circ}\)] * **camera/zoom** (Float32, 5 Hz): camera zoom level [x] * **drone/altitude** (Float32, 30 Hz) \(>\) 0.0: drone's ground distance [m] * **drone/altitude** (Float32, 5 Hz): drone's ground distance above the take-off point [m] * **drone/altitude** (QuaternionStamped, 30 Hz): drone's attitude in north-west-up frame * **drone/gps/fix** (Bool, 1 Hz) \(\in\) {true: GPS is fixed, false: GPS is not fixed} * **drone/gps/location** (NavSatFix, 1 Hz): drone's GPS location * **drone/gps/satellites** (UInt8): number of GPS satellites * 
**drone/rpy** (Vector3Stamped, 30 Hz): drone's roll, pitch and yaw in north-west-up frame [\({}^{\circ}\)] * **drone/speed** (Vector3Stamped, 30 Hz): drone's speed in body frame [m/s] * **drone/state** (String, 30 Hz) \(\in\) {'CONNECTING', 'LANDED', 'TAKINGOFF', 'HOVERING', 'FLYING', 'LANDING', 'EMERGENCY', 'DISCONNECTED',...}: drone's state * **gimbal/attitude/absolute** (QuaternionStamped, \(5~{}\mathrm{Hz}\)): gimbal's attitude in north-west-up frame * **home/location** (PointStamped): home location * **link/quality** (UInt8, 30 Hz) \(\in\) [0: bad, 5: good]: link quality * **skycontroller/attitude** (QuaternionStamped, \(20~{}\mathrm{Hz}\)): SkyController's attitude in north-west-up frame * **skycontroller/command** (SkyControllerCommand, \(100~{}\mathrm{Hz}\)): command from SkyController * **skycontroller/rpy** (Vector3Stamped, \(20~{}\mathrm{Hz}\)): SkyController's attitude in north-west-up frame [\({}^{\circ}\)] * **storage/available** (UInt64): available storage space [B] * **time** (Time, 30 Hz): drone's local time ### _Services_ <service name> (<service type>): <service description> * **camera/photo/stop** (Photo): stop photo capture * **camera/photo/take** (Photo): take a photo * **camera/recording/start** (Recording): start video recording * **camera/recording/stop** (Recording): stop video recording * **camera/reset** (Trigger): reset zoom level * **drone/arm** (SetBool): {true: arm the drone; false: disarm the drone} * **drone/calibrate** (Trigger): start drone's magnetometer calibration process * **drone/emergency** (Trigger): cut out the motors * **drone/halt** (Trigger): halt and start hovering * **drone/land** (Trigger): take-off the drone * **drone/reboot** (Trigger): reboot the drone * **drone/rth** (Trigger): return home * **drone/takeoff** (Trigger): land the drone * **flightplan/pause** (Trigger): pause the flight plan * **flightplan/start** (FlightPlan): start the flight plan based on the Mavlink file existing on the drone * **flightplan/stop** (Trigger): stop the flight plan * **flightplan/upload** (FlightPlan): upload the Mavlink file to the drone * **gimbal/calibrate** (Trigger): start gimbal calibration * **gimbal/reset** (Trigger): reset the reference orientation of the gimbal * **home/navigate** (SetBool): {true: start return home; false: stop return home} trigger navigate home * **home/set** (Location): set the custom home location * **skycontroller/offboard** (SetBool): {true: switch to offboard control; false: switch to manual control} change control mode * **storage/download** (SetBool): {true: delete media after download; false: otherwise} download media from the drone * **storage/format** (Trigger): format removable storage ### _Parameters_ <**parameter name**> (<parameter type>) := <default value> := {<set of values>} / [<range of values>]: <parameter description> [<measurement units>] * **camera/autorecord** (bool) := false := {true: enabled; false: disabled}: auto record at take-off * **camera/ev_compensation** (int) := 9 \(\in\) {0: -3.00; 3: -2.00; 6: -1.00; 9: 0.00; 12: 1.00; 15: 2.00; 18: 3.00}: camera exposure compensation [EV] * **camera/hdr** (bool) := true \(\in\) {true: enabled; false: disabled}: high dynamic range (HDR) mode * **camera/max_zoom_speed** (float) := 10.0 \(\in\) [0.1, 10.0]: maximum zoom speed [tan(\({}^{\circ}\)) /s] * **camera/mode** (int) := 0 \(\in\) {0: camera in recording mode; 1: camera in photo mode}: camera mode * **camera/relative** (bool) := false \(\in\) {true: commands relative to the camera pitch; false: 
otherwise} * **camera/rendering** (int) := 0 \(\in\) {0: visible; 1: thermal; 2: blended}: thermal image rendering mode (1 and 2 supported only by ANAFI Thermal and ANAFI USA) * **camera/streaming** (int) := 0 \(\in\) {0: minimize latency with average reliability (best for piloting); 1: maximize reliability with an average latency; 2: maximize reliability using a frame-rate decimation}: streaming mode * bright colors, warm shade, high contrast; 3: pastel - soft colors, cold shade, low contrast}: images style * **drone/banked_turn** (bool) := true \(\in\) {true: enabled; false: disabled}: banked turn * **drone/max_altitude** (float) := 2.0 \(\in\) [0.5, 4000.0]: maximum altitude [m] * **drone/max_distance** (float) := 10.0 \(\in\) [10.0, 4000.0]: maximum distance [m] * **drone/max_horizontal_speed** (float) := 1.0 \(\in\) [0.1, 15.0]: maximum horizontal speed [m/s] * **drone/max_pitch_roll** (float) := 10.0 \(\in\) [1.0, 40.0]: maximum pitch and roll angle [\({}^{\circ}\)] * **drone/max_pitch_roll_rate** (float) := 200.0 \(\in\) [40.0, 300.0]: maximum pitch and roll rotation speed [\({}^{\circ}\)/s] * **drone/max_vertical_speed** (float) := 1.0 \(\in\) [0.1, 4.0]: maximum vertical speed [m/s] * **drone/max_yaw_rate** (float) := 180.0 \(\in\) [3.0, 200.0]: maximum yaw rotation speed [\({}^{\circ}\)/s] * **drone/model** (string) := \(\in\) {'4k', 'thermal', 'usa', 'ai', 'unknown'}: drone's model * **gimbal/max_speed** (float) := 180.0 \(\in\) [1.0, 180.0]: maximum gimbal speed [\({}^{\circ}\)/s] * **home/autotrigger** (bool) := true \(\in\) {true: enabled; false: disabled}: auto trigger return-to-home * **home/ending_behavior** (int) := 1 \(\in\) {0: land; 1: hover}: return-to-home ending behavior * **home/min_altitude** (float) := 20.0 \(\in\) [20.0, 100.0]: return-to-home minimum altitude [m] * **home/precise** (bool) := true \(\in\) {true: enabled; false: disabled}: precise return-to-home * **home/type** (int) := 4 \(\in\) {1: take-off location; 3: user-set custom location; 4: pilot location}: home type for return-to-home * **storage/download_folder** (string) := " \(\sim\)/Pictures/Anaf": path to the download folder ### _Custom messages_ * uint8 **mode** \(\in\) {0: level, 1: velocity}: control mode - float32 **zoom**: zoom command [x] / [x/s] * uint8 **mode** \(\in\) {0: position; 1: velocity}: control mode - uint8 **frame** \(\in\) {0: none; 1: relative; 2: absolute}: gimbal's frame of reference - float32 **roll**: roll command [\({}^{\circ}\)] / [\({}^{\circ}\)/s] - float32 **pitch**: pitch command [\({}^{\circ}\)] / [\({}^{\circ}\)/s] - float32 **yaw**: pitch command [\({}^{\circ}\)] / [\({}^{\circ}\)/s] * float32 **dx**: \(x\) displacement [m] - float32 **dy**: \(y\) displacement [m] - float32 **dz**: \(z\) displacement [m] - float32 **dyaw**: yaw displacement [\({}^{\circ}\)] * float64 **latitude**: latitude [\({}^{\circ}\)] - float64 **longitude**: longitude [\({}^{\circ}\)] - float64 **altitude**: altitude [m] - float32 **heading**: heading w.r.t. 
North [\({}^{\circ}\)] - uint8 **orientation_mode** \(\in\) {0: none; 1: to target; 2: heading start; 3: heading during}: orientation mode * Header **header**: header of the message * float32 **roll**: roll angle [\({}^{\circ}\)] * float32 **pitch**: pitch angle [\({}^{\circ}\)] * float32 **yaw**: yaw rate [\({}^{\circ}\)/s] * float32 **gaz**: vertical velocity [\({\rm m}/{\rm s}\)] * Header **header**: header of the message * int8 **x* * \(\in\) [-100, 100]: \(x\)-axis [\(\%\)] * int8 **y* * \(\in\) [-100, 100]: \(y\)-axis [\(\%\)] * int8 **z* * \(\in\) [-100, 100]: \(z\)-axis [\(\%\)] * int8 **yaw**\(\in\) [-100, 100]: yaw-axis [\(\%\)] * int8 **camera**\(\in\) [-100, 100]: camera-axis [\(\%\)] * int8 **zoom**\(\in\) [-100, 100]: zoom-axis [\(\%\)] * bool **return_home**\(\in\) {true: pressed; false: not pressed}: return-to-home (front top) button * bool **takeoff_land**\(\in\) {true: pressed}, false: not pressed}: take-off/land (front bottom) button * bool **reset_camera**\(\in\) {true: pressed; false: not pressed}: reset camera (back left) button * bool **reset_zoom**\(\in\) {true: pressed; false: not pressed}: reset zoom (back right) button ### _Custom services_ * string **file**: path to the flight plan file on local computer * **string **uid**: flight plan UID in drone's directory * float64 **latitude**: latitude [\({}^{\circ}\)] * float64 **longitude**: longitude [\({}^{\circ}\)] * float64 **altitude**: altitude [\({}^{\circ}\)] * float64 **latitude**: latitude to look at [\({}^{\circ}\)] * float64 **longitude**: longitude to look at [\({}^{\circ}\)] * float64 **altitude**: altitude to look at [\({}^{\circ}\)] * float64 **altitude**: altitude to look at [\({}^{\circ}\)] * bool **locked_gimbal**\(\in\) {true: gimbal is locked on the point of interest, false: gimbal is freely controllable}: gimbal is locked * **Photo**\(\rightarrow\) string **media_id**: media id * uint8 **mode**\(\in\) {0: single shot; 1: bracketing - burst of frames with a different exposure; 2: burst of frames; 3: time-lapse - frames at a regular time interval; 4: GPS-lapse - frames at a regular GPS position interval}: photo mode * uint8 **photo_format**\(\in\) {0: full resolution, not dewarped; 1: rectilinear projection, dewarped}: photo format * uint8 **file_format**\(\in\) {0: jpeg; 1: dng; 2: jpeg and dng}: file format * **Recording**\(\rightarrow\) string **media_id**: media id * uint8 **mode**\(\in\) {0: standard; 1: hyperlapse; 2: slow motion; 3: high-frame}: video recording mode ## Acknowledgment The author thanks the support team of Parrot for their assistance.
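As a usage illustration of the interface listed above, the following minimal ROS1 sketch takes off, publishes attitude commands on drone/command for two seconds and then lands. Only the topic, service and message field names are taken from the lists above; the topic namespace (here /anafi) and the Python import path of PilotingCommand are assumptions that depend on the launch configuration and on the package's message definitions, and must be adapted accordingly.

```python
#!/usr/bin/env python3
import rospy
from std_srvs.srv import Trigger
# Hypothetical import path; adjust to the actual message package of anafi_ros.
from anafi_ros_nodes.msg import PilotingCommand

rospy.init_node("anafi_demo")

# Depending on the model and configuration, arming via the drone/arm service
# (SetBool) may be required before take-off.
rospy.wait_for_service("/anafi/drone/takeoff")
rospy.ServiceProxy("/anafi/drone/takeoff", Trigger)()

# Piloting commands, cf. the control inputs in Eq. (12)
pub = rospy.Publisher("/anafi/drone/command", PilotingCommand, queue_size=1)
rate = rospy.Rate(40)
cmd = PilotingCommand()
cmd.pitch = 5.0  # pitch reference [deg]
cmd.gaz = 0.0    # vertical velocity reference [m/s]
start = rospy.Time.now()
while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(2.0):
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()

rospy.ServiceProxy("/anafi/drone/land", Trigger)()
```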
2301.08928
Non-isothermal multicomponent flows with mass diffusion and heat conduction
A type-I model of non-isothermal multicomponent systems of gases describing mass diffusive and heat conductive phenomena is presented. The derivation of the model and a convergence result among thermomechanical theories in the smooth regime are discussed. Furthermore, the global-in-time existence of weak solutions and the weak-strong uniqueness property are established for the corresponding system with zero barycentric velocity.
Stefanos Georgiadis, Ansgar Jüngel, Athanasios Tzavaras
2023-01-21T09:43:32Z
http://arxiv.org/abs/2301.08928v1
# Non-isothermal multicomponent flows with mass diffusion and heat conduction ###### Abstract. A type-I model of non-isothermal multicomponent systems of gases describing mass diffusive and heat conductive phenomena is presented. The derivation of the model and a convergence result among thermomechanical theories in the smooth regime are discussed. Furthermore, the global-in-time existence of weak solutions and the weak-strong uniqueness property are established for the corresponding system with zero barycentric velocity. Key words and phrases:Multicomponent systems, Non-isothermal model, Cross-Diffusion, Maxwell-Stefan-Fourier model, Existence of weak solutions, Weak-strong uniqueness 2020 Mathematics Subject Classification: 35Q35, 76M45, 76N10, 76R50, 76T30, 80A17 The first and second authors acknowledge partial support from the Austrian Science Fund (FWF), grants P33010 and F65. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, ERC Advanced Grant no. 101018153. The diffusional velocities \(u_{i}\) satisfy the Maxwell-Stefan system [1]: \[-\theta\sum_{j\neq i}b_{ij}\rho_{i}\rho_{j}(u_{i}-u_{j})=\epsilon d_ {i}\quad\text{under the constraint}\quad\sum_{i=1}^{n}\rho_{i}u_{i}=0,\] \[\text{where}\qquad\quad d_{i}=\frac{\rho_{i}}{\rho}(\rho b- \nabla p)+\rho_{i}\theta\nabla\frac{\mu_{i}}{\theta}-\theta(\rho_{i}e_{i}+p_{ i})\nabla\frac{1}{\theta}-\rho_{i}b_{i} \tag{0.4}\] are the generalized forces, \(\mu_{i}\) are the chemical potentials and \(b_{ij}\) are positive and symmetric coefficients that depend on \(\rho_{i},\rho_{j}\) and \(\theta\) and model the binary interactions between the \(i\)-th and the \(j\)-th components with a strength that is measured by \(\epsilon>0\). For an explanation of the origin of the parameter \(\epsilon\), we refer to section 0.2. The remaining thermodynamic quantities are computed from a set of constitutive relations that describe the material response. Throughout this article, a simple mixture of ideal gases is employed, i.e., the thermodynamics of the \(i\)-th component is described by a Helmholtz free energy density of the form: (IG) \[\rho_{i}\psi_{i}=\theta\frac{\rho_{i}}{m_{i}}\left(\ln\frac{\rho_{i}}{m_{i}}-1 \right)-c_{w}\rho_{i}\theta(\ln\theta-1),\] where \(m_{i}\) are the molar masses and \(c_{w}\) the heat capacity, which for simplicity is assumed to be the same for all components. Given \(\rho_{i}\psi_{i}\), one computes the chemical potentials as \(\mu_{i}=\frac{\partial(\rho_{i}\psi_{i})}{\partial\rho_{i}}\), the partial entropy densities as \(\rho_{i}\eta_{i}=-\frac{\partial(\rho_{i}\psi_{i})}{\partial\theta}\), the partial internal energy densities by \(\rho_{i}e_{i}=\rho_{i}\psi_{i}+\rho_{i}\eta_{i}\theta\) and the partial pressures by the Gibbs-Duhem relation \(p_{i}=-\rho_{i}\psi_{i}+\rho_{i}\mu_{i}\). Summing up the partial entropies, the partial internal energies and the partial pressures, we obtain the total entropy density, total internal energy density and total pressure, respectively. Under these relations, system (0.1)-(0.3) is closed. 
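For the reader's convenience, the constitutive quantities induced by (IG) can be written out explicitly. A direct computation of the above derivatives gives \[\mu_{i}=\frac{\theta}{m_{i}}\ln\frac{\rho_{i}}{m_{i}}-c_{w}\theta(\ln\theta-1),\qquad\rho_{i}\eta_{i}=-\frac{\rho_{i}}{m_{i}}\left(\ln\frac{\rho_{i}}{m_{i}}-1\right)+c_{w}\rho_{i}\ln\theta,\] and therefore \[\rho_{i}e_{i}=\rho_{i}\psi_{i}+\rho_{i}\eta_{i}\theta=c_{w}\rho_{i}\theta,\qquad p_{i}=-\rho_{i}\psi_{i}+\rho_{i}\mu_{i}=\frac{\rho_{i}}{m_{i}}\theta,\] so that each component obeys the ideal gas law and carries an internal energy proportional to the temperature.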
Given equations (0.1)-(0.3), an entropy identity can be derived [6]: \[\partial_{t}(\rho\eta)+\operatorname{div}(\rho\eta v)=\operatorname{div}\left( \frac{\kappa}{\theta}\nabla\theta-\frac{1}{\theta}\sum_{i=1}^{n}(\rho_{i}e_{i }+p_{i}-\rho_{i}\mu_{i})u_{i}\right)+\frac{\kappa}{\theta^{2}}|\nabla\theta|^{ 2}-\frac{1}{\theta}\sum_{i=1}^{n}u_{i}\cdot d_{i}\] The last two terms capture entropy production, which according to the second law of thermodynamics must be non-negative. The Clausius-Duhem inequality \[\partial_{t}(\rho\eta)+\operatorname{div}(\rho\eta v)\geq\operatorname{div} \left(\frac{\kappa}{\theta}\nabla\theta-\frac{1}{\theta}\sum_{i=1}^{n}(\rho_{i }e_{i}+p_{i}-\rho_{i}\mu_{i})u_{i}\right)\] is a manifestation of the previous statement and has a dual role: For smooth solutions it is used to restrict the form of the constitutive relations, while for weak solutions it is regarded as a criterion of thermodynamic admissibility. ### Derivation of the model In a multicomponent system of \(n\) species, one may assume that each of the components is described by its own triplet \((\rho_{i},v_{i},\theta_{i})\), i.e., each species has its own mass density, velocity and temperature. Such a model would contain ample information but the number and complexity of the equations makes it difficult to solve and analyze, and it is (at least) challenging to design experiments able to measure all these quantities. For this reason, simplified models are usually investigated, for example those in which each component is characterized by the triplet \((\rho_{i},v_{i},\theta)\), i.e., each component has its own mass density and velocity, but the model does not distinguish among different temperatures, and the only temperature involved is that one of the mixture, common for all species. An even further simplified model that does not distinguish among the different velocities and temperatures, with each constituent described by the triplet \((\rho_{i},v,\theta)\), where \(v\) is the barycentric velocity of the fluid common for all species. The above models are known as type-III, type-II and type-I, respectively. The advantage of type-I models is their simplicity; yet the description of diffusive phenomena would be impossible as different velocities are required for the transportation of mass. Thus, one would like to compensate between the simplicity of a type-I model and the information that a type-II model carries. This counterbalance can be reached if one derives a type-I model via a type-II one, which was originally achieved in [3] in a more general framework including viscous effects and chemical reactions. The starting point is the type-II model \[\partial_{t}\rho_{i}+\text{div}(\rho_{i}v)=-\text{div}(\rho_{i}u_ {i}),\quad i=1,\ldots,n, \tag{0.6}\] \[\partial_{t}(\rho_{i}v_{i})+\text{div}(\rho_{i}v_{i}\otimes v_{i})\] \[\quad=\rho_{i}b_{i}-\rho_{i}\nabla\mu_{i}-\frac{1}{\theta}(\rho_ {i}e_{i}+p_{i}-\rho_{i}\mu_{i})\nabla\theta-\theta\sum_{j\neq i}b_{ij}\rho_{i }\rho_{j}(v_{i}-v_{j}),\] (0.7) \[\partial_{t}\left(\rho e+\sum_{i=1}^{n}\frac{1}{2}\rho_{i}v_{i}^ {2}\right)+\text{div}\left(\left(\rho e+\sum_{i=1}^{n}\frac{1}{2}\rho_{i}v_{i }^{2}\right)v\right)\] \[\quad=\text{div}\left(\kappa\nabla\theta-\sum_{i=1}^{n}(\rho_{i} e_{i}+p_{i}+\frac{1}{2}\rho_{i}v_{i}^{2})u_{i}\right)-\text{div}(pv)+\rho b \cdot v+\rho r+\sum_{i=1}^{n}\rho_{i}b_{i}\cdot u_{i}. 
\tag{0.5}\] In the context of a type-II model, the barycentric velocity \(v\) is defined as \(v=\frac{1}{\rho}\sum_{i=1}^{n}\rho_{i}v_{i}\) and the diffusional velocities as \(u_{i}=v_{i}-v\). The essential difference between (0.1)-(0.3) and (0.5)-(0.7) concerns the momentum balances; namely, in the type-I model, only a single momentum balance is available. It serves as an approximation of the \(n\) partial momentum balances of the type-II model, in the sense that terms of order \(|u_{i}|^{2}\) are ignored (cf. [3]). The same derivation was obtained in the isothermal case, excluding viscous effects and chemical reactions in [9], using a Chapman-Enskog expansion, where the type-I model was seen as the high-friction limit of the corresponding type-II. To this extent, the last term in (0.6), which corresponds to the friction term due to the interaction between the components, was rescaled by a factor \(1/\epsilon\), where \(\epsilon>0\) is a relaxation parameter. By letting \(\epsilon\to 0\), the partial velocities \(v_{i}\) degenerate to a single velocity, which is the barycentric velocity \(v\). The emerging type-I model serves as an \(\epsilon^{2}\) approximation of the corresponding type-II model. The additional information takes the form of a constrained linear system for determining the diffusional velocities \(u_{i}\) and is the isothermal analogue of system (0.4). The resulting type-I system contains a single velocity \(v\), yet the diffusional velocities \(u_{i}\) carry all the information of the mass-diffusive effects. The derivation of (0.1)-(0.4) as a high-friction limit of (0.5)-(0.7) was done for the non-isothermal case in [6]. ### Dissipative structure As was mentioned above, the energy dissipation needs to be non-negative, as it is essential for the model to be compatible with the second law of thermodynamics. The dissipation \[\mathcal{D}=\frac{1}{\theta^{2}}\kappa|\nabla\theta|^{2}-\frac{1}{\theta}\sum _{i=1}^{n}u_{i}\cdot d_{i}\] contains two terms: the first term is the dissipation due to heat conduction, while the second one describes dissipation caused by friction among the components. Keeping in mind that this model should encapsulate three different theories (one which describes only mass-diffusive phenomena when the temperature is kept constant; one that describes only thermal effects when the diffusional velocities vanish; and one which combines both phenomena), it is expected that each dissipation term should be non-negative independent of the other. Indeed, due to the non-negativity of the thermal conductivity \(\kappa\), as indicated by Fourier's law of heat conduction, the dissipation due to heat conduction is non-negative and one needs to focus only on the second term. There are two ways to show the non-negativity of the frictional dissipation: the first one consists of substituting \(d_{i}\) by (0.4) and using the symmetry of the coefficients \(b_{ij}\) to deduce \[-\frac{1}{\theta}\sum_{i=1}^{n}u_{i}\cdot d_{i}=\frac{1}{2}\sum_{i=1}^{n} \sum_{j\neq i}^{n}b_{ij}\rho_{i}\rho_{j}|u_{i}-u_{j}|^{2}. \tag{0.8}\] The second one consists of the inversion of the constrained linear system (0.4) and the subsequent substitution of \(u_{i}\) in the diffusional dissipation. The latter is a delicate process since system (0.4) is singular and thus the existence of a unique solution is not guaranteed. 
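Concerning the first approach, the symmetry of the coefficients enters through the elementary identity \[\sum_{i=1}^{n}\sum_{j\neq i}c_{ij}\,u_{i}\cdot(u_{i}-u_{j})=\frac{1}{2}\sum_{i=1}^{n}\sum_{j\neq i}c_{ij}\,|u_{i}-u_{j}|^{2},\] valid for arbitrary vectors \(u_{i}\) and symmetric weights \(c_{ij}=c_{ji}\), which is obtained by relabelling \(i\leftrightarrow j\) and averaging the two resulting expressions; this is the algebraic step behind (0.8), applied with \(c_{ij}=b_{ij}\rho_{i}\rho_{j}\).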
As the generalized forces \(d_{i}\) satisfy \(\sum_{i=1}^{n}d_{i}=0\), the right-hand side of (0.4) belongs to the range of the matrix on the left-hand side, and thus the system has infinitely many solutions, from which the one satisfying the linear constraint in (0.4) is selected. This procedure was systematically carried out in [10] using the Bott-Duffin generalized inverse (also see [2]), and provides an explicit way of solving for the diffusional velocities \(u_{i}\), which in turn allows for estimates for the unknowns to be obtained. After the computation of \(u_{i}\), the diffusional dissipation reads: \[-\frac{1}{\theta}\sum_{i=1}^{n}u_{i}\cdot d_{i}=\sum_{i=1}^{n}\sum_{j=1}^{n}A_ {ij}\frac{d_{i}}{\theta\sqrt{\rho_{i}}}\cdot\frac{d_{j}}{\theta\sqrt{\rho_{j}}} \tag{0.9}\] where \(A_{ij}\) is the Bott-Duffin inverse of the matrix on the left-hand side of the linear system. The last expression is a quadratic form generated by the matrix \(A_{ij}\), which turns out to be positive semi-definite in a particular subspace related to the constraint (0.4), again verifying that the diffusional dissipation is non-negative. ### Convergence among theories Since the energy dissipation is non-negative, the model fits into the general framework of hyperbolic-parabolic systems, as studied in [4], for which the existence of a unique local-in-time strong solution has been proved in [7]. As was mentioned above, system (0.1)-(0.4) contains multiple theories encoded in the choice of the parameters \(\epsilon\) and \(\kappa\). For instance, the choice \(\epsilon=0\), \(\kappa\neq 0\) corresponds to a theory describing heat-conduction but no mass-diffusion. One would like to investigate whether the strong solution of the system with mass-diffusion and heat conduction converges to the strong solution of the system with heat conduction but no mass-diffusion, obtained by setting \(\epsilon=0\). The answer is positive and is summarized in the following theorem from [6], where \(\mathbb{T}^{3}\) is the three-dimensional torus: **Theorem 1**.: _Let \(\bar{U}^{\kappa}\) be a strong solution of system (0.1)-(0.4) neglecting mass-diffusive effects (i.e. with \(\epsilon=0\)) defined on a maximal interval of existence \(\mathbb{T}^{3}\times[0,T^{*})\) and let \(U^{\epsilon,\kappa}\) be a family of strong solutions of (0.1)-(0.4) defined on \(\mathbb{T}^{3}\times[0,T]\), for some \(T<T^{*}\), which emanate from smooth data \(\bar{U}_{0}^{\kappa}\), \(U_{0}^{\epsilon,\kappa}\), respectively, and satisfy the uniform bounds_ \[0<\delta\leq\rho_{j},\bar{\rho}_{j}\leq M,\quad 0<\delta\leq\theta,\bar{\theta}\leq M \tag{0.10}\] _for some \(\delta,M>0\). Moreover, assume that \(0\leq\kappa(\rho_{1},\ldots,\rho_{n},\theta)\leq M\). Then, \(U^{\epsilon,\kappa}\to\bar{U}^{\kappa}\) in the relative entropy sense, as \(\epsilon\to 0\)._ Similarly, one can simultaneously let \(\epsilon,\kappa\to 0\), in order to obtain convergence to the adiabatic theory (cf. [6]): **Theorem 2**.: _Let \(\bar{U}\) be a strong solution of (0.1)-(0.4) with \(\epsilon=\kappa=0\), defined on a maximal interval of existence \(\mathbb{T}^{3}\times[0,T^{*})\), and let \(U^{\epsilon,\kappa}\) be a family of strong solutions of (0.1)-(0.4) defined on \(\mathbb{T}^{3}\times[0,T]\), for \(T<T^{*}\), emanating from smooth data \(\bar{U}_{0},U_{0}^{\epsilon,\kappa}\) respectively. 
Under the hypotheses of Theorem 1, \(U^{\epsilon,\kappa}\to\bar{U}\) in the relative entropy sense, as \(\epsilon,\kappa\to 0\)._ The proof of Theorems 1 and 2 is based on the relative entropy [4, 6, 10], \[\mathcal{H}(U|\bar{U})=\int_{\Omega}\bigg{[}\frac{1}{2}\rho|v-\bar{v}|^{2}+ \sum_{i}\frac{1}{m_{i}}\bigg{(}\rho_{i}\log\frac{\rho_{i}}{\bar{\rho}_{i}}-( \rho_{i}-\bar{\rho}_{i})\bigg{)}-c_{w}\rho\bigg{(}\log\frac{\theta}{\bar{ \theta}}+(\theta-\bar{\theta})\bigg{)}\bigg{]}\mathrm{d}x,\] which can be seen as a measure of the distance between the solutions of the two systems, namely \((U^{\epsilon,\kappa},\bar{U}^{\kappa})\) for Theorem 1 and \((U^{\epsilon,\kappa},\bar{U})\) for Theorem 2. The relative entropy identity \[\partial_{t}\left(\bar{\theta}\mathcal{H}(U|\bar{U})\right)+ \mathrm{div}\Big{[}v\bar{\theta}\mathcal{H}(U|\bar{U})+(p-\bar{p})(v-\bar{v}) +\sum_{j}\rho_{j}u_{j}(\mu_{j}-\bar{\mu}_{j})\] \[-(\theta-\bar{\theta})\left(\frac{1}{\theta}\kappa\nabla\theta- \frac{1}{\bar{\theta}}\bar{\kappa}\nabla\bar{\theta}\right)+(\theta-\bar{ \theta})\sum_{j}\rho_{j}\eta_{j}u_{j}\Big{]}+\bar{\theta}\kappa\left|\frac{ \nabla\theta}{\theta}-\frac{\nabla\bar{\theta}}{\bar{\theta}}\right|^{2}\] \[-\frac{\bar{\theta}}{\theta}\sum_{j}u_{j}\cdot d_{j}=(\partial_{ t}\bar{\theta}+\bar{v}\cdot\nabla\bar{\theta})(-\rho\eta)(U|\bar{U})-p(U|\bar{U}) \mathrm{div}\bar{v}-(\eta-\bar{\eta})\rho(v-\bar{v})\cdot\nabla\bar{\theta}\] \[-\sum_{j}\nabla\bar{\mu}_{j}\cdot\left(\frac{\rho_{j}}{\rho}- \frac{\bar{\rho}_{j}}{\bar{\rho}}\right)\rho(v-\bar{v})-\rho(v-\bar{v})\nabla \bar{v}\cdot(v-\bar{v})-\sum_{j}\nabla\bar{\mu}_{j}\cdot\rho_{j}u_{j}\] \[-\left(\frac{\nabla\theta}{\theta}-\frac{\nabla\bar{\theta}}{ \bar{\theta}}\right)\cdot\frac{\nabla\bar{\theta}}{\bar{\theta}}(\bar{\theta }\kappa-\theta\bar{\kappa})-\frac{\theta}{\bar{\theta}}\nabla\bar{\theta}\cdot \sum_{j}\rho_{j}\eta_{j}u_{j}\] where \[p(U|\bar{U}) =p-\bar{p}-\sum_{j}\bar{p}_{\rho_{j}}(\rho_{j}-\bar{\rho}_{j})- \bar{p}_{\theta}(\theta-\bar{\theta})\] \[(-\rho\eta)(U|\bar{U}) =-\rho\eta+\bar{\rho}\bar{\eta}+\sum_{j}(\bar{\rho}\bar{\eta}) _{\rho_{j}}(\rho_{j}-\bar{\rho}_{j})+(\bar{\rho}\bar{\eta})_{\theta}(\theta- \bar{\theta})\] is then used to obtain a stability estimate for the difference of the two solutions, by controlling the first five terms on the right-hand side by the relative entropy of the two solutions and absorbing the last three terms by the dissipation on the left-hand side, so that the relative entropy of the two solutions is bounded by the relative entropy of the initial data for all times \(0<t<T\). Since the two solutions emanate from the same initial data, they coincide for \(t>0\); see [6] for the full proof. The assumption that the mass densities in (0.10) are bounded away from zero can be avoided, at the expense of assuming that the free energy densities \(\psi_{i}\) are in \(C^{3}(\overline{\mathcal{U}})\), where \(\mathcal{U}\) is a set in the positive cone \((\mathbb{R}^{+})^{n+1}\) with \(\overline{\mathcal{U}}\) compact, such that: \[\mathcal{U}=\{(\rho_{1},\ldots,\rho_{n},\theta)\ :\ 0<\rho_{j},\bar{\rho}_{j} \leq M,\ 0<\delta\leq\rho,\bar{\rho}\leq M,\ 0<\delta\leq\theta,\bar{\theta}\leq M\}.\] However, in the case of the ideal gas (IG), the presence of the logarithm requires the technical hypothesis that mass densities should avoid vacuum (see [6, Section 5] for details). ### Mass and thermal diffusion around zero mean flow In the case of zero mean flow, i.e. 
when the barycentric velocity of the mixture \(v\) vanishes, the system reads: \[\partial_{t}\rho_{i}+\mathrm{div}J_{i}=0, \tag{0.11}\] \[\nabla p=0, \tag{0.12}\] \[\partial_{t}(\rho e)+\mathrm{div}J_{e}=0, \tag{0.13}\] where \(u_{i}\) is the unique solution of (0.4) and the fluxes are given by \[J_{i}=\rho_{i}u_{i}\quad\text{and}\quad J_{e}=-\kappa\nabla\theta+\sum_{i=1}^{n}(\rho_{i}e_{i}+p_{i})u_{i}.\] Note that the choice \(v=0\) does not make the momentum equation disappear completely; in fact it gives the momentum constraint (0.12), which ensures that the system remains consistent with the assumption of zero mean flow, since a non-zero pressure gradient would generate motion, contradicting the choice \(v=0\). The above system falls into the realm of parabolic problems, in the sense that after a change of variables from the set of prime variables \((\rho_{1},\ldots,\rho_{n},\theta)\) to the set of entropy variables \((\mu_{1}/\theta,\ldots,\mu_{n}/\theta,-1/\theta)\), the matrix of phenomenological coefficients which relates fluxes and entropy variables, namely the matrix \(\mathbb{D}\) such that \[(J_{1},\ldots,J_{n},J_{e})^{\top}=\mathbb{D}\nabla\left(\frac{\mu_{1}}{\theta},\ldots,\frac{\mu_{n}}{\theta},-\frac{1}{\theta}\right)^{\top}\] is positive semi-definite (cf. [5, Section 2]). System (0.11)-(0.13) is here solved in a bounded domain \(\Omega\subset\mathbb{R}^{3}\) and is completed by the following boundary and initial conditions: \[J_{i}\cdot\nu=0,\ J_{e}\cdot\nu=\lambda(\theta-\theta_{0})\quad\text{on }\partial\Omega,\ t>0, \tag{0.14}\] \[\rho_{i}(x,0)=\rho_{i}^{0}(x),\ \theta(x,0)=\theta^{0}(x)\quad\text{in }\Omega, \tag{0.15}\] where \(\lambda\geq 0\), \(\theta_{0}>0\), and \(\nu\) is the exterior unit normal to the boundary \(\partial\Omega\). The boundary conditions state that mass cannot enter or exit \(\Omega\) through the boundary, while heat exchange is allowed, in a manner proportional (by \(\lambda\)) to the difference between the temperature of the mixture \(\theta\) and the background temperature \(\theta_{0}\). Even though the Maxwell-Stefan system has been studied extensively in the isothermal case, the only known works in the non-isothermal case concern the local-in-time existence and uniqueness of classical solutions in [11] and the global-in-time existence of weak solutions in [8]. The goal in [5] is to obtain global-in-time weak solutions for the above Maxwell-Stefan-Fourier system, which is compatible with thermodynamics and differs from the model presented in [8] in several points, as explained in [5, Section 1]. **Theorem 3**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a bounded domain with Lipschitz continuous boundary. Assume that the diffusion coefficients are bounded above and continuous in \((\rho_{1},\ldots,\rho_{n},\theta)\) and the thermal conductivity \(\kappa\) is continuous in \((\rho_{1},\ldots,\rho_{n},\theta)\) and satisfies the bounds_ \[c_{k}(1+\theta^{2})\leq\kappa(\rho_{1},\ldots,\rho_{n},\theta)\leq C_{k}(1+\theta^{2}) \tag{0.16}\] _for some positive constants \(c_{k},C_{k}\) and for all \(\theta>0\). If the initial data \(\rho_{i}^{0}\in L^{\infty}(\Omega)\) are such that the total mass is bounded away from vacuum and infinity and \(\theta^{0}\in L^{\infty}(\Omega)\), with \(\inf_{\Omega}\theta^{0}>0\), then for every \(T>0\) there exists a weak solution of (0.11)-(0.15) and (0.4), satisfying \(\rho_{i}>0\) and \(\theta>0\) a.e.
in \(\Omega_{T}:=\Omega\times(0,T)\) and having the regularity_ \[\rho_{i}\in L^{\infty}(\Omega_{T})\cap L^{2}(0,T;H^{1}(\Omega)) \cap H^{1}(0,T;(H^{2}(\Omega))^{*}),\] \[\theta\in L^{2}(0,T;H^{1}(\Omega))\cap W^{1,16/15}(0,T;(W^{2,16} (\Omega))^{*}),\ \log\theta\in L^{2}(0,T;H^{1}(\Omega)).\] The proof of the theorem is based on a suitable regularization and uniform estimates from the regularized entropy (or free energy) inequality. More precisely, we discretize the equations in time by the implicit Euler scheme to avoid any issues regarding the time regularity, transform to the so-called entropy variables, defined by the relative chemical potentials, and add an elliptic higher-order regularization. By construction, the entropy variables yield the positivity of the approximate densities and temperature, while the elliptic regularization gives sufficiently regular solutions, making this transformation rigorous. The approximate problem is solved by a fixed-point argument (Leray-Schauder theorem). Some uniform estimates are derived from a discrete, regularized version of the entropy inequality, acquired after choosing the entropy variables as test functions in the weak formulation of the problem and using the technical assumption (0.16) as well as the positive semi-definiteness of matrix \(A_{ij}\) from (0.9) in the subspace induced by the constraint (0.4). Then the de-regularization limit can be performed by using an Aubin-Lions compactness argument. For more details, see [5]. The uniqueness of the local-in-time strong solution was established in [1] for the isothermal and in [11] for the non-isothermal case, but for weak solutions the problem remains open. The most general result in the isothermal case is a weak-strong uniqueness property, i.e., whenever there is a strong solution, any weak solution will coincide with the strong one, and can be found in [10]. A similar result but for the non-isothermal system (0.11)-(0.15) was proved in [5], in the case when no heat exchange is allowed through the boundary, i.e. \(\lambda=0\). **Theorem 4**.: _Let \(U=(\rho_{1},\ldots,\rho_{n},\theta)\) be a weak solution of (0.11)-(0.15), with \(\lambda=0\) in (0.14), and let \(\bar{U}=(\bar{\rho}_{1},\ldots,\bar{\rho}_{n},\bar{\theta})\) be a strong solution. Assume that there exist \(\delta,M>0\) such that the weak solution satisfies_ \[0<\rho_{i}\leq M,\quad 0<\theta\leq M, \tag{0.17}\] _and the strong solution satisfies_ \[0<\delta\leq\bar{\rho}_{i}\leq M,\quad 0<\delta\leq\bar{\theta}\leq M \tag{0.18}\] _as well as_ \[\nabla\sqrt{\bar{\rho}_{i}}\in L^{\infty}_{\rm loc}(\Omega\times(0,T)),\quad \nabla\log\bar{\theta}\in L^{\infty}_{\rm loc}(\Omega\times(0,T)).\] _Moreover, let the thermal conductivity \(\kappa\) be Lipschitz continuous as a function of the temperature, satisfying (0.16). Then, if the initial data \(U^{0}=(\rho_{1}^{0},\ldots,\rho_{n}^{0},\theta^{0})\) and \(\bar{U}^{0}=(\bar{\rho}_{1}^{0},\ldots,\bar{\rho}_{n}^{0},\bar{\theta}^{0})\) coincide, the two solutions coincide too, i.e. \(U(x,t)=\bar{U}(x,t)\) in \(\Omega\), for all \(0\leq t<T\)._ A problematic aspect of Theorem 4 is the assumption that the mass densities of the strong solution need to be bounded away from vacuum. Not only is this a strong mathematical assumption, but also excludes the case of vanishing concentrations that might occur due to the interaction of the components, if one assumes chemical reactions. 
Keeping this in mind, one can restate the previous theorem, exchanging the assumption on strictly positive mass densities with an assumption on the finiteness of the diffusional velocities, which is a natural hypothesis, since mass is transported at finite speed. **Theorem 5**.: _Let \(U=(\rho_{1},\ldots,\rho_{n},\theta)\) be a weak solution to (0.11)-(0.15), with \(\lambda=0\) in (0.14), and let \(\bar{U}=(\bar{\rho}_{1},\ldots,\bar{\rho}_{n},\bar{\theta})\) be a strong solution. Assume that there exist \(\delta,M>0\) such that the weak solution satisfies (0.17) and the strong solution satisfies_ \[0\leq\bar{\rho}_{i}\leq M,\quad 0<\delta\leq\bar{\theta}\leq M \tag{0.19}\] _as well as_ \[\log\bar{\rho}_{i}\in H^{1}_{\rm loc}(\Omega\times(0,T)),\quad\bar{u}_{i}\in L^{\infty}_{\rm loc}(\Omega\times(0,T)),\quad\nabla\log\bar{\theta}\in L^{\infty}_{\rm loc}(\Omega\times(0,T)).\] _Moreover, let the thermal conductivity \(\kappa\) be Lipschitz continuous as a function of the temperature, satisfying (0.16). Then, if the initial data \(U^{0}=(\rho_{1}^{0},\ldots,\rho_{n}^{0},\theta^{0})\) and \(\bar{U}^{0}=(\bar{\rho}_{1}^{0},\ldots,\bar{\rho}_{n}^{0},\bar{\theta}^{0})\) coincide, the two solutions coincide too, i.e. \(U(x,t)=\bar{U}(x,t)\) in \(\Omega\), for all \(0\leq t<T\)._ The proof of Theorems 4 and 5 is similar to that of Theorems 1 and 2. In this case, the relative entropy reads \[\mathcal{H}(U|\bar{U})=\int_{\Omega}\bigg{(}\sum_{i}\frac{\rho_{i}}{m_{i}}\log\frac{\rho_{i}}{\bar{\rho}_{i}}-\sum_{i}\frac{\rho_{i}-\bar{\rho}_{i}}{m_{i}}-c_{w}\rho\log\frac{\theta}{\bar{\theta}}+c_{w}\rho(\theta-\bar{\theta})\bigg{)}\mathrm{d}x,\] since the barycentric velocity is assumed to be zero, and the relative entropy identity is used to obtain a stability estimate for the difference of a weak and a strong solution of the same system, namely (0.11)-(0.15); see [5]. The difference between the two versions of the theorem lies in the interpretation of the diffusional dissipation: Theorem 4 requires the inversion of system (0.4) and the subsequent elimination of \(u_{i}\) from the diffusional dissipation, resulting in (0.9) and requiring more assumptions on the mass densities, while in Theorem 5 one eliminates the generalized forces \(d_{i}\) by using (0.4), and the diffusional dissipation takes the form (0.8).
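To make the Bott-Duffin step referenced above concrete, here is a minimal numerical sketch: it builds the constrained inverse \(P(BP+(I-P))^{-1}\) for a symmetric matrix \(B\) and a constraint subspace, and checks that the induced quadratic form, in the spirit of (0.9), is non-negative on that subspace. The matrix, the constraint direction, and the dimension are random placeholders, not the actual friction matrix or constraint of system (0.4).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# random symmetric positive semi-definite stand-in for the matrix of (0.4)
M = rng.standard_normal((n, n))
B = M @ M.T

# constraint subspace L = {x : x . w = 0}; w is a placeholder direction
w = rng.random(n)
w /= np.linalg.norm(w)
P = np.eye(n) - np.outer(w, w)                 # orthogonal projector onto L

# Bott-Duffin inverse of B with respect to L
A = P @ np.linalg.inv(B @ P + (np.eye(n) - P))

# the quadratic form generated by A is non-negative on L (cf. (0.9))
for _ in range(1000):
    d = P @ rng.standard_normal(n)             # a "generalized force" lying in L
    assert d @ A @ d >= -1e-10
print("quadratic form is non-negative on the constraint subspace")
```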
2302.11929
A metric to compare the anatomy variation between image time series
Biological processes like growth, aging, and disease progression are generally studied with follow-up scans taken at different time points, i.e., with image time series (TS) based analysis. Comparison between TS representing a biological process of two individuals/populations is of interest. A metric to quantify the difference between TS is desirable for such a comparison. The two TS represent the evolution of two different subject/population average anatomies through two paths. A method to untangle and quantify the path and inter-subject anatomy(shape) difference between the TS is presented in this paper. The proposed metric is a generalized version of Fr\'echet distance designed to compare curves. The proposed method is evaluated with simulated and adult and fetal neuro templates. Results show that the metric is able to separate and quantify the path and shape differences between TS.
Alphin J Thottupattu, Jayanthi Sivaswamy
2023-02-23T11:18:04Z
http://arxiv.org/abs/2302.11929v1
# A metric to compare the anatomy variation between image time series ###### Abstract Biological processes like growth, aging, and disease progression are generally studied with follow-up scans taken at different time points, i.e., with image time series (TS) based analysis. Comparison between TS representing a biological process of two individuals/populations is of interest. A metric to quantify the difference between TS is desirable for such a comparison. The two TS represent the evolution of two different subject/population average anatomies through two paths. A method to untangle and quantify the path and inter-subject anatomy (shape) difference between the TS is presented in this paper. The proposed metric is a generalized version of Frechet distance designed to compare curves. The proposed method is evaluated with simulated and adult and fetal neuro templates. Results show that the metric is able to separate and quantify the path and shape differences between TS. Keywords: Image TS, Time-dependent variation. ## 1 Introduction Studying natural processes such as growth, disease progression, and other physiological processes often requires imaging at different time points, thus generating an image time series (TS). The images in such TS typically represent a deforming organ of an individual or population average anatomies derived to represent the general trend of a process. Modeling the TS as a continuously deforming image/_shape_ through a temporal _path_ [8, 16] helps to directly analyze the deformation happening in the anatomy with time [15]. However, it is a fact that anatomy and the biological process vary across individuals. When two TS are of individuals from the same population, it is presumed that there is anatomical (or _shape_) similarity, and for a population-level analysis with a group of TS, the focus is on understanding the _path_ difference directly or by mapping to a common space [11]. For a pair of TS, both the _shape_ and _path_ differences contribute significantly to the difference between them, and both have to be captured by a metric that defines the distance between the TS. It is more interesting to study these separately when comparing two population average TS. Let us consider comparing normal brain aging trends of two populations. Aging leads to brain anatomy deformations, which can differ across populations. Comparing the aging of two populations requires accounting for the basic shape differences between the two. This paper proposes a novel approach that quantifies shape and path variation separately to define the distance between two time series, reflecting the time-independent and time-dependent variations. This method can be useful in comparing and understanding brain aging differences across different populations. Image similarity metrics such as SSIM and MSE are popular for image-level comparisons. They are inappropriate for image TS because they fail to quantify spatial and temporal differences separately. The terminology 'path difference' is generally used to compare two 1D curves, where the initial points in both curves are assumed to be the same. Hausdorff distance [2] is a common metric to measure the distance between two curves, but it does not consider the course of the curves. Dynamic Time Warping [4] aids in comparing two trajectories of different speeds, but it is also a discrete measure. Frechet distance [9] is the standard for comparing two continuous curves or paths.
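For reference, here is a minimal sketch of the discrete Frechet distance between two sampled curves, a sampled approximation of the continuous FD discussed here; it is purely illustrative and is not the TS metric proposed below.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between sampled curves P (n, d) and Q (m, d),
    computed with the standard dynamic program."""
    n, m = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    F = np.full((n, m), np.inf)
    F[0, 0] = D[0, 0]
    for i in range(1, n):
        F[i, 0] = max(F[i - 1, 0], D[i, 0])
    for j in range(1, m):
        F[0, j] = max(F[0, j - 1], D[0, j])
    for i in range(1, n):
        for j in range(1, m):
            F[i, j] = max(min(F[i - 1, j], F[i - 1, j - 1], F[i, j - 1]), D[i, j])
    return F[-1, -1]

# two toy 2D trajectories sampled at 50 time points
t = np.linspace(0, 1, 50)
P = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
Q = np.stack([t, 0.8 * np.sin(2 * np.pi * t + 0.3)], axis=1)
print(discrete_frechet(P, Q))
```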
The idea of Frechet distance is defined in [9] as the minimum leash length with which a person and a dog on the leash can both walk forward along their respective curves, from start to end. In image TS representing biological processes, the spatial context is as important as the temporal variation, but Frechet distance (FD) [1] is not designed to handle these directly. In general, none of the existing methods for curve comparison are directly applicable to compare and quantify 3D image TS differences. 3D-image-based qualitative and quantitative comparison of biological processes in the existing literature is limited to characteristics like cerebral volume and dimensions of the brain, which are derived from each TS [21, 3, 5, 12]; this translates to image-level comparison for 3D image TS. Such analysis, however, will not help to separately compare the _shape_ and _path_ variations. Measures like changes in volume and structure/organ dimensions have been reported but cannot capture the non-rigid anatomical changes. For instance, growth trajectories have been compared in [11] to study the difference between the human brain in healthy/normal individuals and those with Alzheimer's disease, by first mapping the trajectories of two groups of individual follow-up scans to a common space and then performing a volume change analysis. Such methods are useful for case-specific group analysis, but a metric that defines the distance between two TS will facilitate a more general analysis framework. ### Our Contribution The main contribution of this paper is a metric for comparing a pair of TS, which considers both _shape_ and _path_ variations. To our knowledge, this has not been addressed in the context of group analysis. We propose a metric to quantify _path_ variation inspired by the idea of FD for curves because FD considers the course of the path, unlike other measures, and defines a single metric to quantify the _path_ variation. We also propose a metric to quantify _shape_ variation based on the deformation-based distance between the _shapes_ of the individual TS. The proposed _shape_ and _path_ distance metrics are defined for every point in 3D space. The sum of the average _shape_ and _path_ distances quantifies the difference between the two TS. ## 2 Method TS data corresponds to either an individual anatomy variation or an average population anatomy variation with time. It should be pointed out that affine-invariant, intensity-normalized image TS are considered in this paper, as these factors are separately quantifiable. We also assume that the two TS to be compared have been acquired from two different subjects (and denoted as \(TS_{1}\) and \(TS_{2}\)) for approximately the same time range, with aligned time indices. For example, \(TS_{1}\) and \(TS_{2}\) correspond to two subjects' scans in an age range of 20-30 years, where one subject is scanned only in even years while the other is scanned in odd years. Hence, the lack of temporal correspondence and a time range mismatch can make comparing two TS difficult, as does the discrete nature of the TS. We propose to overcome this by deriving a continuous model for each TS as a first step. These models of TS are then compared in a common time interval, and _shape_ and _path_ distances are defined. ### Deriving continuous representation of TS In computational anatomy, a natural process in the human body is typically modeled as a deformation of an underlying anatomy [15].
Hence, when a sequence of images of a single individual is represented as a TS, it is modeled either as an anatomy deforming through a path [16, 18] or with a kernel-based regression [6]. The latter does not model the path as a function of time; instead, any image point is a function of time. Since we aim to separately quantify the difference in TS in terms of _shape_ and _path_ variations, we choose a path-based modeling approach. When two TS are compared, the _shape_ and _path_ distances are expected to separately quantify the time-independent and time-dependent variations, respectively. The _shape_ in each TS continuous model is the anatomy/image that can be mapped to any time point in the TS through a _path_ defined on it. If the _shape_ in each TS represents the same time point, then the distance between the _shapes_ represents the time-independent variation between the TS. To illustrate the proposed method, the recent path-based modeling approach from [18] is adopted to derive the continuous model. To derive a continuous model of a TS in a time interval \([t_{0},t_{n}]\), any point in the TS can be selected as the _shape_, and the _path_ is defined for the selected _shape_. A point somewhere in the middle of \([t_{0},t_{n}]\) is chosen for convenience, as a temporal range mismatch can be compensated easily with such modeling. The paths are modeled as diffeomorphic deformations \(\phi=\exp(v\cdot\gamma(t))\), where \(v\) is a vector field that defines the direction in which each spatial position has to move/deform, and \(\gamma(t)\) controls the rate of change of the deformation with time. As the _shape_ (\(S\)) lies towards the middle, say at \(t=m\), two paths are defined in \([t_{0},m]\) and \([m,t_{n}]\) to cover the whole time range. The TS is modeled as \(\left\{\left\{\phi_{1}(t)\circ S\right\}_{t_{0}}^{m},\left\{\phi_{2}(t)\circ S\right\}_{m}^{t_{n}}\right\}\). The deformations \(\phi_{1}(t)\) and \(\phi_{2}(t)\) correspond to the paths defined in the time intervals \([t_{0},m]\) and \([m,t_{n}]\), respectively. Let us consider two TS, \(TS_{1}\) and \(TS_{2}\), given in the time intervals \([t_{i},t_{m}]\) and \([t_{j},t_{n}]\), respectively. Let the selected _shapes_ be \(S_{I}\) and \(S_{J}\), occurring at \(m_{I}\) and \(m_{J}\). Similarly, \(\phi_{*}^{I}\) and \(\phi_{*}^{J}\) correspond to the _paths_ of the two models derived on \(S_{I}\) and \(S_{J}\), respectively, where \({}_{*}\) corresponds to the path index (1 or 2) throughout the paper. Hence, the continuous representations of \(TS_{1}\) and \(TS_{2}\) are \(I(t)\) and \(J(t)\), as given in Equations 1-2. \[I(t)=\begin{cases}\phi_{1}^{I}(t)\circ S_{I}\text{ for }t\geq m_{I},\\ \phi_{2}^{I}(t)\circ S_{I}\text{ for }t\leq m_{I}.\end{cases} \tag{1}\] \[J(t)=\begin{cases}\phi_{1}^{J}(t)\circ S_{J}\text{ for }t\geq m_{J},\\ \phi_{2}^{J}(t)\circ S_{J}\text{ for }t\leq m_{J}.\end{cases} \tag{2}\] ### Temporal alignment of the continuous representations From Equations 1-2, it can be noted that the shapes \(S_{I}\) and \(S_{J}\) correspond to the time points \(m_{I}\) and \(m_{J}\), respectively. Before any comparative assessment of the TS, the models need to be temporally aligned by moving \(S_{I}\) and \(S_{J}\) to the same time point. The _shapes_ obtained after alignment are denoted as \(\tilde{S}_{I}\) and \(\tilde{S}_{J}\), defined at \((m_{I}+m_{J})/2\). If \(m_{I}<(m_{I}+m_{J})/2\) and \(m_{J}>(m_{I}+m_{J})/2\), then \(\tilde{S}_{I}\) and \(\tilde{S}_{J}\) are given by Equations 3 and 4, respectively.
\[\tilde{S}_{I}=S_{I}\circ\phi_{1}^{I}\left(\frac{m_{J}-m_{I}}{2}\right) \tag{3}\] \[\tilde{S}_{J}=S_{J}\circ\phi_{2}^{J}\left(\frac{m_{I}-m_{J}}{2}\right) \tag{4}\] The _shape_ distance computation is formulated such that the _shapes_ being compared are at the same time point. To keep the _shapes_ at \((m_{I}+m_{J})/2\), as shown in Figure 1.B, the deformations also must be reformulated about the time point \((m_{I}+m_{J})/2\). In Equation 3, \(\tilde{S}_{I}\) is already a deformed version of \(S_{I}\) with a small deformation \(\phi_{1}^{I}\left(\frac{m_{J}-m_{I}}{2}\right)\), and the same deformation has to be removed from the \(\phi_{1}\) path of the model with \(\tilde{S}_{I}\) as the _shape_. Similarly, the other deformations have to be updated, and the updated deformations are given as follows. \[\tilde{\phi}_{1}^{I}(t)=\phi_{1}^{I}(t)\circ-\phi_{1}^{I}\left(\frac{m_{J}-m_{I}}{2}\right) \tag{5}\] \[\tilde{\phi}_{2}^{I}(t)=\phi_{2}^{I}(t)\circ-\phi_{1}^{I}\left(\frac{m_{J}-m_{I}}{2}\right) \tag{6}\] \[\tilde{\phi}_{1}^{J}(t)=\phi_{1}^{J}(t)\circ-\phi_{1}^{J}\left(\frac{m_{I}-m_{J}}{2}\right) \tag{7}\] \[\tilde{\phi}_{2}^{J}(t)=\phi_{2}^{J}(t)\circ-\phi_{1}^{J}\left(\frac{m_{I}-m_{J}}{2}\right) \tag{8}\] The continuous models are extrapolated/truncated to the same time interval before comparing the two. Figure 1 shows the steps to be followed to derive the temporally aligned, range-compensated continuous models from the TS, which are used to compute the distance between the pair of TS. ### Computing the _shape_ and _path_ distance _Shape_ distance: \(\tilde{S}_{I}\) and \(\tilde{S}_{J}\) in the two TS continuous models represent the _shapes_ corresponding to \(TS_{1}\) and \(TS_{2}\), respectively, at the same time point. Hence, the deformation between \(\tilde{S}_{I}\) and \(\tilde{S}_{J}\) captures the _shape_ variation \(\phi_{S}\) between the two TS. This deformation is modeled as \(\phi_{S}=\exp(\mathbf{V}_{S})\), where \(\mathbf{V}_{S}\) represents a stationary velocity field. Then the norm of the vector field \(\mathbf{V}_{S}\) can be directly used to quantify the deformation, as given in [19]. The _shape_ distance (\(d_{s}\)) between \(TS_{1}\) and \(TS_{2}\) is hence defined as \[d_{s}=\|\mathbf{V}_{S}\| \tag{9}\] _Path_ distance: Our aim is to enable the comparison of a pair of TS on a common interval \([t_{a},t_{b}]\), which can be flexibly selected. Extrapolation or truncation may be required, depending on the selected time interval. The _path_ distance is defined as the maximum distance between the two paths over the chosen interval. A schematic for computing the _path_ distance is shown in Figure 2. Figure 1: A) Temporally aligned \(TS_{1},TS_{2}\), B) Continuous image paths \(I(t),J(t)\), and C) Temporally aligned continuous models with extrapolated deformation (red curve). The distance between the paths has to be computed after separating out the _shape_ distance between the paths. When the two TS are modeled with the same _shape_, the only distance between the TS is the _path_ distance \((d_{p})\). Hence, we map the paths to either \(\tilde{S}_{I}\) or \(\tilde{S}_{J}\) via parallel transport to force \(d_{s}=0\). Here we consider \(\tilde{S}_{J}\) as the reference to define the _shape_ and _path_ distance.
Hence, the paths \(\tilde{\phi}_{1}^{I}(t)\) and \(\tilde{\phi}_{2}^{I}(t)\) are transferred to \(\tilde{S}_{J}\) via parallel transport [13] through \(\mathbf{V}_{S}\) to get \(\overline{\tilde{\phi}_{1}^{I}(t)}\) and \(\overline{\tilde{\phi}_{2}^{I}(t)}\). The \(J(t)\) paths \(\tilde{\phi}_{1}^{J}(t)\) and \(\tilde{\phi}_{2}^{J}(t)\) and the transferred paths are defined on \(\tilde{S}_{J}\). Let the path difference be denoted as \(\{d_{1}(t),d_{2}(t)\}\), where \(d_{1}(t)\) corresponds to the distance in \([t_{a},m]\) and \(d_{2}(t)\) corresponds to the distance defined in \((m,t_{b}]\). Since the path is modeled with vector fields \(v\cdot\gamma(t)\), we use the norm of the difference between the vector fields in the \(I(t)\) and \(J(t)\) models to compute the _path_ distance \(d_{*}(t)\) as follows. \[d_{*}(t)=\left\|v_{*}^{I}\cdot\gamma_{I}(t)-v_{*}^{J}\cdot\gamma_{J}(t)\right\| \tag{10}\] Finally, the net difference between the paths has to be derived. We follow the Frechet distance formulation for this purpose. By definition, Frechet distance is the shortest leash with which a person and a dog on the leash can complete a journey. Frechet distance helps to quantify the similarity by considering the course of the paths. Inspired by the Frechet distance formulation, the maximum distance \(\max d_{*}(t)\) over \(t\in[t_{a},t_{b}]\) at each spatial position is computed first. The distance \(\max d_{*}(t)\) is not defined on \(\tilde{S}_{J}\); hence it is transported to \(\tilde{S}_{J}\) via \(\overline{\tilde{\phi}_{*}^{I}(t)}\) to get \(\overline{\max\left\{d_{*}(t)\right\}_{t_{a}}^{t_{b}}}\). In Figure 2, \(d_{p}\) corresponds to the _path_ distance, and it is given as \[d_{p}=\overline{\max\left\{d_{*}(t)\right\}_{t_{a}}^{t_{b}}} \tag{11}\] The total distance between the two TS (\(D\)) is defined in Equation 12 as a combination of the _shape_ and _path_ variations: the sum of \(d_{s}\) and \(d_{p}\) gives the distance between the TS. Both the _shape_ and _path_ distances satisfy the distance properties; hence, \(D\) also defines a distance that satisfies all distance properties. \[D=d_{s}+d_{p}=\left\|\mathbf{V}_{S}\right\|+\left\|\overline{\max\left\{d_{*}(t)\right\}_{t_{a}}^{t_{b}}}\right\| \tag{12}\] ## 3 Results A variety of experiments were done to validate the proposed method. We believe the proposed method is the first attempt towards separating the _shape_ and _path_ distance between two TS; hence, benchmarking was not possible. ### Implementation Details If the TS under consideration is longitudinal data, then the _shape_ \(S\) can correspond to any point in the TS, as the same subject is scanned at different time points. If the TS is a population average image (i.e., a template) at each time point, then \(S\) is found by averaging all samples in the TS. Now \(S\) represents the anatomy that normalizes all inter-subject and temporal variation. In both cases, \(S\) should not lie at the end of the time range. This constraint in modeling helps to perform time range matching of the two TS. It also demands a two-piece path modeling, which is helpful in handling a complex path as a diffeomorphic deformation. ### Simulated data-based experiment In order to understand how well the proposed method separates and quantifies time-dependent and time-independent distances, a simulation experiment was done with three sample TS pairs, which were constructed by deforming a Shepp-Logan phantom with simulated path and shape deformations, as shown in the first column of Figure 3.
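As a side illustration of how such a synthetic TS could be generated, here is a minimal sketch that warps the Shepp-Logan phantom with a smooth displacement field for a few values of a time-like parameter. The libraries, the displacement field, and the simple backward-warp resampling (rather than the diffeomorphic \(\exp(v\cdot\gamma(t))\) model used above) are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.data import shepp_logan_phantom

img = shepp_logan_phantom()                      # 400 x 400 float image
yy, xx = np.meshgrid(np.arange(img.shape[0]),
                     np.arange(img.shape[1]), indexing="ij")

# a smooth, hypothetical "path" displacement field (in pixels)
vy = 4.0 * np.sin(2 * np.pi * xx / img.shape[1])
vx = 4.0 * np.cos(2 * np.pi * yy / img.shape[0])

def deform(image, gamma):
    """Resample the image along the field scaled by gamma (gamma = 0 is identity)."""
    coords = np.stack([yy + gamma * vy, xx + gamma * vx])
    return map_coordinates(image, coords, order=1, mode="nearest")

# one synthetic TS: the phantom warped at five time points
ts = [deform(img, g) for g in np.linspace(0.0, 1.0, 5)]
```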
The first set of TS (rows 1-2) was constructed such that they differed only by shape, while the second set of TS (rows 3-4) was constructed to differ only by the path, and finally, the last set of TS (rows 5-6) was constructed to differ in terms of both shape and path. To generate the second set (rows 3-4), two mutually inverse paths were constructed using the path deformation on the phantom image. The third set (rows 5-6) was constructed with different source images; one was the original phantom image, and the other was the shape-deformed phantom image. Inverse paths were applied to these images to construct the TS pair. The last column in Figure 3 displays a heat map for each set of the computed shape (\(d_{s}\)) and path (\(d_{p}\)) distance values. For the first set, the _path_ variation is negligible and the _shape_ variation is maximum, and vice versa for the second set. For the last pair of TS, both _shape_ and _path_ variations are observed. This observation is in line with the expected results. Hence, this experiment validates the proposed method's ability to separate the time-dependent and time-independent distances between a pair of TS. ### Aging data-based experiment The second experiment is with real data, specifically 3D brain templates at different age points, which arise in brain aging studies. The aging process and brain anatomy are expected to vary across two different populations [5]. Hence, inter-population distances are expected to be larger than intra-population distances; this hypothesis is evaluated with the proposed metric. Datasets drawn from the Caucasian (Neurodev [7]) and Japanese (AOBA [17]) populations were used for the inter-population study in this experiment. The age range of subjects was 22-87 years for the former and 25-75 years for the latter. The intra-population TS pair was constructed from the Neurodev data by sampling the data at odd (\(P1_{a}\)) and even (\(P1_{b}\)) time indices. This was possible because Neurodev templates were defined every 5 years, whereas AOBA templates are only available for every decade. Hence, intra-population analysis was not done with AOBA. _Shape_ and _path_ distances were computed for inter- and intra-population TS pairs and are presented in a table in Figure 4. Notably, both shape and path distances are higher for inter-population (column 2) than intra-population TS pairs. The time interval considered for analysis was 30-70 years, as the time ranges differ for the two TS. Sample time points of each TS considered in this analysis are shown in Figure 5. Figure 3: Shepp-Logan-based validation of the proposed method to quantify the path distance. Column 1: phantom and the deformation fields for shape and path; Column 2: three sets of TS used in the validation; Column 4: _shape_ and _path_ distance maps. The 3D visualization of the _shape_ and _path_ distances is shown in Figure 4 in three canonical planes for a better understanding of the spatial distribution of the distance. It can be observed that both the shape and path distances are smaller within a population relative to across populations; this inter-population difference appears to be primarily due to the _path_ difference rather than the _shape_ difference, which is consistent with the results listed in the table. A third experiment was done with fetal brain datasets, as the developmental changes are large, unlike in the case of the adult brain considered in the previous experiment. Caucasian and Chinese populations were considered for this purpose.
Scans of subjects aged 23-35 weeks were used for both populations. The fetal templates for the Caucasian population were from the CRL database [10], while those of the Chinese population were from the FBA database [20]. As both TS were well sampled, the intra-population TS were generated in the same manner as in the previous experiment. A few sample points of the two TS are shown in Figure 5. \(P1_{a}\) and \(P1_{b}\) were constructed from the Caucasian (\(P1\)) population, and \(P2_{a}\) and \(P2_{b}\) from the Chinese (\(P2\)) population. Two intra-population cases (\(P1_{a}\) vs \(P1_{b}\), and \(P2_{a}\) vs \(P2_{b}\)) and one inter-population (\(P1\) vs \(P2\)) case were compared. The _shape_ and _path_ distances are plotted separately for each pair of TS in Figure 5. Figure 5: Validation results of templates of the fetal brain from two populations. The samples of population average templates for the Caucasian and Chinese populations are shown in rows 1-2. The average distance plots for the intra- (\(P1_{a}\) vs. \(P1_{b}\)) and inter-population (\(P1\) vs. \(P2\)) study are shown at the bottom. It is notable that the _shape_ and _path_ distances are almost the same for the intra-population pairs, whereas the variation is much higher for the inter-population pair. Once again, the _path_ distance is higher and is the major contributor to the total distance between the two population TS. Finally, the proposed method was also evaluated on a pair of longitudinal TS acquired from two normal individuals at fairly short intervals of 78-83 years and 78-82 years from the same population [14]. The cross-sectional _shape_ variation shown in the colour maps in Figure 6 appears to be more than the _path_ variation for the two subjects. This is the opposite of the result for TS covering a wider age range. This trend is logical, as the aging effect over a short time span is likely to be much less across subjects from the same population than the morphological variation. ## 4 Discussion and Conclusion A metric that enables disentangling of the _shape_ and _path_ variation and helps quantify the difference between a pair of TS is proposed in this paper. The proposed metric is an affine-invariant and time interval mismatch-compensated metric. The idea of _shape_ variation in the proposed metric is more relevant when the TS under consideration are from cohorts from different populations. As the course of the path is considered in the quantification of the _path_ variation, the intra-population TS path variation can be analyzed. For example, one can study the _path_ difference in the growth pattern among the elderly (50-80 years) versus the young (20-50 years) within a population. This was done in our second experiment, and the distance for Caucasians was found to be 1.8, while it is 1.4 for Japanese. This suggests that the temporal variations are faster in Caucasians than in Japanese after adulthood. In contrast, the difference across populations is much lower for the fetal brain. Such analysis opens up the opportunity to better understand the reason behind such trends from young to elderly and across populations. The main goal of longitudinal data-based group analysis is to understand the general trend.
Our work enables approaching the problem via a joint statistical analysis of 4D data (TS of 3D images). A metric to quantify the distance between a pair of TS can also help derive an average TS model from a set of TS, as done for 1D-3D objects. There are some limitations with regard to the proposed metric. It cannot handle large temporal mismatches. Further, the metric accuracy is totally dependent on the accuracy of the computed deformations. This is particularly relevant to inter-subject TS analysis, where registration is generally error-prone, especially for a complex structure such as the brain.
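To summarize the metric computationally, the following is a minimal sketch of Equations 9-12 on toy arrays. All velocity fields and rate profiles below are random placeholders, and the registration and parallel-transport steps of Sections 2.2-2.3 are omitted, so this only illustrates the final distance computation.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8, 8)                                  # a tiny 3D grid
V_S = rng.standard_normal((3, *shape)) * 0.1       # shape deformation field (Eq. 9)
v_I = rng.standard_normal((3, *shape)) * 0.1       # path field of I(t)
v_J = rng.standard_normal((3, *shape)) * 0.1       # path field of J(t)
gamma_I = lambda t: t                              # rate profile for I(t)
gamma_J = lambda t: t ** 1.2                       # rate profile for J(t)

ts_grid = np.linspace(0.0, 1.0, 21)                # common interval [t_a, t_b]

# Eq. 9: voxel-wise shape distance d_s = ||V_S||
d_s = np.linalg.norm(V_S, axis=0)

# Eqs. 10-11: voxel-wise path distance, max over t of ||v_I*gamma_I(t) - v_J*gamma_J(t)||
d_t = np.stack([np.linalg.norm(v_I * gamma_I(t) - v_J * gamma_J(t), axis=0)
                for t in ts_grid])
d_p = d_t.max(axis=0)

# Eq. 12: total distance D = d_s + d_p, reported here as spatial averages
D = d_s + d_p
print(d_s.mean(), d_p.mean(), D.mean())
```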
2310.17191
How do Language Models Bind Entities in Context?
To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the binding ID mechanism: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching binding ID vectors to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs.
Jiahai Feng, Jacob Steinhardt
2023-10-26T07:10:31Z
http://arxiv.org/abs/2310.17191v2
# How do Language Models Bind Entities in Context? ###### Abstract To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the _binding ID mechanism_: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching _binding ID vectors_ to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs. ## 1 Introduction Modern language models (LMs) excel at many reasoning benchmarks, suggesting that they can perform general purpose reasoning across many domains. However, the mechanisms that underlie LM reasoning remain largely unknown (Rauker et al., 2023). The deployment of LMs in society has led to calls to better understand these mechanisms (Hendrycks et al., 2021), so as to know why they work and when they fail (Mu and Andreas, 2020; Hernandez et al., 2021; Vig et al., 2020). In this work, we seek to understand _binding_, a foundational skill that underlies reasoning. How humans solve binding, i.e. recognize features of an object as bound to that object and not to others, is a fundamental problem in psychology (Treisman, 1996). Here, we study binding in LMs. Binding arises any time the LM has to reason about two or more objects of the same kind. For example, consider the following passage involving two people and two countries: Context: Alice lives in the capital city of France. Bob lives in the capital city of Thailand. Question: Which city does Bob live in? (1) In this example the LM has to represent the associations _lives(Alice, Paris)_ and _lives(Bob, Bangkok)_. We call this the _binding problem_--for the predicate _lives_, _Alice_ is bound to _Paris_ and _Bob_ to _Bangkok_. Since predicates are bound in-context, binding must occur in the activations, rather than in the weights as with factual recall (Meng et al., 2022). This raises the question: how do LMs represent binding information in the context such that they can be later recalled? Overall, our key technical contribution is the identification of a robust general mechanism in LMs for solving the binding problem. The mechanism relies on _binding IDs_, which are abstract concepts that LMs use internally to mark variables in the same predicate apart from variables in other predicates (Fig. 1). Using causal mediation analysis we empirically verify two key properties of the binding ID mechanism (Section 3). Turning to the structure of binding IDs, we find that binding IDs are represented as vectors which are bound to variables by simple addition (Section 4). Further, we show that binding IDs occupy a subspace, in the sense that linear combinations of binding IDs are still valid binding IDs, even though random vectors are not. Lastly, we find that binding IDs are ubiquitous and transferable (Section 5). 
They are used by every sufficiently large model in the LLaMA (Touvron et al., 2023) and Pythia (Biderman et al., 2023) families, and their fidelity increases with scale. They are used for a variety of synthetic binding tasks with different surface forms, and binding vectors from one task transfer to other tasks. Finally, we qualify our findings by showing that despite their ubiquity, binding IDs are not universal: we exhibit a question-answering task where an alternate mechanism, "direct binding", is used instead. ## 2 Preliminaries In this section we define the _binding task_ and explain causal mediation analysis, our main experimental technique. **Binding task.** Formally, the binding task consists of a set of entities \(\mathcal{E}\) and a set of attributes \(\mathcal{A}\). An \(n\)-entity instance of the binding task consists of a context that is constructed from \(n\) entities \(e_{0},\dots,e_{n-1}\in\mathcal{E}\) and \(n\) attributes \(a_{0},\dots,a_{n-1}\in\mathcal{A}\), and we denote the corresponding context as \(\mathbf{c}=\mathrm{ctxt}(e_{0}\leftrightarrow a_{0},\dots,e_{n-1}\leftrightarrow a _{n-1})\). For a context \(\mathbf{c}\), we use \(E_{k}(\mathbf{c})\) and \(A_{k}(\mathbf{c})\) to denote the \(k\)-th entity and the \(k\)-th attribute of the context \(\mathbf{c}\), for \(k\in[0,n-1]\). We will drop the dependence on \(\mathbf{c}\) for brevity when the choice of \(\mathbf{c}\) is clear from context. In the capitals task, which is the main task we study for most of the paper, \(\mathcal{E}\) is a set of single-token names, and \(\mathcal{A}\) is a set of single-token countries. Quote 1 is an example instance of the capitals task with context \(\mathbf{c}=\mathrm{ctxt}(Alice\leftrightarrow France,Bob\leftrightarrow Thailand)\). In this context, \(E_{0}\) is \(Alice\), \(A_{0}\) is \(France\), etc. Given a context \(\mathbf{c}\), we are interested in the model's behavior when queried with each of the \(n\) entities present in \(\mathbf{c}\). For any \(k\in[0,n-1]\), when queried with the entity \(E_{k}\) the model should place high probability on the answer matching \(A_{k}\). In our running example, the model should predict "Paris" when queried with "Alice", and "Bangkok" when queried with "Bob". To evaluate a model's behavior on a binding task, we sample \(N=100\) contexts. For each context \(\mathbf{c}\), we query the LM with every entity mentioned in the context, which returns a vector of log probabilities over every token in the vocabulary. The _mean log prob_ metric measures the mean of the log probability assigned to the correct attribute token. Top-1 accuracy measures the proportion of queries where the correct attribute token has the highest log probability out of all attribute tokens. However, we will instead use the _median-calibrated accuracy_(Zhao et al., 2021), which calibrates the log probabilities with the median log probability before taking the top-1 accuracy. We discuss this choice in Appendix A. **Causality in autoregressive LMs.** We utilize inherent causal structure in autoregressive LMs. Let an LM have \(n_{\text{layers}}\) transformer layers and a \(d_{\text{model}}\)-dimensional activation space. For every token position \(p\), we use \(Z_{p}\in\mathbb{R}^{n_{\text{layers}}\times d_{\text{model}}}\) to denote the stacked set of internal activations1 at token \(p\) (see Fig. 1(a)). We refer to the collective internal activations of the context as \(Z_{\text{contact}}\). 
In addition, we denote the activations at the token for the \(k\)-th entity as \(Z_{E_{k}}\), and the \(k\)-th attribute as \(Z_{A_{k}}\). We sometimes write \(Z_{A_{k}}(\mathbf{c}),Z_{\text{context}}(\mathbf{c})\), etc. to make clear the dependence on the context \(\mathbf{c}\). Footnote 1: These are the pre-transformer layer activations, sometimes referred to as the _residual stream_. Figure 1: The Binding ID mechanism. The LM learns abstract binding IDs (drawn as triangles or squares) which distinguish between entity-attribute pairs. Binding functions \(\Gamma_{E}\) and \(\Gamma_{A}\) bind entities and attributes to their abstract binding ID, and store the results in the activations. To answer queries, the LM identifies the attribute that shares the same binding ID as the queried entity. Fig. 2a shows that \(Z_{\text{context}}\) contains all the information about the context that the LM uses. We thus study the structure of \(Z_{\text{context}}\) using _causal mediation analysis_, a widely used tool for understanding neural networks (Vig et al., 2020; Geiger et al., 2021; Meng et al., 2022). Causal mediation analysis involves substituting one set of activations in a network for another, and we adopt the \(/\). notation (from Mathematica) to denote this. For example, for activations \(Z_{*}\in\mathbb{R}^{n_{\text{input}}\times d_{\text{multi}}}\), and a token position \(p\) in the context, \(Z_{\text{context}}/.\{Z_{p}\to Z_{*}\}=[Z_{0},\dots,Z_{p-1},Z_{*},Z_{p+1},\dots]\). Similarly, for a context \(\mathbf{c}=\operatorname{ctx}(e_{0}\leftrightarrow a_{0},\dots,e_{n-1} \leftrightarrow a_{n-1})\), we have \(\mathbf{c}/.\{E_{k}\to e_{*}\}=\operatorname{ctx}(e_{0}\leftrightarrow a_{0},\dots,e_{*}\leftrightarrow a_{k},\dots,e_{n-1}\leftrightarrow a_{n-1})\). Given a causal graph, causal mediation analysis determines the role of an intermediate node by experimentally intervening on the value of the node and measuring the model's output on various queries. For convenience, when the model answers queries in accordance to a context \(\mathbf{c}\), we say that the model _believes2_\(\mathbf{c}\). If there is no context consistent with the language model's behavior, then we say that the LM is _confused_. Footnote 2: We do not claim or assume that LMs actually have beliefs in the sense that humans do. This is a purely notational choice to reduce verbosity. As an example, suppose we are interested in the role of the activations \(Z_{A_{0}}\) in Fig. 2a. To apply causal mediation analysis, we would: 1. Obtain \(Z_{\text{context}}\) by running the model on the original context \(\mathbf{c}\) (which we also refer to as the _target_ context) (Fig. 2a) 2. Obtain \(Z^{\prime}_{\text{context}}\) by running the model on a different context \(\mathbf{c}^{\prime}\) (i.e. _source_ context) (Fig. 2b) 3. Modify \(Z_{\text{context}}\) by replacing \(Z_{A_{0}}\) from the target context with \(Z^{\prime}_{A_{0}}\) from the source context (Fig. 2c), while keeping all other aspects of \(Z_{\text{context}}\) the same, resulting in \(Z^{\text{intervened}}_{\text{context}}=Z_{\text{context}}/.\{Z_{A_{0}} \to Z^{\prime}_{A_{0}}\}\) 4. Evaluate the model's beliefs based on the new \(Z^{\text{intervened}}_{\text{context}}\) We can infer the causal role of \(Z_{A_{0}}\) from how the intervention \(Z_{\text{context}}/.\{Z_{A_{0}}\to Z^{\prime}_{A_{0}}\}\) changes the model's beliefs. 
Intuitively, if the model retains its original beliefs \(\mathbf{c}\), then \(Z_{A_{0}}\) has no causal role in the model's behavior on the binding task. On the other hand, if the model now believes the source context \(\mathbf{c}^{\prime}\), then \(Z_{A_{0}}\) contains all the information in the context. In reality both hypothetical extremes are implausible, and in Section 3 we discuss a more realistic hypothesis. A subtle point is that we study how different components of \(Z_{\text{context}}\) store information about the context (and thus influence behavior), and not how \(Z_{\text{context}}\) itself is constructed. We thus suppress Figure 2: **a) Causal diagram for autoregressive LMs. From input context \(\operatorname{ctx}(e_{0}\leftrightarrow a_{0},e_{1}\leftrightarrow a_{1})\), the LM constructs internal representations \(Z_{\text{context}}\). We will mainly study the components of \(Z_{\text{context}}\) boxed in blue. b) A secondary run of the LM on context \(\operatorname{ctx}(e_{2}\leftrightarrow a_{2},e_{3}\leftrightarrow a_{3})\) to produce \(Z^{\prime}_{\text{context}}\). c) An example intervention where \(Z_{\text{context}}\) is modified by replacing \(Z_{A_{0}}\to Z^{\prime}_{A_{0}}\) from \(Z^{\prime}_{\text{context}}\).** the causal influence that \(Z_{A_{0}}\) has on downstream parts of \(Z_{\text{context}}\) (such as \(Z_{E_{1}}\) and \(Z_{A_{1}}\)) by freezing the values of \(Z_{E_{1}}\) and \(Z_{A_{1}}\) in \(Z_{\text{context}}^{\text{intervened}}\) instead of recomputing them based on \(Z_{A_{0}}^{\prime}\). ## 3 Existence of Binding IDs In this section, we first describe our hypothesized binding ID mechanism. Then, we identify two key predictions of the mechanism, factorizability and position independence, and verify them experimentally. We provide an informal argument in Appendix B for why this binding ID mechanism is the only mechanism consistent with factorizability and position independence. **Binding ID mechanism.** We claim that to bind attributes to entities, the LM learns abstract binding IDs that it assigns to entities and attributes, so that entities and attributes bound together have the same binding ID (Fig. 1). In more detail, our informal description of the binding ID mechanism is: 1. For entity \(E_{k}\), encode both the entity \(E_{k}\) and the binding ID \(k^{3}\) in the activations \(Z_{E_{k}}\). 2. For attribute \(A_{k}\), encode both the attribute \(A_{k}\) and the binding ID \(k\) in the activations \(Z_{A_{k}}\). 3. To answer a query for entity \(E_{k}\), retrieve from \(Z_{\text{context}}\) the attribute that shares the same binding ID as \(E_{k}\). Further, for activations \(Z_{E_{k}}\) and \(Z_{A_{k}}\), the binding ID and the entity/attribute are the only information they contain that affects the query behavior. More formally, there are _binding functions_\(\Gamma_{E}(e,k)\) and \(\Gamma_{A}(a,k)\) that fully specify how \(Z_{E}\) and \(Z_{A}\) bind entities/attributes with binding IDs. Specifically, if \(E_{k}=e\in\mathcal{E}\), then we can replace \(Z_{E_{k}}\) with \(\Gamma_{E}(e,k)\) without changing the query behavior, and likewise for \(Z_{A}\). 
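Stripped of all neural detail, the claimed mechanism is just a keyed lookup. The toy sketch below is purely illustrative (it is not how the LM computes anything): \(\Gamma_{E}\) and \(\Gamma_{A}\) tag each entity and attribute with its binding ID, and a query returns the attribute whose ID matches that of the queried entity.

```python
def gamma_E(entity, k):            # toy stand-in for the entity binding function
    return {"entity": entity, "id": k}

def gamma_A(attribute, k):         # toy stand-in for the attribute binding function
    return {"attribute": attribute, "id": k}

def answer_query(z_context, queried_entity):
    """Return the attribute whose binding ID matches the queried entity's."""
    k = next(z["id"] for z in z_context if z.get("entity") == queried_entity)
    return next(z["attribute"] for z in z_context if "attribute" in z and z["id"] == k)

z_context = [gamma_E("Alice", 0), gamma_A("France", 0),
             gamma_E("Bob", 1), gamma_A("Thailand", 1)]
assert answer_query(z_context, "Bob") == "Thailand"
```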
More generally, given \(Z_{\text{context}}\) with entity representations \(\Gamma_{E}(e_{0},0),\ldots,\Gamma_{E}(e_{n-1},n-1)\) and attribute representations \(\Gamma_{A}(a_{0},\pi(0)),\ldots,\Gamma_{A}(a_{n-1},\pi(n-1))\) for a permutation \(\pi\), the LM should answer queries according to the context \(\mathbf{c}=\operatorname{ctxt}(e_{0}\leftrightarrow a_{\pi^{-1}(0)},\ldots, e_{n-1}\leftrightarrow a_{\pi^{-1}(n-1)})\). This implies two properties in particular, which we will test in the following subsections: * **Factorizability:** if we replace \(Z_{A_{k}}\) with \(Z_{A_{k}^{\prime}}\), then the model will bind \(E_{k}\) to \(A_{k}^{\prime}\) instead of \(A_{k}\), i.e. it will believe \(\mathbf{c}./\{A_{k}\to A_{k}^{\prime}\}\). This is because \(Z_{A_{k}}^{\prime}\) encodes \(\Gamma_{A}(A_{k}^{\prime},k)\) and \(Z_{A_{k}}\) encodes \(\Gamma_{A}(A_{k},k)\). Substituting \(Z_{A_{k}}\to Z_{A_{k}^{\prime}}\) will overwrite \(\Gamma_{A}(A_{k},k)\) with \(\Gamma_{A}(A_{k}^{\prime},k)\), causing the model to bind \(E_{k}\) to \(A_{k}^{\prime}\). * **Position independence:** if we e.g. swap \(Z_{A_{0}}\) and \(Z_{A_{1}}\), the model still binds \(A_{0}\leftrightarrow E_{0}\) and \(A_{1}\leftrightarrow E_{1}\), because it looks up attributes based on binding ID and not position in the context. In Section 4, we construct fine-grained modifications to the activation \(Z\) that modify the binding ID but not the attributes, allowing us to test the binding hypothesis more directly. In Section 5 we extend this further by showing that binding IDs can be transplanted from entirely different tasks. ### Factorizability of activations The first property of \(Z_{\text{context}}\) we test is _factorizability_. In our claimed mechanism, information is highly localized\(-Z_{A_{k}}\) contains all relevant information about \(A_{k}\), and likewise for \(Z_{E_{k}}\). Therefore, we expect LMs that implement this mechanism to have factorizable activations: for any contexts \(\mathbf{c},\mathbf{c}^{\prime}\), substituting \(Z_{E_{k}}\to Z_{E_{k}}(\mathbf{c}^{\prime})\) into \(Z_{\text{context}}(\mathbf{c})\) will cause the model to believe \(\mathbf{c}/.\{E_{k}\to E_{k}^{\prime}\}\), and substituting \(Z_{A_{k}}\to Z_{A_{k}}(\mathbf{c}^{\prime})\) cause the model to believe \(\mathbf{c}/.\{A_{k}\to A_{k}^{\prime}\}\). To test this concretely, we considered the capitals task from Section 2 with \(n=2\) entity-attribute pairs. We computed representations for two contexts \(\mathbf{c}=\operatorname{ctxt}(e_{0}\leftrightarrow a_{0},e_{1}\leftrightarrow a _{1})\) and \(\mathbf{c}^{\prime}=\operatorname{ctxt}(e_{0}^{\prime}\leftrightarrow a_{0}^{ \prime},e_{1}^{\prime}\leftrightarrow a_{1}^{\prime})\), and used causal mediation analysis (Section 2) to swap representations from the source context \(\mathbf{c}^{\prime}\) into the target context \(\mathbf{c}\). Specifically, we fix \(k\in\{0,1\}\) and intervene on either just the entity (\(Z_{E_{k}}\to Z_{E_{k}}^{\prime}\)), just the attribute, neither, or both. We then measure the mean log probs for all possible queries (\(E_{0},E_{1},E_{0}^{\prime},E_{1}^{\prime}\)). For instance, swapping \(A_{k}\) with \(A_{k}^{\prime}\) in \(Z_{\text{context}}\) should lead \(A_{k}^{\prime}\) (and not \(A_{k}\)) to have high log-probability when \(E_{k}\) is queried. Results are shown in Fig. 3 and support the factorizability hypothesis. As an example, consider Fig. 2(a). 
In the None setting (no intervention), we see high log probs for \(A_{0}\) when queried for \(E_{0}\), and for \(A_{1}\) when queried for \(E_{1}\). This indicates that the LM is able to solve this task. Next, consider the Attribute intervention setting (\(A_{0}\to A_{0}^{\prime}\)): querying for \(E_{0}\) now gives high log probs for \(A_{0}^{\prime}\), and querying for \(E_{1}\) gives \(A_{1}\) as usual. Finally, in the Both setting (where both entity and attribute are swapped), querying \(E_{0}^{\prime}\) returns \(A_{0}^{\prime}\) while querying \(E_{0}\) leads to approximately uniform predictions. **Experiment details.** We use LLaMA-30b here and elsewhere unless otherwise stated. In practice, we found that activations for both the entity token and the subsequent token encode the entity binding information. Thus for all experiments in this paper, we expand the definition of \(Z_{E_{k}}\) to include the token activations immediately after \(E_{k}\). ### Position independence We next turn to position independence, which is the other property we expect LMs implementing the binding ID mechanism to have. This says that permuting the order of the \(Z_{E_{k}}\) and \(Z_{A_{k}}\) should have no effect on the output, because the LM looks only at the binding IDs and not the positions of entity or attribute activations.

Figure 4: Top: Mean log probs for entity interventions. Bottom: Mean log probs for attributes. For brevity, let \(Z_{k}\) refer to \(Z_{E_{k}}\) or \(Z_{A_{k}}\). The grey and green vertical lines indicate the original positions for \(Z_{0}\) and \(Z_{1}\) respectively. The x-axis marks \(x\), \(Z_{0}\)'s new position. Under the position interventions \(\{X_{0}\to x,X_{1}\to X_{1}-(x-X_{0})\}\), the grey line is the _control condition_ with no interventions, and the green line is the _swapped condition_ where \(Z_{0}\) and \(Z_{1}\) have swapped positions.

Figure 3: Factorizability results. Each row corresponds to querying for a particular entity. Plotted are the mean log prob for all four attributes. Highlighted squares are predicted by factorizability.

To apply causal interventions to the positions, we use the fact that transformers use positional embeddings to encode the (relative) position of each token in the input. We can thus intervene on these embeddings to "move" one of the \(Z_{k}\)'s to another location \(k^{\prime}\). Formally, we let \(X_{k}\) describe the position embedding for \(Z_{k}\), and denote the position intervention as \(\{X_{k}\to k^{\prime}\}\). In Appendix C we describe how to do this for rotary position embeddings (RoPE), which underlie all the models we study. For now, we will assume this intervention as a primitive and discuss experimental results. For our experiments, we again consider the capitals task with \(n=2\). Let \(X_{E_{0}}\) and \(X_{E_{1}}\) denote the positions of the two entities. We apply interventions of the form \(\{X_{E_{0}}\to x,X_{E_{1}}\to X_{E_{1}}-(x-X_{E_{0}})\}\), for \(x\in\{X_{E_{0}},X_{E_{0}}+1,\dots,X_{E_{1}}\}\). This measures the effect of gradually moving the two entity positions past each other: when \(x=X_{E_{0}}\), no intervention is performed (_control condition_), and when \(x=X_{E_{1}}\) the entity positions are swapped (_swapped condition_). We repeat the same experiment with attribute activations and measure the mean log probs in both cases. Results are shown in Fig. 4. As predicted under position independence, position interventions result in little change in model behavior.
Consider the _swapped condition_ at the green line. Had the binding information been entirely encoded in position, we would expect a complete switch in beliefs compared to the _control condition_. In reality, we observe almost no change in mean log probs for entities and a small change in mean log probs for attributes that seems to be part of an overall gradual trend. We interpret this gradual trend as an artifact of _position-dependent bias_, and not as evidence against position independence. We view it as a bias because it affects all attributes regardless of how they are bound--attributes that are shifted to later positions always have higher log probs. We provide further discussion of this bias, as well as other experimental details, in Appendix C. ## 4 Structure of Binding ID The earlier section shows evidence for the binding ID mechanism. Here, we investigate two hypotheses on the structure of binding IDs and binding functions. The first is that the binding functions \(\Gamma_{A}\) and \(\Gamma_{E}\) are additive, which lets us think of binding IDs as _binding vectors_. The second is contingent on the first, and asks if binding vectors have a geometric relationship with each other. ### Additivity of Binding Functions Prior interpretability research has proposed that transformers represent features linearly (Elhage et al., 2021). Therefore a natural hypothesis is that both entity/attribute representations and abstract binding IDs are vectors in activation space, and that the binding function simply adds the vectors for entity/attribute and binding ID. We let the binding ID \(k\) be represented by the pair of vectors \([b_{E}(k),b_{A}(k)]\), and the representations of entity \(e\) and attribute \(a\) be \(f_{E}(e)\) and \(f_{A}(a)\) respectively. Then, we hypothesize that the binding functions can be linearly decomposed as: \[\Gamma_{A}(a,k)=f_{A}(a)+b_{A}(k),\quad\Gamma_{E}(e,k)=f_{E}(e)+b_{E}(k). \tag{1}\] Binding ID vectors seem intuitive and plausibly implementable by transformer circuits. To experimentally test this, we seek to extract \(b_{A}(k)\) and \(b_{E}(k)\) in order to perform vector arithmetic on them. We use (1) to extract the _differences_ \(\Delta_{E}(k):=b_{E}(k)-b_{E}(0)\), \(\Delta_{A}(k):=b_{A}(k)-b_{A}(0)\). Rearranging (1), we obtain \[\Delta_{A}(k)=\Gamma_{A}(a,k)-\Gamma_{A}(a,0),\quad\Delta_{E}(k)=\Gamma_{E}(e,k)-\Gamma_{E}(e,0). \tag{2}\] We estimate \(\Delta_{A}(k)\) by sampling \(\mathbb{E}_{\mathbf{c},\mathbf{c}^{\prime}}[Z_{A_{k}}(\mathbf{c})-Z_{A_{0}}(\mathbf{c}^{\prime})]\), and likewise for \(\Delta_{E}(k)\). **Mean interventions.** With the difference vectors, we can modify binding IDs by performing _mean interventions_, and observe how model behavior changes. The attribute mean intervention switches the binding ID vectors in \(Z_{A_{0}}\) and \(Z_{A_{1}}\) with the interventions \(Z_{A_{0}}\to Z_{A_{0}}+\Delta_{A}(1),Z_{A_{1}}\to Z_{A_{1}}-\Delta_{A}(1)\). The entity mean intervention similarly switches the binding ID vectors in \(Z_{E_{0}}\) and \(Z_{E_{1}}\). Additivity predicts that performing either mean intervention will reverse the model behavior: \(E_{0}\) will be associated with \(A_{1}\), and \(E_{1}\) with \(A_{0}\). **Experiments.** In our experiments, we fix \(n=2\) and use 500 samples to estimate \(\Delta_{E}(1)\) and \(\Delta_{A}(1)\). We then perform four tests, and evaluate the model accuracy under the original belief. The Control test has no interventions, and the accuracy reflects the model's base performance.
The Attribute and Entity tests perform the attribute and entity mean interventions, which should lead to a complete switch in model beliefs so that the accuracy is near 0. Table 1 shows agreement with additivity: the accuracies are above \(99\%\) for Control, and below \(3\%\) for Attribute and Entity. As a further check, we perform both attribute and entity mean interventions simultaneously, which should cancel out and thus restore accuracy. Indeed, Table 1 shows that accuracy for Both is above \(97\%\). Finally, to show that the _specific_ directions obtained by the difference vectors matter, we sample random vectors with the same magnitude but random directions, and perform the same mean interventions with the random vectors. These random vectors have no effect on the model behavior. ### The Geometry of Binding ID Vectors Section 4.1 shows that we can think of binding IDs as pairs of ID vectors, and that randomly chosen vectors do not function as binding IDs. We next investigate the geometric structure of valid binding vectors and find that linear interpolations or extrapolations of binding vectors are often also valid binding vectors. This suggests that binding vectors occupy a continuous _binding subspace_. We find evidence of a metric structure in this space, such that nearby binding vectors are hard for the model to distinguish, but far-away vectors can be reliably distinguished and thus used for the binding task. To perform our investigation, we apply variants of the mean interventions in Section 4.1. As before, we start with an \(n=2\) context, thus obtaining representations \(Z_{0}=(Z_{E_{0}},Z_{A_{0}})\) and \(Z_{1}=(Z_{E_{1}},Z_{A_{1}})\). We first erase the binding information by subtracting \((\Delta_{E}(1),\Delta_{A}(1))\) from \(Z_{1}\), which reduces accuracy to chance. Next, we will add vectors \(v_{0}=(v_{E_{0}},v_{A_{0}})\) and \(v_{1}=(v_{E_{1}},v_{A_{1}})\) to the representations \(Z\); if doing so restores accuracy, then we view \((v_{E_{0}},v_{A_{0}})\) and \((v_{E_{1}},v_{A_{1}})\) as valid binding pairs. To generate different choices of \(v\), we take linear combinations across a two-dimensional space. The basis vectors for this space are \((\Delta_{E}(1),\Delta_{A}(1))\) and \((\Delta_{E}(2),\Delta_{A}(2))\) obtained by averaging across an \(n=3\) context. Fig. 5 shows the result for several different combinations, where the coordinates of \(v_{0}\) are fixed and shown in green while the coordinates of \(v_{1}\) vary. When \(v_{1}\) is close to \(v_{0}\), the LM gets close to 50% accuracy, which indicates confusion. Far away from \(v_{1}\), the network consistently achieves high accuracy, demonstrating that linear combinations of binding IDs (even with negative coefficients) are themselves valid binding IDs. See Appendix G for details. \begin{table} \begin{tabular}{c||c|c|c|c|c|c|c} Test condition & Control & Attribute & Entity & Both & Attribute & Entity & Both \\ \hline Querying \(E_{0}\) & 0.99 & 0.00 & 0.00 & 0.97 & 0.98 & 0.98 & 0.97 \\ Querying \(E_{1}\) & 1.00 & 0.03 & 0.01 & 0.99 & 1.00 & 1.00 & 1.00 \\ \end{tabular} \end{table} Table 1: Left: Mean calibrated accuracies for mean interventions on four test conditions. Columns are the test conditions, and rows are queries. Right: Mean interventions with random vectors. Figure 5: The plots show the mean median-calibrated accuracy when one pair of binding ID, \(v_{0}\), is fixed at the green circle, and the other, \(v_{1}\), is varied across the grid. 
The binding IDs \(b(0)\), \(b(1)\), and \(b(2)\) are shown as the origin of the arrows, the end of the horizontal arrow, and the end of the diagonal arrow respectively. We use LLaMA-13b for computational reasons. The geometry of the binding subspace hints at circuits (Elhage et al., 2021) in LMs that process binding vectors. For example, we speculate that certain attention heads might be responsible for comparing binding ID vectors, since the attention mechanism computes attention scores using a quadratic form which could provide the metric over the binding subspace. ## 5 Generality and Limitations of Binding ID The earlier sections investigate binding IDs for one particular task: the capitals task. In this section, we evaluate their generality. We first show that binding vectors are used for a variety of tasks and models. We then show evidence that the binding vectors are task-agnostic: vectors from one task transfer across many different tasks. However, we show that our mechanism is not fully universal, by exhibiting a question-answering task that uses an alternative binding mechanism. **Generality of binding ID vectors.** We evaluate the generality of binding vectors across models and tasks. For a (model, task) pair, we compute the median-calibrated accuracy on the \(n=3\) context under three conditions: (1) the control condition in which no interventions are performed, and the (2) entity and (3) attribute conditions in which entity or attribute mean interventions (Section 4.1) are performed. We use the mean interventions to permute binding pairs by a cyclic shift and measure accuracy according to this shift (see Appendix F). As shown in Figure 6, the interventions induce the expected behavior on most tasks; moreover, their effectiveness increases with model scale, suggesting that larger models have more robust structured representations. **Transfer across tasks.** We next show that binding vectors often transfer across tasks. Without access to the binding vectors \([b_{E}(k),b_{A}(k)]\), we instead test if the difference vectors \([\Delta^{\text{src}}_{E}(k),\Delta^{\text{src}}_{A}(k)]\) from a source task, when applied to a target task, result in valid binding IDs. To do so, we follow a similar procedure to Section 4.2: First, we erase binding information by subtracting \([\Delta^{\text{tar}}_{E}(k),\Delta^{\text{tar}}_{A}(k)]\) for the target task from each target-task representation \([Z_{E_{k}},Z_{A_{k}}]\), which results in near-chance accuracy. Then, we add back in \([\Delta^{\text{src}}_{E}(k),\Delta^{\text{src}}_{A}(k)]\) computed from the _source_ task with the hope of restoring performance. Table 2 shows results for a variety of source tasks when using capitals as the target task. Accuracy is consistently high, even when the source task has limited surface similarity to the target task. For example, the shapes task contains descriptions about geometrical shapes and their colors, and parallel puts all entities before any attributes instead of interleaving them as in capitals. We include two baselines for comparison: replacing \(\Delta^{\text{src}}(k)\) with the zero vector ("Zeros"), or picking a randomly oriented difference vector as in Table 1 ("Random"). Both lead to chance accuracy. See Appendix D for more details on the tasks.
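All of the interventions in Sections 4 and 5 reduce to simple vector arithmetic on extracted activations. The sketch below assumes the relevant activations have already been collected into NumPy arrays (all names are illustrative, not from the paper's code); it mirrors the estimation of the difference vectors in (2), the mean intervention of Section 4.1, and the erase-and-restore transfer test described above.

```python
import numpy as np

def estimate_delta(Z_k_samples, Z_0_samples):
    """Estimate Delta(k) = b(k) - b(0) as the mean difference of activations at
    position k and position 0 over independently sampled contexts, as in (2)."""
    return Z_k_samples.mean(axis=0) - Z_0_samples.mean(axis=0)

def mean_intervention(Z_0, Z_1, delta_1):
    """Swap the binding-ID components of two activations (Section 4.1):
    Z_0 <- Z_0 + Delta(1), Z_1 <- Z_1 - Delta(1)."""
    return Z_0 + delta_1, Z_1 - delta_1

def transfer_intervention(Z_k, k, delta_target, delta_source):
    """Cross-task transfer test: erase the target task's binding vector,
    then add the source task's estimate back in."""
    return Z_k - delta_target[k] + delta_source[k]
```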
\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c} Task & capitals & parallel & shapes & fruits & Bios & Zeros & Random \\ \hline Mean accuracy & 0.88 & 0.87 & 0.71 & 0.80 & 0.47 & 0.30 & 0.31 \\ Mean log prob & -1.01 & -1.07 & -1.18 & -1.21 & -1.64 & -1.86 & -2.15 \\ \end{tabular} \end{table} Table 2: The mean median-calibrated accuracy and mean log prob for mean interventions on \(n=3\) capitals using binding ID estimates from other tasks. Random chance has \(0.33\) mean accuracy. Figure 6: Left: models in Pythia and LLaMA on capitals. LLaMA-65b not present for computational reasons. Right: LLaMA-30b on binding tasks. Unlike others, the bios task has attributes that are several tokens long. The fact that binding vectors transfer across tasks, together with the results from Section 4, suggests that there could be a task-agnostic subspace in the model's activations reserved for binding vectors. **Direct binding in MCQ.** While binding IDs are used for many tasks, they are not universal. We briefly identify an alternate binding mechanism, the _direct binding_ mechanism, that is used for a multiple-choice question-answering task (MCQ). In MCQ, each label (A or B) has to be bound to its associated option text. In this task, instead of binding variables to an abstract binding ID, the model directly binds the label to the option (Fig. 7). We provide the full details of this task and further explanations in Appendix E. ## 6 Related work **Causal mediation analysis.** In recent years, causal methods have gained popularity in post hoc interpretability (Meng et al., 2022; Geiger et al., 2021). Instead of relying on correlations, which could lead to spurious features (Hewitt and Liang, 2019), causal mediation analysis (Vig et al., 2020) performs causal interventions on internal states of LMs to understand their causal role in LM behavior. Our work shares the same causal perspective adopted by many in this field. **Knowledge recall.** A line of work studies recalling factual associations that LMs learn from pretraining (Geva et al., 2020; Dai et al., 2021; Meng et al., 2022; Geva et al., 2023; Hernandez et al., 2023). This is spiritually related to binding, as entities must be associated with facts about them. However, this work studies factual relations learned from _pretraining_ and how they are recalled from model _weights_. In contrast, we study representations of relations learned from _context_, and how they are recalled from model _activations_. More recently, Hernandez et al. (2023) found a method to construct bound representations by directly binding attribute representations to entity representations. In contrast, our work investigates bound representations constructed by the LM itself, and identifies that the binding ID mechanism (and not direct binding) is the mechanism that LM representations predominantly use. An avenue for future work is to study how the bound representations constructed by Hernandez et al. (2023) relate to the direct binding mechanism we identified in the MCQ task. **Symbolic representations in connectionist systems.** Many works have studied how neural networks represent symbolic concepts in activation space (Mikolov et al., 2013; Tenney et al., 2019; Belinkov and Glass, 2019; Rogers et al., 2021; Patel and Pavlick, 2021). To gain deeper insights into how these representations are used for reasoning, recent works have studied representations used for specialized reasoning tasks (Nanda et al., 2023; Li et al., 2022, 2021).
Our work shares the motivation of uncovering how neural networks implement structured representations that enable reasoning. **Mechanistic Interpretability.** Mechanistic interpretability aims to uncover circuits (Elhage et al., 2021; Wang et al., 2022; Wu et al., 2023), often composed of attention heads, that are embedded in language models. In our work, we study language model internals on a more coarse-grained level. We identify structures in representations that have causal influences on model behavior, but how circuits construct these representations or utilize them is left as future work. Figure 7: Direct binding in MCQ task. \(O_{k}\) and \(L_{k}\) denote options and labels respectively. Under direct binding, \(Z_{O_{0}}\) and \(Z_{O_{1}}\) are represented by a binding function \(\Lambda_{O}\) that directly binds option and label together, whereas \(Z_{L_{0}}\) and \(Z_{L_{1}}\) are causally irrelevant. ## 7 Conclusion In this paper we identify and study the binding problem, a common and fundamental reasoning subproblem. We find that pretrained LMs can solve the binding task by binding entities and attributes to abstract binding IDs. Then, we identify that the binding IDs are vectors from a binding subspace with a notion of distance. Lastly, we find that the binding IDs are used broadly for a variety of binding tasks and are present in all sufficiently large models that we studied. Taking a broader view, we see our work as a part of the endeavor to interpret LM reasoning by decomposing it into primitive skills. In this work we identified the binding skill, which is used in several settings and has a simple and robust representation structure. An interesting direction of future work would be to identify other primitive skills that support general purpose reasoning and have similarly interpretable mechanisms. Our work also suggests that ever-larger LMs may still have interpretable representations. A common intuition is that larger models are more complex, and hence more challenging to interpret. Our work provides a counterexample: as LMs become larger, their representations can become _more_ structured and interpretable, since only the larger models exhibited binding IDs (Fig. 6). Speculating further, the fact that large enough models in two unrelated LM families learn the same structured representation strategy points to a convergence in representations with scale. This raises the philosophical question: could there be an ultimate representation that these LMs are converging towards? Perhaps the properties of natural language corpora and LM inductive biases lead to the inevitable development of certain core representation strategies that are invariant to changes in model hyperparameters or exact dataset composition. This would encouragingly imply that interpretability results can transfer across models--studying the core representations of any sufficiently large model would yield insights into other similarly large models because of their convergent core structure. #### Acknowledgments We thank Danny Halawi, Fred Zhang, Erik Jenner, Cassidy Laidlaw, Shawn Im, Arthur Conmy, Shivam Singhal, and Olivia Watkins for their helpful feedback. JF was supported by the Long-Term Future Fund. JS was supported by the National Science Foundation under Grants No. 2031899 and 1804794. In addition, we thank Open Philanthropy for its support of both JS and the Center for Human-Compatible AI.
2309.02399
The Batik-plays-Mozart Corpus: Linking Performance to Score to Musicological Annotations
We present the Batik-plays-Mozart Corpus, a piano performance dataset combining professional Mozart piano sonata performances with expert-labelled scores at a note-precise level. The performances originate from a recording by Viennese pianist Roland Batik on a computer-monitored B\"osendorfer grand piano, and are available both as MIDI files and audio recordings. They have been precisely aligned, note by note, with a current standard edition of the corresponding scores (the New Mozart Edition) in such a way that they can further be connected to the musicological annotations (harmony, cadences, phrases) on these scores that were recently published by Hentschel et al. (2021). The result is a high-quality, high-precision corpus mapping scores and musical structure annotations to precise note-level professional performance information. As the first of its kind, it can serve as a valuable resource for studying various facets of expressive performance and their relationship with structural aspects. In the paper, we outline the curation process of the alignment and conduct two exploratory experiments to demonstrate its usefulness in analyzing expressive performance.
Patricia Hu, Gerhard Widmer
2023-09-05T17:13:47Z
http://arxiv.org/abs/2309.02399v2
# The Batik-Plays-Mozart Corpus: Linking Performance to Score to Musicological Annotations ###### Abstract We present the _Batik-plays-Mozart_ Corpus, a piano performance dataset combining professional Mozart piano sonata performances with expert-labelled scores at a note-precise level. The performances originate from a recording by Viennese pianist Roland Batik on a computer-monitored Bosendorfer grand piano, and are available both as MIDI files and audio recordings. They have been precisely aligned, note by note, with a current standard edition of the corresponding scores (the New Mozart Edition) in such a way that they can further be connected to the musicological annotations (harmony, cadences, phrases) on these scores that were recently published by [1]. The result is a high-quality, high-precision corpus mapping scores and musical structure annotations to precise note-level professional performance information. As the first of its kind, it can serve as a valuable resource for studying various facets of expressive performance and their relationship with structural aspects. In the paper, we outline the curation process of the alignment and conduct two exploratory experiments to demonstrate its usefulness in analyzing expressive performance. Patricia Hu\({}^{1}\) Gerhard Widmer\({}^{1,2}\)\({}^{1}\) Institute of Computational Perception, Johannes Kepler University Linz, Austria \({}^{2}\) LIT AI Lab, Linz Institute of Technology, Austria [email protected] ## 1 Introduction Music performance is a complex and nuanced activity that involves the interplay of various expressive features such as timing, dynamics, and articulation. Expressive performance research in music information retrieval (MIR) focuses on modeling expressive aspects of music performance by analyzing how performers use nuances in timing, dynamics, articulation, and other expressive features to convey their musical intentions, with the aim of developing computational models that can analyze, recognize, or synthesize expressive performances [2]. Recent research in this field for Western classical piano has focused on data-driven approaches both for performance generation [3, 4] and data creation in the form of large-scale MIDI performance data transcribed from audio recordings [5, 6]. While such data corpora can be useful for comparative performance analyses and related tasks (e.g., performer identification, performance style transfer), they lack the necessary precision and alignment information (with the underlying musical score) required to precisely map expressive intentions and parameters to underlying score features. Compared to these large-scale transcribed MIDI datasets, precise MIDI data (as recorded on computer controlled grand pianos such as the Yamaha Disklavier or Boesendorfer SE/Ceus series) along with their corresponding score alignment is somewhat limited in quantity and size [7, 8, 9]. The performances in such datasets are typically sourced from advanced piano students or piano competitions, whereas the digital scores are often obtained from open-source, user-curated online libraries such as MuseScore1. Footnote 1: [https://musescore.com/sheetmusic](https://musescore.com/sheetmusic) Regarding the performance-to-score alignment, one would ideally want to have note-by-note correspondence information; unfortunately, in the case of the largest of these datasets [7], score-performance alignments are only given at a rather coarse level of beats.
Score annotations conveying structural information such as underlying harmony or phrases are even more scarce. To address these limitations, we introduce the _Batik-plays-Mozart_ dataset2, in which we provide a set of expert performances of 12 complete Mozart piano sonatas (36 distinct movements) in MIDI format by concert pianist Roland Batik, precisely aligned, at a note-by-note-level, to a standard edition (the New Mozart Edition) of the score, thereby linking the performance information to a previously published dataset [1] of expert annotations of the scores in terms of harmony, cadence, and phrase structure. To the best of our knowledge, this is the first corpus of its kind, combining high quality digital score and structural annotations with expert performances in recorded MIDI format. We report two preliminary experiments to demonstrate the benefits of having precise performance-score-structure annotation alignments. Footnote 2: [https://github.com/huispaty/batik_plays_mozart](https://github.com/huispaty/batik_plays_mozart) The remainder of this paper is organised as follows: Section 2 presents a list of comparable expressive performance datasets currently publicly available. Section 3 de scribes the data origins, the used data formats, and the curation process. Section 4 gives an overview of the dataset, and Section 5 describes two preliminary experiments to demonstrate the benefits of performance-score-structure annotation alignments. Finally, Section 6 concludes the paper with some remarks for future work. ## 2 Related Work Several piano performance datasets have been published in the context of expressive performance analysis and performance rendering. While recently published datasets are considerably larger than _Batik-plays-Mozart_, they provide performance recordings solely in the form of MIDI transcribed from audio recordings [5, 6] or do not include a high-quality digital score ground truth [11]. Despite the encouraging results demonstrated by recent transcription models, they often introduce inaccuracies, such as incorrect note fragmentation, missed note onsets, and falsely identified notes [12]. Similarly, certain expressive performance aspects such as (micro-)timing and tempo can only be measured given either a temporal or note-wise score-performance mapping [2]. Nevertheless, these datasets remain useful for various related tasks such as symbolic music generation, music transcription and tagging, or high-level comparative performance analysis. Table 1 presents an overview of comparable piano performance datasets currently publicly available, for which precise (recorded) MIDI data, score-performance alignments and/or musicological annotations are available. Among these datasets, ASAP [7] stands out as the most extensive one, both in terms of musical pieces and performer range, with 1,068 performances beat-aligned to 222 scores, each annotated with key and time signature. In comparison to ASAP, all other publicly accessible datasets are significantly smaller: The Vienna 4x22 corpus [8] contains 22 different performances for excerpts of four different pieces, each aligned on a note level and provided in MusicXML 3, MIDI and audio format. The CrestMuse PEDB v2.0 [9] provides 35 pieces note-aligned to 411 performances, with scores provided in MusicXML and MIDI and performances in MIDI and WAV. The dataset also contains phrase structure annotations, however, merely in the format of PDF and plain text files, somewhat limiting their (re)usability. 
Footnote 3: [https://www.musicxml.com/](https://www.musicxml.com/) Footnote 4: [https://www.gramola.at/products/9003643987012](https://www.gramola.at/products/9003643987012) The MazurkaBL dataset [10] consists of a corpus of 44 Chopin Mazurkas with MusicXML scores that have been beat-aligned to 2000 performances. The performances themselves are not provided (neither as MIDI nor as audio); only beat positions and corresponding loudness values are given, along with the positions of tempo/dynamics markings in the score. ## 3 Curation Protocol and File Formats ### File origins The MIDI performance files originate from a performance of twelve Mozart piano sonatas by Viennese concert pianist Roland Batik on a computer-controlled Bosendorfer SE290 grand piano, the predecessor of the CEUS model. The Bosendorfer SE series measures each individual keystroke and pedal movement precisely, with onset and offset times being captured at a time resolution of 1.25ms. Hammer velocity values are captured in a proprietary file format, and converted and mapped to the 128 dynamics MIDI values (see [13] for conversion details). The audio recordings corresponding to those MIDI files can be purchased commercially 4. Footnote 5: [https://dme.mozarteum.at/DME/tma/start.php?l=](https://dme.mozarteum.at/DME/tma/start.php?l=) These MIDI performance data were originally aligned manually, on a note-to-note level, to a symbolic encoding of the score produced by our team [14, 15]. In order to make it possible to link the performance data in an unequivocal way to the musicological score annotations provided in the _Amnotated Mozart Sonatas_ dataset by Hentschel et al. [1], we decided to replace our score encoding in the alignments entirely by the score notes as given in the their dataset, which link to their annotations directly via absolute temporal score position. The scores in the _Amnotated Mozart Sonatas_ dataset conform to the New Mozart Edition 6 and are given in MuseScore format, with the harmony, phrase and cadence label annotations provided in tabular format, as tab-separated values (TSV) files. Footnote 6: [https://www.gramola.at/products/9003643987012](https://www.gramola.at/products/9003643987012) Footnote 7: [https://dme.mozarteum.at/DME/tma/start.php?l=](https://dme.mozarteum.at/DME/tma/start.php?l=) ### The match alignment format We provide the alignment between the above-mentioned score and performance files in the match file format [16], a file format for symbolic music alignment in a human-understandable textual form. It is structured sequentially, and the alignment information is given at the level of individual notes. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Size} & \multicolumn{2}{c|}{Modality} & \multicolumn{2}{c|}{Annotations} \\ & Pieces & Performances & MIDI & Score & Alignment & Other \\ \hline ASAP [7] & \(222\) & \(1,068\) & recorded & MusicXML & beat & time and key signature \\ Vienna4x22 [8] & \(4\) & \(88\) & recorded & MusicXML & note & - \\ CrestMuse PEDB [9] & \(35\) & \(411\) & recorded & MusicXML & note & phrase \\ MazurkaBL [10] & \(44\) & \(2,000\) & - & MusicXML & beat & dynamics, tempo markings \\ \hline \hline _Batik-plays-Mozart_ & \(36\) & \(36\) & recorded & MusicXML & note & phrase, harmony, cadence \\ \hline \end{tabular} \end{table} Table 1: An overview of publicly available comparable piano performance datasets for which precise recorded MIDI data, score-performance alignments and/or musicological annotations are available. 
The encoded alignment is complete in the sense that all performance and all score notes are captured. Each performance and each score note is represented with their respective note ID, and their respective alignment can be recorded with one out of three potential tuples: 1. A _match_ between score note and performance note, i.e., (score_id, performance_id), 2. a _deleted_ score note (score_id, ) which represents a score note omitted in the performance, or 3. an _inserted_ performance note (, performance_id), which marks a performed note for which there is no corresponding score note. Following this alignment encoding, each line in a match file corresponds to either a _match_, a _deletion_ or an _insertion_. Additional lines express (sustain or soft) pedal information, or encode meta information about the musical piece and performer. While the performance part in match corresponds to a lossless encoding of a corresponding performance in MIDI format, the score part captures essential information including onset, offset and duration in beats, and pitch, pitch spelling, and octave information for each score note. ### _Curation protocol_ To create note-level score-to-performance alignments, encoded in the match file format, between the performance MIDI data by pianist Roland Batik and scores and musicological annotations by Hentschel et al. [1], we follow the workflow as outlined below (see Fig. 1): 1. **Retrieve information from old alignment.** Given an old alignment file, we use partitura [17] to retrieve a score and performance representation which we parse into score and performance note arrays, sna_o and pna_o, to sequentially capture each (notated and performed) note with a unique note ID. In addition we retrieve a score-to-performance alignment, align_o, in the encoding format explained above (i.e., a list of note ID tuples expressing either a match, deletion or insertion). 2. **Retrieve score note array from MusicXML.** In the next step, we convert the annotated MuseScore format scores provided by Hentschel et al. [1] to MusicXML, assign unique note IDs to each note, and convert this score representation into a second score note array (sna_s). 3. **Unfold score note array.** We update the score note array obtained from MusicXML, sna_s, by unfolding it in accordance to the repetition structure found in the performance note array, pna_o. 6 Footnote 6: To reflect the same note occuring in a repeated segment, a suffix is added to the ID to reflect the number of occurrence, i.e. for a note with ID n14, the repeat structure unfolding is expressed as n14-1 for the first, and n14-2 for the second occurrence, respectively. 4. **Create score-score alignment.** In this step, we create a score-to-score alignment (align_s) by matching each note in the two score note arrays sna_o and sna_s using its pitch, onset and duration information in beats. Any notes in sna_o and sna_s not matched automatically need to be aligned manually. Missed alignments at this stage can occur due to: * **Score mistakes**. These reflect mistakes in the score (e.g., a missing note, incorrect pitch, octave, missing modifier, missing repetition or ending markings) and require a manual correction of the score file. * **Differing score versions**. For certain sonata movements, the notated score provides an alternative score version reflecting the first edition ("Erstdruck") for certain segments of a piece, expressing the composer's impromptu ornamentation. 
7 For the current dataset, such ornamented versions exist in K.284iii, K.332ii, K.457iii. Footnote 7: [https://www.nelle.de/en/music-column/mozar-piano-sonatas/](https://www.nelle.de/en/music-column/mozar-piano-sonatas/) * **Double-voiced score notes**. These occur frequently in notated music, and describe a score note that is notated doubly in two different voices but corresponds to one performed note. * **Grace notes**. Grace notes in notated music can occur in multiple forms to reflect different types of ornaments such as trills, acciaccature, mordents, turns etc. Depending on the ornament type and the underlying score encoding format, this may result in several notes occurring at the same (notated) onset (and hence with zero duration) to ensure a regular measure according to the time signature of that piece. Without onset and duration information, these notes must then be manually aligned to their corresponding performed notes.

Figure 1: Visual illustration of the alignment process. Each step in the alignment process is numbered according to the textual description in Section 3.3. Steps marked * indicate manual correction / post-processing. Elements highlighted in green are combined in the new alignment match files.

* **Cadenza and _ad libitum_ measures.** Both cadenza measures and those marked _ad libitum_ correspond to irregular measures, that is, measures that contain more beats than indicated in the time signature (see Fig. 2). Digitally encoded, the notes in such measures are commonly notated without duration to allow for error-free parsing, and thus share the same beat onset and need to be aligned manually. 5. **Update score-performance alignment.** Here we update the score note IDs in the old alignment (align_o) according to the score-score alignment (align_s) to create new score-performance alignments, align_n. For each alignment in align_o, we then need to ensure the validity of the original alignment type (match, insertion or deletion). In particular, for notes in the original score note array (sna_o) that could not be aligned to notes in the MusicXML-based score note array (sna_s), we consider two cases: * If the note in sna_o corresponds to type 'match' in align_o, the alignment type for the formerly matched performance note is changed accordingly into an insertion. * If the note in sna_o corresponds to type 'deletion' in align_o (i.e., a score note that was not performed), it is discarded in align_n. Notes in sna_s that could not be aligned with notes in sna_o, on the other hand, are recorded as type 'deletion' in align_n. 6. **Create match files.** Using the updated performance-to-score alignment align_n, we create new match files, and manually add attributional information (e.g., 'diff_score_version', 'voice_overlap') to score notes to reflect edge cases described in step 4. ## 4 Dataset Overview The _Batik-plays-Mozart_ dataset contains performances by pianist Roland Batik of twelve Mozart sonatas (see Table 2 for the list of sonatas), corresponding to approx. 102,400 played notes and 223 minutes of music, for which the performances are provided in MIDI, musical scores in MusicXML, and the alignment in match file format. Approximately 98,300 (95.36%) of all performed notes are aligned with a corresponding score note, the remaining 4,100 (4.44%) represent insertions (reflecting mostly ornaments). Roughly 200 score notes have been omitted in the performances.
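Given the tuple encoding of the match alignment format above (a pair whose missing element marks a deletion or an insertion), such counts reduce to a simple tally. The sketch below is purely illustrative, not code from the dataset's repository, and it makes no claim about the exact denominators used for the percentages reported in Table 2.

```python
def alignment_counts(alignment):
    """alignment: list of (score_id, perf_id) tuples; None in either slot
    encodes a deletion (unperformed score note) or an insertion (extra
    performed note), as defined for the match alignment format."""
    matches = sum(1 for s, p in alignment if s is not None and p is not None)
    insertions = sum(1 for s, p in alignment if s is None)
    deletions = sum(1 for s, p in alignment if p is None)
    performed = matches + insertions
    return {"match": matches, "insertion": insertions, "deletion": deletions,
            "matched_fraction_of_performed": matches / performed if performed else 0.0}
```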
For each performance, we also provide the performance note arrays, which capture each played note with its note ID along with onset and duration information in seconds and MIDI ticks, as well as velocity and pitch information. Likewise, the dataset includes the score note array (unfolded according to the repeats as played by the pianist and reflected in the alignment), which captures each score note with its (MusicXML) note ID (including repeat suffices, where applicable), onset and duration information in terms of beats (reflecting the time signature), and quarter notes (reflecting a "normalized" score time unit), and pitch and voice information. We link our aligned score note arrays to the musicological annotations in [1] via their temporal position in the following way: In the second version 8 of the dataset, each annotation label for harmonies, cadences, and phrases is unequivocally referenced to a temporal score position represented in terms of quarterbeats and measure number, where the first expresses the distance of the label from the beginning of the piece in quarter note units. We leverage these two temporal parameters to link each note-aligned score note array by first reducing it to its shortest form (without any unfolded repeats), aligning it temporally with the musicological annotations, and eventually unfolding it according to the performed repetition structure. Footnote 8: [https://github.com/DCMLab/mozart_piano_sonatas](https://github.com/DCMLab/mozart_piano_sonatas) ## 5 Dataset Demonstrations This section presents two simple examples of the kinds of studies that are made possible by our dataset. The first is motivated by a directly related study in the Annotated Mozart Sonata corpus paper [1]; the second shows how precise performance alignments permit more detailed investigations relating to cadences and their performance. ### Global tempo and harmonic density In a first study, we replicate the second experiment in Hentschel et al. [1], aimed at investigating the relationship Figure 2: An example of a cadenza within a piano sonata starting in measure 198 in KV333, 3rd movement. between tempo and harmonic change rate. The basic question asked in [1] was whether the rate at which the harmony changes in a piece is correlated with the piece's typical performance tempo. Their study involved determining the average (median) performance duration of each sonata movement from 6 complete commercial sonata recordings, and correlating harmonic label density (rate of harmonic labels in their annotations, per performance time unit) with average overall performance tempo (number of quarter notes per performance time unit). We repeat the same experiment with our pianist's performances and our alignment files instead of 6 pianists' audio recordings. We apply the same procedure as in [1], unfolding the score according to the repeat structure of the piece in order to calculate the actual piece length (in terms of quarter notes). The only difference is that we do this according to the repeats actually performed by the pianist (which are expressed in our match files, thus omitting the need for a dedicated "unfolding" step), whereas [1] seem to have assumed that all repeats were played by all pianists. Comparing our results (Fig. 3) to Fig. 
10 in [1], we see a similar general trend, in the form of a roughly linear increase in harmonic label density with performed tempo (slope =.43, r=.75, compared to.48 and.80, respectively, in [1]).9 However, we also immediately see a marked difference in the performance tempo distribution: in [1], Fig. 10, there is a relatively large cloud of points (sonata movements) with conspicuously high tempos of 180-200 (quarters per minute), which does not appear in our plot, and which we believe may point to a systematic problem in their way of estimating playing tempo: assuming that all notated repeats are played out by the performers leads them to overestimate the tempo in all cases where some or a majority skipped some repeats. 10 Footnote 9: Note that we have a somewhat smaller set of points, because we only have 12 of the 18 sonatas in our dataset. Footnote 10: Of course, the authors explicitly acknowledge the problem: “Also, some of the initial assumptions might have to be revisited. For example, the extreme outlier suggesting a tempo of 239 quarter notes per minute is due to the fact that for this particular piece – the first movement of K. 533/494 – there seems to be a convention among pianists to repeat the first part of the piece, but not the second (as the score would suggest), which of course reduces the performance duration.” [1] (p.76), but a comparison with our distribution implies it might be more severe than expected. We thus see an immediate advantage of our more precise performance-aligned corpus: the match files naturally give correct tempo and score duration information, being based as they are on score-performance alignments that reflect the actual repeat structure played by our performer. Still, we can say that our results support and confirm the overall hypotheses proposed there, showing a more or less linear relationship between harmonic label density and global performance tempo. ### Performance of different cadence types Our data permits much more detailed investigations into relationships between structural aspects of a piece, and how these are translated into performance decisions by a pianist. As a simple example, we investigate variations in local tempo before various types of cadences. Specifically, we compare the local tempo prior to a cadence annotation across different tempo classes for authentic (perfect and imperfect, i.e., PAC and IAC) and half cadences (HC), and differentiate between the cases when a cadence falls on either a downbeat or a weak beat. 
The hypothesis to be tested \begin{table} \begin{tabular}{|l r r|r r|r r|r r|} \hline Sonata & \multicolumn{1}{c|}{Performed Notes} & \multicolumn{1}{c|}{Duration (min)} & \multicolumn{1}{c|}{Match Notes} & \multicolumn{1}{c|}{\%} & \multicolumn{1}{c|}{Insertion Notes} & \multicolumn{1}{c|}{\%} & \multicolumn{1}{c|}{Deletion Notes} & \multicolumn{1}{c|}{\%} \\ \hline KV279 & \(7,789\) & \(16.21\) & \(7,385\) & \(94.087\) & \(404\) & \(5.780\) & \(11\) & \(0.130\) \\ KV280 & \(6,277\) & \(14.69\) & \(6,070\) & \(95.793\) & \(207\) & \(3.983\) & \(13\) & \(0.223\) \\ KV281 & \(7,030\) & \(14.43\) & \(6,396\) & \(90.450\) & \(634\) & \(9.393\) & \(11\) & \(0.160\) \\ KV282 & \(5,761\) & \(14.77\) & \(5,552\) & \(96.197\) & \(209\) & \(3.467\) & \(20\) & \(0.337\) \\ KV283 & \(8,231\) & \(17.39\) & \(7,915\) & \(95.657\) & \(316\) & \(4.233\) & \(9\) & \(0.107\) \\ KV284 & \(13,386\) & \(25.92\) & \(12,691\) & \(93.763\) & \(695\) & \(6.033\) & \(27\) & \(0.203\) \\ KV330 & \(7,869\) & \(18.47\) & \(7,589\) & \(96.857\) & \(280\) & \(3.047\) & \(7\) & \(0.100\) \\ KV331 & \(11,760\) & \(22.64\) & \(11,595\) & \(98.283\) & \(165\) & \(1.370\) & \(45\) & \(0.347\) \\ KV332 & \(9,013\) & \(17.84\) & \(8,660\) & \(93.417\) & \(353\) & \(6.210\) & \(24\) & \(0.373\) \\ KV333 & \(9,137\) & \(20.40\) & \(8,827\) & \(96.723\) & \(310\) & \(3.120\) & \(16\) & \(0.157\) \\ KV457 & \(7,290\) & \(18.24\) & \(7,022\) & \(96.043\) & \(268\) & \(3.843\) & \(9\) & \(0.110\) \\ KV533 & \(8,878\) & \(22.12\) & \(8,616\) & \(97.027\) & \(262\) & \(2.837\) & \(15\) & \(0.137\) \\ \hline Total & \(102,421\) & \(223.12\) & \(98,318\) & \(95.358\) & \(4,103\) & \(4.443\) & \(207\) & \(0.199\) \\ \hline \end{tabular} \end{table} Table 2: List of sonatas in the _Batik-plays-Mozart_ dataset. The bottom row represents the sum in all columns except for those expressing percentages, for which the mean is shown. Figure 3: Correlation between global tempo (as measured in quarter notes per minute) and harmony label density here is that a performer will tend to shape cadences differently, in terms of tempo, depending on their type and degree of 'finality'. To compute the local tempo curves, we consider a uniform window spanning one quarter note each preceding and following a cadence label. 11 For each score-notet-aligned performed note in that window, we define the local tempo via the _beat period (BP)_, which we calculate as the ratio of the inter-onset-interval (IOI) between the current _performed_ onset and the subsequent one, and the IOI between the current _notated_ onset and subsequent one. We exclude grace notes and their corresponding performed notes from this calculation in order to remove outliers. Footnote 11: In the _Annotated Mozart Sonatas Corpus_[1], cadence labels are placed at the onset of the final target harmony (e.g., I/i for authentic cadences). Next, we perform time-wise interpolation on these tempo curves to obtain beat period values at eighth note intervals within the window. Given that we are most interested in the local timing strategy immediately before a label (that is, an eighth note before the label position), we discard those curves where that particular time point is interpolated. Following this procedure, we obtain a total of 3,540 local tempo values (corresponding to 708 curves), of which 251 (7.09%) values are interpolated. Figure 4 shows the mean of local tempo curves across different tempo classes, for cadence labels annotated on a downbeat (left) and on a weak beat (right), respectively. 
For both authentic and half cadence types, the differences in local tempo diminish with increasing global tempo for both downbeat and weak beat cadences. Likewise, the tempo profiles tend to flatten out with increasing global tempo, suggesting that the pianist takes more liberty, in terms of expressive timing, in slow pieces. For this reason, we focus our analysis on the _adagio_ tempo class, the slowest tempo (the solid line plots in Fig. 4). The influence of the beat level on the local tempo for half cadences seems to be negligible, with the local beat period decreasing slightly prior to the cadence (causing an increase in local tempo, i.e. an _accelerando_), regardless of whether it falls on a downbeat or weak beat. For authentic cadences, we can see a substantial difference in expressive tempo depending on whether or not the label falls on a downbeat: for authentic cadences falling on a downbeat, the mean tempo curve for the _adagio_ tempo class corresponds mostly to what one would expect (i.e., a very clear _ritardando_ in preparation of the cadence) based on the underlying harmonies and their notion of tension and release. Interestingly, this _ritard_ seems to continue somewhat after the resolution into the tonic, suggesting a lengthening of the tonic arrival. For weak-beat authentic cadences, a similar significant preparation or anticipation is largely missing. ## 6 Conclusion and Future Work We have presented _Batik-plays-Mozart_, a piano performance dataset linking professional Mozart piano sonata performances to expert-labelled musical scores, at the level of notes. The resulting dataset is the first of its kind to combine professional performances in precise, recorded MIDI with curated musical scores and expert musicological and structural annotations [1] at this level of detail. We presented two preliminary experiments, intended to demonstrate the benefits of having such precise, note-aligned performance-score-structure annotation data for studying expressive features and their relation to the underlying musical structure. Our plan for future work includes the transcription of the remaining six sonatas of the Mozart piano sonatas corpus from audio recordings by the same pianist, and their subsequent alignment to the musical scores using state-of-the-art transcription and alignment models. By doing so, we hope to advance our understanding of the differences between transcribed and recorded MIDI, and to evaluate the potential benefits of incorporating an alignment step to improve the quality of transcription. Figure 4: Comparison of local timing strategies one quarter note before and after authentic and half cadence labels, over different tempo classes (in increasing tempo from top to bottom), for cadences falling on a downbeat (left) or weak beat (right). Colour identifies cadence type, line style notated tempo class. ## 7 Acknowledgments We wish to express our gratitude to pianist Roland Batik for his gracious permission to publish the detailed measurements of his performances. We also want to thank the authors of the _Annotated Mozart Sonatas Corpus_ for their tremendous efforts, and for permitting us to link our data to theirs. This work receives funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 101019375 (_Whither Music?_). The LIT AI Lab is supported by the Federal State of Upper Austria.
2304.13268
Cross-beam energy transfer in conditions relevant to direct-drive implosions on OMEGA
In cross-beam energy transfer (CBET), the interference of two laser beams ponderomotively drives an ion-acoustic wave that coherently scatters light from one beam into the other. This redirection of laser beam energy can severely inhibit the performance of direct-drive inertial confinement fusion (ICF) implosions. To assess the role of nonlinear and kinetic processes in direct-drive-relevant CBET, the energy transfer between two laser beams in the plasma conditions of an ICF implosion at the OMEGA laser facility was modeled using particle-in-cell simulations. For typical laser beam intensities, the simulations are in excellent agreement with linear kinetic theory, indicating that nonlinear processes do not play a role in direct-drive implosions. At higher intensities, CBET can be modified by pump depletion, backward stimulated Raman scattering, or ion trapping, depending on the plasma density.
K. L. Nguyen, L. Yin, B. J. Albright, D. H. Edgell, R. K. Follett, D. Turnbull, D. H. Froula, J. P. Palastro
2023-04-26T03:41:47Z
http://arxiv.org/abs/2304.13268v1
# Cross-beam energy transfer in conditions relevant to direct-drive implosions on OMEGA ###### Abstract In cross-beam energy transfer (CBET), the interference of two laser beams ponderomotively drives an ion-acoustic wave that coherently scatters light from one beam into the other. This redirection of laser beam energy can severely inhibit the performance of direct-drive inertial confinement fusion (ICF) implosions. To assess the role of nonlinear and kinetic processes in direct-drive-relevant CBET, the energy transfer between two laser beams in the plasma conditions of an ICF implosion at the OMEGA laser facility was modeled using particle-in-cell simulations. For typical laser beam intensities, the simulations are in excellent agreement with linear kinetic theory, indicating that nonlinear processes do not play a role in direct-drive implosions. At higher intensities, CBET can be modified by pump depletion, backward stimulated Raman scattering, or ion trapping, depending on the plasma density. + Footnote †: preprint: AIP/123-QED Introduction In direct-drive inertial confinement fusion (ICF), an ensemble of laser beams directly illuminate a spherical target containing deuterium-tritium fuel.[1] Initially, the laser beams ionize and heat the target surface, forming an ablation front surrounded by a lower-density plasma corona. Thereafter, the laser beam energy is absorbed in the plasma corona and conducted to the ablation front by electron thermal transport. Continued ablation of the target accelerates the fuel inward until it converges at the target center. If the converged fuel has sufficient thermal energy or can be confined long enough, it will ignite and undergo thermonuclear burn. Compared to other ICF schemes, direct-drive has the potential to couple a larger fraction of the laser energy to the target, but like other schemes, it is susceptible to laser-plasma instabilities that can inhibit implosion performance. Among these instabilities, cross-beam energy transfer (CBET) has been identified as the leading cause of decreased energy coupling in direct-drive implosions.[2; 3; 4; 5] In direct-drive-relevant CBET, two frequency-degenerate laser beams ponderomotively excite an ion acoustic wave that scatters energy from one beam into the other. The outward flow of the plasma corona enhances the excitation by shifting the ion acoustic wave frequency into resonance. 
For a typical implosion at the OMEGA laser facility, CBET transfers energy from the center of an incoming beam to the periphery of other beams, which reduces laser absorption by 10% to 20%.[6; 7] The redirection of laser light also degrades absorption uniformity, which can contribute significantly to implosion asymmetries.[8; 9] The impact of reduced absorption and symmetry on implosion performance has motivated the development of reduced CBET models that are suitable for implementation into radiation hydrodynamics simulations.[10; 11; 12; 13] To maintain a computational cost that is commensurate with radiation hydrodynamics codes, the models combine ray tracing with steady-state coupled-mode equations instead of using more fundamental wave or particle-based approaches.[14; 15] As a result, these models neglect a number of nonlinear processes that may affect, or even saturate, CBET, including ion trapping, two-ion decay, nonlinear sound waves, and stochastic heating.[16; 17; 18; 19; 20; 21; 22; 23; 24; 25] While several experiments have observed CBET saturation,[26; 27; 28; 29] recent experiments on the OMEGA laser were the first to identify a saturation mechanism.[28] Specifically, the OMEGA experiments showed that CBET can saturate through a resonance detuning caused by trapping-induced modifications to the ion distribution functions[24; 29; 28]. The experiments employed high-intensity laser beams (\(\sim\)10\({}^{15}\) W/cm\({}^{2}\)) crossing in a low-density gas-jet plasma (\(n_{e}\leq\) 1.2% \(n_{c}\), where \(n_{e}\) is the electron density, \(n_{c}=\epsilon_{0}m_{e}\omega^{2}/e^{2}\) is the critical density, and \(\omega\) is the laser frequency) with electron and ion temperatures of only a few hundreds of eV. In contrast, CBET in direct-drive ICF occurs between lower intensity laser beams (\(\sim\)10\({}^{14}\) W/cm\({}^{2}\)) in a much hotter and denser plasma. This raises the question: what, if any, saturation mechanisms occur in direct-drive-relevant conditions? This manuscript presents the results of particle-in-cell (PIC) simulations which indicate that nonlinear or saturation processes do not play a role in direct-drive-relevant CBET. The simulations modeled the energy transfer between two laser beams using plasma conditions and intensities extracted from a radiation hydrodynamics simulation of an OMEGA implosion. For direct-drive-relevant intensities, the PIC simulations are in excellent agreement with the linear, steady-state, kinetic theory of CBET. For the same plasma conditions but higher intensities, different nonlinear processes are observed to affect CBET depending on the plasma density: At \(n_{e}/n_{c}=30\%\), CBET can saturate due to pump depletion; at \(n_{e}/n_{c}=20\%\), the seed beam can become unstable to backward stimulated Raman scattering, which reduces the apparent energy transfer; and at \(n_{e}/n_{c}=10\%\), ion trapping can enhance CBET by reducing Landau damping of the driven ion acoustic wave. The remainder of this manuscript is organized as follows: Section II describes a commonly used reduced model for CBET in direct-drive. Section III presents the setup for the PIC simulations. Section IV demonstrates the agreement between CBET and linear theory for direct-drive-relevant intensities. Section V discusses the nonlinear processes that can affect CBET at high intensities. Section VI concludes the manuscript with a summary of the results.
## II Reduced model for CBET in direct drive In a direct-drive implosion, CBET occurs between beams with the same frequency and tends to redistribute energy from the central portion of an incoming laser beam to the periphery of another laser beam. Figure 1(a) illustrates the intersection of rays within different beams in the expanding plasma surrounding the target. The incident ray (purple) travels towards the target and intersects rays at the edges of other beams (blue, green, and red). The ponderomotive force of each crossing pair drives an ion acoustic wave with a frequency (\(\Omega\)) and wavevector (\(\mathbf{k}\)) determined by the conditions: \(\Omega=\omega_{0}-\omega_{1}\) and \(\mathbf{k}=\mathbf{k}_{0}-\mathbf{k}_{1}\), where the subscripts 0 and 1 denote two crossing rays. The density perturbation of the ion acoustic wave acts like a grating, coherently scattering light from one ray into the other. A commonly used reduced model for CBET approximates rays within each beam as plane waves and assumes a linear, steady-state plasma response. In this model, the intensities of two intersecting rays evolve according to \[(\mathbf{v}_{a}\cdot\nabla)I_{a}=g_{ab}I_{b}I_{a} \tag{1}\] where a \(\neq\) b can take the values 0 or 1, \(I_{a}\) is the intensity, \(\mathbf{v}_{a}=(1-\omega_{p}^{2}/\omega_{a}^{2})^{1/2}c\mathbf{k}_{a}/k_{a}\) is the group velocity, \(\omega_{p}=(e^{2}n_{e}/\varepsilon_{0}m_{e})^{1/2}\) is the plasma frequency, and \[g_{ab}=-\frac{e^{2}k^{2}\eta_{ab}}{2\varepsilon_{0}m_{e}^{2}\nu_{b}\omega_{b}^ {2}\omega_{a}}\mathrm{Im}\left[\Gamma(\Omega,\mathbf{k})\right] \tag{2}\] with \(\eta_{01}=-\eta_{10}=1\). The kinetic coupling factor, \[\Gamma(\Omega,\mathbf{k})=\frac{\chi_{e}(\Omega,\mathbf{k})[1+\chi_{i}(\Omega,\mathbf{k})]}{1+\chi_{i}(\Omega,\mathbf{k})+\chi_{e}(\Omega,\mathbf{k})}, \tag{3}\] captures the response of the plasma to the ponderomotive drive. Here \(\chi_{e}\) and \(\chi_{i}\) are the electron and ion susceptibilities: \[\chi_{e}(\Omega,\mathbf{k})=\frac{e^{2}}{\varepsilon_{0}m_{e}k^{2}}\int\frac {\mathbf{k}\cdot\nabla_{\mathbf{v}}f_{e}}{\Omega-\mathbf{k}\cdot\mathbf{u}_{ f}-\mathbf{k}\cdot\mathbf{v}}d\mathbf{v} \tag{4}\] Figure 1: (a) A schematic of CBET in a typical direct drive implosion. An incoming ray (purple), anti-parallel to the hydrodynamic flow, interacts with rays from the edges of other beams (blue, green, and red). (b) Total electromagnetic field amplitudes of the pump and seed, illustrating the simulated interaction geometries at 30%, 20%, and 10% \(n_{c}\). \[\chi_{i}(\Omega,\mathbf{k})=\sum_{s}\frac{Z_{s}^{2}e^{2}}{\epsilon_{0}m_{s}k^{2}} \int\frac{\mathbf{k}\cdot\nabla_{\mathbf{v}}f_{s}}{\Omega-\mathbf{k}\cdot \mathbf{u}_{f}-\mathbf{k}\cdot\mathbf{v}}d\mathbf{v}, \tag{5}\] \(f_{e}\) and \(f_{s}\) are the electron and ion distribution functions in the stationary frame of the plasma, \(Z_{s}\) is the ion charge, \(m_{s}\) is the ion mass, and \(\mathbf{u}_{f}\) is the hydrodynamic flow velocity. The maximum rate of energy transfer between two rays occurs when the ion acoustic wave is driven on resonance, i.e., when \[\Omega=\Omega_{A}+\mathbf{k}\cdot\mathbf{u}_{f}, \tag{6}\] where \(\Omega_{A}\) is the real, natural mode frequency of the ion acoustic wave in the absence of flow. The resonant drive frequency minimizes the absolute value of the dielectric constant \(|\epsilon(\Omega,\mathbf{k})|=|1+\chi_{e}(\Omega,\mathbf{k})+\chi_{i}(\Omega,\mathbf{k})|\), which maximizes \(\text{Im}[\Gamma(\Omega,\mathbf{k})]\) and \(g_{ab}\). 
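As a rough illustration of how Eqs. 3-5 can be evaluated in practice for Maxwellian distribution functions, the sketch below writes each susceptibility in terms of the plasma dispersion function and assembles the kinetic coupling factor of Eq. 3. It is a minimal sketch, not the authors' code: it works in the local rest frame of the plasma (so the flow would enter only as a Doppler shift of the drive frequency), and the densities, temperatures, and the wavenumber in the usage lines are assumed, approximate values for a 50/50 CH corona.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Zfun(zeta):
    """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def chi_maxwellian(Omega, k, n, T_eV, m, Zq):
    """Susceptibility of one Maxwellian species, Eqs. 4-5 in the plasma rest frame."""
    eps0, e = 8.854e-12, 1.602e-19
    vth = np.sqrt(T_eV * e / m)                   # thermal speed sqrt(k_B T / m)
    lamD2 = eps0 * T_eV * e / (n * (Zq * e)**2)   # species Debye length squared
    zeta = Omega / (np.sqrt(2.0) * k * vth)
    return (1.0 + zeta * Zfun(zeta)) / (k**2 * lamD2)

def coupling_factor(Omega, k, electrons, ions):
    """Kinetic coupling factor Gamma(Omega, k) of Eq. 3."""
    chi_e = chi_maxwellian(Omega, k, *electrons)
    chi_i = sum(chi_maxwellian(Omega, k, *s) for s in ions)
    return chi_e * (1.0 + chi_i) / (1.0 + chi_e + chi_i)

# Assumed, approximate 30% n_c conditions: Te = 2.5 keV, Ti = 1.2 keV, 50/50 CH
me, mp = 9.109e-31, 1.673e-27
ne = 0.3 * 9.05e27                 # ~30% of the 351-nm critical density (m^-3)
nH = nC = ne / 7.0                 # quasineutrality: 1*nH + 6*nC = ne
electrons = (ne, 2500.0, me, 1)
ions = [(nH, 1200.0, mp, 1), (nC, 1200.0, 12.0 * mp, 6)]

k = 2.45e7                         # assumed |k0 - k1| for a ~110-degree crossing (1/m)
Omegas = np.linspace(0.2, 2.0, 400) * 1e13
imG = np.array([coupling_factor(W, k, electrons, ions).imag for W in Omegas])
print(f"peak Im[Gamma] ~ {imG.max():.2f} at Omega ~ {Omegas[imG.argmax()]:.2e} rad/s")
```

The scan over the drive frequency simply locates the ion acoustic resonance where \(|1+\chi_{e}+\chi_{i}|\) is minimized and \(\mathrm{Im}[\Gamma]\) peaks, mirroring the resonance argument above.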
The flow velocity shifts the frequency of the ion acoustic wave, allowing for resonant excitation even when each beam (or ray) has the same frequency. Specifically, when \(\omega_{0}-\omega_{1}=\Omega=0\), the resonance condition can be satisfied if \[\mathbf{k}\cdot\mathbf{u}_{f}=-\Omega_{A}. \tag{7}\] As depicted in Fig. 1(a), the flow velocity is directed radially outward. Thus, in order to satisfy Eq. 7, the wavevector of the ion acoustic wave must point towards the target. Equivalently, ray 0 must have a smaller local angle of incidence (i.e., be more anti-parallel to the flow) than ray 1: \(\mathbf{k}_{0}\cdot\mathbf{u}_{f}<\mathbf{k}_{1}\cdot\mathbf{u}_{f}\). To determine the direction of energy transfer, it is convenient to perform a Galilean transformation into the stationary frame of the plasma. In this frame, denoted by a prime \({}^{\prime}\), the frequencies of the rays are given by \(\omega_{a}^{\prime}=\omega_{a}-\mathbf{k}_{a}\cdot\mathbf{u}_{f}\). Thus, ray 0 is blue shifted relative to ray 1, such that \(\omega_{0}^{\prime}=\omega_{1}^{\prime}+\Omega_{A}\). This indicates that ray 0 transfers energy to ray 1 and explains the redistribution of energy from the center of incoming beams, where rays have smaller angles of incidence, to the periphery of other beams, where rays have larger angles of incidence. To indicate the direction of energy transfer, 0 will denote the pump and 1 the seed for the remainder of the manuscript. In all of the calculations presented here, the pump will have the minimum (zero) angle of incidence and propagate anti-parallel to the flow [see Fig. 1(b) for the geometries]. Upon using \(\mathbf{k}_{0}\cdot\mathbf{u}_{f}=-k_{0}u_{f}\) in Eq. 7, one can rewrite the resonance condition for frequency-degenerate beams as \[\mathrm{v}_{p}^{\prime}=u_{f}\mathrm{sin}(\theta/2), \tag{8}\] where \(\mathrm{v}^{\prime}_{p}=\Omega_{A}/k\) is magnitude of the ion acoustic wave phase velocity in the stationary frame of the plasma and \(\theta\) is the crossing angle between the pump and the seed. Equation 8 demonstrates that resonance is impossible when \(u_{f}<\mathrm{v}^{\prime}_{p}\), regardless of crossing angle. When the pump and seed propagate in the same direction (\(\theta=0\)), Eq.1 has the analytical solution [30] \[I_{1}^{\mathrm{out}}=\frac{(1+\beta)\mathrm{exp}[G_{0}(1+\beta)]}{1+\beta \mathrm{exp}[G_{0}(1+\beta)]}I_{1}^{\mathrm{in}}, \tag{9}\] where the superscripts "in" and "out" denote the input and output values of intensity and \(\beta=I_{1}^{\mathrm{in}}/I_{0}^{\mathrm{in}}\). The small signal gain \(G_{0}\) is given by \[G_{0}=k_{0}L\mathrm{sin}^{2}(\theta/2)\mathrm{Im}[\Gamma(\Omega,\mathbf{k})]| \mathbf{a}_{0}^{\mathrm{in}}|^{2}, \tag{10}\] where \(L\) is the interaction length and \(|\mathbf{a}_{0}^{\mathrm{in}}|=(e/m_{e}c\omega_{0})(2I_{0}^{\mathrm{in}}/ \epsilon_{0}\mathrm{v}_{0})^{1/2}\) is the normalized amplitude of the pump. Using Eq. 9, one can calculate the gain including the effect of pump depletion: \[G_{D}=\mathrm{ln}(I_{1}^{\mathrm{out}}/I_{1}^{\mathrm{in}}). \tag{11}\] In the limit of small \(I_{1}^{\mathrm{in}}\), \(G_{D}\to G_{0}\), reproducing the familiar small signal gain result, \(I_{1}^{\mathrm{out}}=I_{1}^{\mathrm{in}}e^{G_{0}}\). Oblique intersection of the pump and the seed can be accounted for by replacing the interaction length in Eqs. 9 and 10 by \(L=d/\sin\theta\), where \(d\) is the width of each beam. 
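A minimal numerical illustration of Eqs. 9-11 (not the authors' implementation; the small-signal gain value used here is an arbitrary assumption): given \(G_{0}\), it returns the transmitted seed intensity and the gain with pump depletion \(G_{D}\), which approaches \(G_{0}\) as the seed-to-pump intensity ratio vanishes.

```python
import numpy as np

def seed_intensity_out(I1_in, I0_in, G0):
    """Transmitted seed intensity from the analytic solution of Eq. 9."""
    beta = I1_in / I0_in
    x = np.exp(G0 * (1.0 + beta))
    return I1_in * (1.0 + beta) * x / (1.0 + beta * x)

def gain_with_depletion(I1_in, I0_in, G0):
    """G_D = ln(I1_out / I1_in), Eq. 11; reduces to G0 for a weak seed."""
    return np.log(seed_intensity_out(I1_in, I0_in, G0) / I1_in)

G0 = 0.8   # assumed small-signal gain, e.g. from Eq. 10
for ratio in (1e-3, 0.05, 0.2, 1.0):          # seed-to-pump intensity ratio
    print(f"I1/I0 = {ratio:5.3f} -> G_D = {gain_with_depletion(ratio, 1.0, G0):.3f}")
```

In such a calculation, \(G_{0}\) itself would be evaluated from Eq. 10 with the oblique-crossing interaction length \(L=d/\sin\theta\) mentioned above.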
This approximation was compared to more sophisticated 2D models [31; 32] of planar-like beams and was found to be in excellent agreement. The approximation also works well for speckled beams when the average intensities are used for \(I_{0}^{\mathrm{in}}\) and \(I_{1}^{\mathrm{in}}\), so long as the gain is moderate (\(G_{0}\lesssim 1\)) and accumulates over multiple speckle widths, which is typically the case in direct-drive CBET.[33] ## III Simulation Setup The reduced model described above neglects several effects that can modify CBET, including non-steady-state, nonlinear, and collisional processes. In principle, one would like the simplest possible model of CBET that describes experimental observations. In practice, integrated experiments, such as direct-drive implosions, complicate the interpretation of measurements, because several competing processes can lead to similar features in the data. This makes it difficult to isolate and rule out effects. Alternatively, one can use a more sophisticated model to test the validity of the reduced model and the impact of the neglected effects. The approach here is to test the reduced CBET model against PIC simulations initialized with plasma conditions extracted from a radiation hydrodynamics simulation of an OMEGA implosion. The PIC simulations capture the non-steady-state, nonlinear, and collisional processes that are not included in the reduced model. As a result, the impact of these effects can be assessed by comparing the PIC results to the reduced model. The implosion of a warm CH target (shot 96299) was simulated using the 1D radiation hydrodynamics code LILAC.[34] Figure 2: (a) Density (blue, left) and flow (red, right) profiles from a LILAC simulation of a direct-drive implosion on OMEGA at peak power. (b) The resonant crossing angle between the pump and seed lasers (blue) maximizes the value of \(\text{Im}[\Gamma]\) (red), and thus the rate of intensity transfer, at each location. (c) The total energy transfer calculated using a 3D ray-based model of CBET, indicating that CBET is most active near 30% \(n_{c}\). The VPIC simulations were initialized with the plasma conditions at the location of the dots in (a) and (b). At peak power, the coronal plasma consists of electrons and two fully ionized ion species: approximately 50% each of hydrogen and carbon. The electron and ion temperatures, \(T_{e}=2.5\) keV and \(T_{i}=1.2\) keV, are nearly uniform. Figure 2(a) displays the electron density and flow profiles. Here the Mach number is defined as \(\mathrm{M}\equiv u_{f}/\mathrm{v}_{p}^{\prime}\), where \(\mathrm{v}_{p}^{\prime}\approx 3.6\times 10^{5}\,\mathrm{m/s}\) is calculated by solving for the root of \(\epsilon(\Omega,\mathbf{k})=0\) using Maxwellian electron and ion distribution functions. In accordance with the flow profile and Eq. 8, the crossing angle \(\theta\) required for resonant excitation of the ion acoustic wave increases towards the target [Fig. 2(b)]. At each location, the resonant angle maximizes the coupling factor \(\mathrm{Im}[\Gamma]\). The value of \(\mathrm{Im}[\Gamma]\) increases with density towards the target until the flow Mach number drops below \(\mathrm{M}=1\) [right axis of Figs. 2(a) and (b)]. Past this point, \(\mathrm{M}<1\) and resonant excitation is impossible. Figure 2(c) demonstrates that CBET is the most energetically significant near 30% \(n_{c}\), which is consistent with the maximum of \(\mathrm{Im}[\Gamma]\) in Fig. 2(b).
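Equation 8 can be inverted to give the resonant crossing angle directly from the local flow. The snippet below is a minimal illustration of that inversion; the flow speeds in the loop are made-up values spanning \(\mathrm{M}\approx 1\)-\(2\), not the LILAC profile.

```python
import numpy as np

def resonant_angle_deg(u_flow, v_phase=3.6e5):
    """Crossing angle (degrees) satisfying Eq. 8; no resonance for subsonic flow."""
    mach = u_flow / v_phase
    if mach < 1.0:
        return float("nan")          # Eq. 8 has no solution when u_f < v_p'
    return 2.0 * np.degrees(np.arcsin(1.0 / mach))

# Illustrative flow speeds only (assumed values), spanning M ~ 1-2
for u in (3.7e5, 4.4e5, 5.5e5, 7.2e5):
    print(f"u_f = {u:.1e} m/s -> theta_res = {resonant_angle_deg(u):.1f} deg")
```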
The total energy transfer at each location was calculated using a 3D ray-based, reduced model of CBET.[35] Note that CBET still occurs in the subsonic region (i.e., where \(\mathrm{M}<1\)), albeit non-resonantly. The energy transfer between crossing beams was simulated using the Vector Particle-in-Cell code[15] (VPIC). Three resonant crossing angles were considered: \(\theta=110^{\mathrm{o}}\) at \(n_{e}=30\%\,n_{c}\), \(\theta=80^{\mathrm{o}}\) at \(n_{e}=20\%\,n_{c}\), and \(\theta=60^{\mathrm{o}}\) at \(n_{e}=10\%\,n_{c}\) (see Figs. 1 and 2 for geometries and locations). In all cases, the beams were polarized out of the plane of the simulation, i.e., in the \(\hat{\mathbf{y}}\)-direction; had a width of \(d=20\,\mu\mathrm{m}\); and were composed of a random distribution of \(f/6.7\) speckles.[36] The beam width, while much smaller than that of an actual OMEGA beam (\(\sim 800\,\mu\mathrm{m}\)),[37] results in gains that are large enough to test the salient physics. The plasma conditions in the VPIC simulations were determined by the LILAC profiles. For each crossing angle, the density profile was modeled as a linear ramp increasing towards the target in the \(\hat{\mathbf{x}}\)-direction. The ramp was centered at the respective density for each crossing angle, \(n_{e}=30\%\), \(20\%\), and \(10\%\,n_{c}\), and had a scale length \(L_{n}\) equal to that of the LILAC profile at that density. The initial electron and ion temperatures, \(T_{e}=2.5\) keV and \(T_{i}=1.2\) keV, were uniform. The flow velocity at each density was emulated by redshifting the seed beam relative to the pump. Specifically, the pump beam had a vacuum wavelength \(\lambda_{0}=351\,\mathrm{nm}\) in all cases, while the vacuum wavelength of the seed beam was specified using \[\lambda_{1}=\lambda_{0}+\frac{\lambda_{0}^{2}}{2\pi c}ku_{f}\mathrm{sin}( \theta/2)=\lambda_{0}+\frac{\lambda_{0}^{2}}{2\pi c}\Omega_{A}. \tag{12}\] This is equivalent to simulating the interaction in the local rest frame of the plasma. The flow gradient was not included in the VPIC simulations (see Appendix A for further discussion). The physical parameters are summarized in Table 1. All of the VPIC simulations included ion-ion, electron-electron, and electron-ion collisions. The binary collision model implemented in VPIC recovers the Landau form for inter-particle collisions in weakly coupled plasmas.[38; 39] The domain of each simulation was \(60\,\mu\mathrm{m}\times 40\,\mu\mathrm{m}\) with cell sizes \(\Delta x\times\Delta z\approx 1.0\lambda_{D}\times 1.0\lambda_{D}\), where \(\lambda_{D}=(\epsilon_{0}k_{B}T_{e}/n_{e}e^{2})^{1/2}\) is the Debye length, and 512 particles per cell on average. Absorbing boundary conditions were used for the fields, and refluxing boundary conditions were used for the particles: Every particle that left the simulation domain was replaced by a particle injected with a velocity randomly sampled from a fixed-temperature Maxwellian distribution. ## IV Omega-relevant intensities For intensities relevant to a direct-drive implosion on OMEGA, the VPIC simulations are in excellent agreement with the linear, steady-state CBET theory. This is illustrated by Fig. 3, which compares \(G_{0}\) (Eq. 10) to the gain calculated from VPIC: \(G_{\mathrm{V}}=\ln(P_{1}^{\mathrm{out}}/P_{1}^{\mathrm{in}})\), where \(P_{1}^{\mathrm{in}}\) and \(P_{1}^{\mathrm{out}}\) are the input and output seed powers. The VPIC gain evolves through a transient stage[40] over the first \(\sim 5\) ps before approaching a near-steady state value. 
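As a quick consistency check on the setup, Eq. 12 evaluated at resonance (where \(ku_{f}\mathrm{sin}(\theta/2)=\Omega_{A}=k\mathrm{v}_{p}^{\prime}\)) approximately reproduces the \(\Delta\lambda\) values listed in Table 1. The sketch below is illustrative only; it uses the quoted \(\mathrm{v}_{p}^{\prime}\approx 3.6\times 10^{5}\) m/s, so small differences from the tabulated values are expected.

```python
import numpy as np

c = 2.998e8
lam0 = 351e-9                      # pump vacuum wavelength (m)
vp = 3.6e5                         # ion acoustic phase velocity quoted above (m/s)

def seed_shift_angstrom(ne_over_nc, theta_deg):
    """Delta-lambda of Eq. 12 at resonance, where k*u_f*sin(theta/2) = Omega_A = k*v_p'."""
    k0 = (2.0 * np.pi / lam0) * np.sqrt(1.0 - ne_over_nc)   # EM wavenumber in the plasma
    k = 2.0 * k0 * np.sin(np.radians(theta_deg) / 2.0)      # ion acoustic wavenumber
    return lam0**2 / (2.0 * np.pi * c) * k * vp * 1e10      # in Angstrom

for ne, th in [(0.10, 60.0), (0.20, 80.0), (0.30, 110.0)]:
    print(f"{ne:.0%} n_c, theta = {th:5.1f} deg -> dlambda ~ {seed_shift_angstrom(ne, th):.1f} A")
```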
The modest discrepancy between the VPIC gain and \(G_{0}\) results from the relatively small number of speckles in the pump and seed beams. The pump and seed beams at each density were initialized with average intensities determined by the 3D ray-based model of CBET[35] discussed above (see Table 2). Inverse bremsstrahlung heating and CBET lower the average intensity of the pump as it propagates from \(10\%\,n_{c}\) to \(30\%\,n_{c}\). Despite the lower pump intensity, the highest gain occurs at \(30\%\,n_{c}\) due to the density scaling of the coupling factor \(\mathrm{Im}[\Gamma]\) [Fig. 2(b) and Fig. 3]. The seed at \(30\%\,n_{c}\) corresponds to a ray that originates closer to the center of an OMEGA beam where the intensity is higher, while the seed at \(10\%\,n_{c}\) corresponds to a ray further into the periphery of an OMEGA beam where the intensity is lower. \begin{table} \begin{tabular}{l c c c} & 10\% \(n_{c}\) & 20\% \(n_{c}\) & 30\% \(n_{c}\) \\ \hline \(L_{n}\) (\(\mu\mathrm{m}\)) & 300 & 225 & 200 \\ \(\theta\) (degrees) & 60 & 80 & 110 \\ \(\Delta\lambda\) (Å) & 4.0 & 4.9 & 5.9 \\ \end{tabular} \end{table} Table 1: VPIC simulation parameters determined by a LILAC simulation of an OMEGA implosion. The wavelength shift of the seed \(\Delta\lambda=\lambda_{1}-\lambda_{0}\) emulates the local flow velocity in the corona. The electron and ion temperatures, \(T_{e}=2.5\) keV and \(T_{i}=1.2\) keV, were uniform in all cases. \begin{table} \begin{tabular}{l c c c} & \(10\%\,n_{c}\) & \(20\%\,n_{c}\) & \(30\%\,n_{c}\) \\ \hline \(I_{0}^{\text{in}}\,\left(\mathrm{W/cm^{2}}\right)\) & \(9.8\times 10^{13}\) & \(8.1\times 10^{13}\) & \(5.5\times 10^{13}\) \\ \(I_{1}^{\text{in}}\,\left(\mathrm{W/cm^{2}}\right)\) & \(2.6\times 10^{11}\) & \(2.5\times 10^{12}\) & \(1.0\times 10^{13}\) \\ \end{tabular} \end{table} Table 2: Average intensities of the pump and seed at each plasma density for the OMEGA-relevant simulations. Figure 3: Time evolution of the CBET gain for OMEGA-relevant intensities (Table 2). At each density, the gain evolves through a transient stage before reaching a near-steady-state value slightly above the small signal gain \(G_{0}\) (Eq. 10). ## V Nonlinear effects at higher intensities The intensities used in Sec. IV are typical of conventional "hot-spot" ignition designs for direct-drive ICF. When hydrodynamically scaling from OMEGA to larger, more ignition-relevant scales, the laser pulse power and target surface area increase in a proportion that keeps the incident intensity fixed. There are, however, a number of direct-drive concepts that propose using higher intensities, such as "fast" and "shock" ignition.[41; 42; 43] This section presents the results of VPIC simulations of CBET between higher intensity laser beams in OMEGA-relevant plasma conditions. For consistency with the low-intensity simulations, the plasma conditions and beam crossing angles were kept the same. The pump intensity was set to \(I_{0}^{\text{in}}=2\times 10^{14}\;\text{W}/\text{cm}^{2}\) in all cases, while the average seed intensity was set to \(I_{1}^{\text{in}}=1\) or \(2\times 10^{14}\;\text{W}/\text{cm}^{2}\). These values bracket the threshold for the observed nonlinear phenomena. At the higher seed intensity, the simulations show that different nonlinear processes can affect CBET, depending on the density. Specifically, pump depletion is observed at 30% \(n_{c}\), backward stimulated Raman scattering (BSRS) of the seed at 20% \(n_{c}\), and ion trapping at 10% \(n_{c}\).
These different processes occur despite the prediction of linear theory that the relative amplitude of the ion acoustic waves should be nearly the same: Calculating \[\frac{\delta n_{e}}{n_{e}}=\frac{c^{2}k^{2}}{2\omega_{p}^{2}}|\Gamma(\Omega, \mathbf{k})(\mathbf{a}_{0}^{\text{in}}\cdot\mathbf{a}_{1}^{\text{in}})| \tag{13}\] at each density provides \(\delta n_{e}/n_{e}\approx 0.32\%\) for \(I_{0}^{\text{in}}=I_{1}^{\text{in}}=2\times 10^{14}\;\text{W}/\text{cm}^{2}\). To reflect the energetic significance of CBET at each location [Fig. 2(c)], the simulations will be discussed in order of descending density (i.e., increasing target radius). ### Pump Depletion at 30% \(n_{c}\) Figure 4 displays the time evolution of the CBET gain at 30% \(n_{c}\). For both seed intensities, the gain asymptotes to a value that is consistent with linear theory including pump depletion \(G_{D}\) (Eqs. 9 and 11), but lower than the small signal gain \(G_{0}\). The predominance of pump depletion at 30% \(n_{c}\) is explained by three factors. Foremost, the density scaling of the kinetic coupling factor \(\text{Im}[\Gamma]\propto(n_{e}/n_{c})(1-n_{e}/n_{c})^{-1/2}\) increases the rate of energy transfer and pump depletion at higher densities. Second, the increased rate of ion-ion collisions at higher densities reduces the amount of time that ions stay trapped in an ion-acoustic wave, which inhibits trapping-induced modifications to the ion distribution function. Finally, SRS cannot occur at 30% \(n_{c}\), because frequency matching is impossible: \(\omega_{0}<2\omega_{p}\) (see Appendix B). Figure 4: Time evolution of the CBET gain for the high pump and seed intensities at 30% \(n_{c}\). Here, \(I_{0}^{\text{in}}=2\times 10^{14}\,\text{W}/\text{cm}^{2}\). Pump depletion causes the gain to asymptote to a value lower than the small signal gain \(G_{0}\). Consistent with Eq. 9, the reduction in gain is larger for the higher seed intensity. ### Raman Scattering of the Seed at 20% \(n_{c}\) At 20% \(n_{c}\), BSRS reduces the output power of the higher intensity seed. In an experiment, a reduced output power could be confused with a lower gain. Figure 5(a) shows the percent change in the seed power (\(P_{1}^{\text{out}}/P_{1}^{\text{in}}-1\)) as a function of time. The power of the lower intensity seed asymptotes to a value slightly less than that predicted by the small signal gain. The power of the high intensity seed undergoes an initial rise over the first \(\sim\)2 ps, followed by a rapid drop to a value below the input power. The inset in Fig. 5(a) displays the power spectrum of electromagnetic waves at the entrance boundary of the seed over the first 8 ps. At the higher seed intensity, there is a substantially elevated signal at \(\omega=\omega_{1}-\omega_{p}\), indicating resonant decay of the seed light into backscattered light and an electron plasma wave, i.e., BSRS. The backscattered power into frequencies around \(\omega_{s}\equiv\omega_{1}-\omega_{p}\) is consistent with the Manley-Rowe relations and inverse-Bremsstrahlung (IB) absorption. Integration of the power spectrum around \(\omega_{s}\) indicates that \(\sim\)15% of the input seed power (\(P_{1}^{\text{in}}\)) was backscattered through the entrance boundary. The total backscattered power, accounting for IB absorption, can be estimated as \(P_{s}\approx 0.15P_{1}^{\text{in}}\text{exp}(\kappa l)\), where \(\kappa=72.6\text{ cm}^{-1}\) is the IB power absorption coefficient modified for a multi-component plasma [36] and \(l=42\text{ }\mu\text{m}\) is the distance between the top and bottom boundaries of the simulation domain. From the Manley-Rowe relations, the power scattered into electron plasma waves is given by \(P_{e}=(\omega_{p}/\omega_{s})P_{s}\approx 0.18P_{1}^{\text{in}}\). Overall, the seed loses \(P_{s}+P_{e}\approx 0.38P_{1}^{\text{in}}\), which agrees with the difference between the linear prediction and VPIC result shown in Fig. 5(a). In this configuration, the seed beam propagates tangential to the density gradient and experiences a near-uniform plasma. This allows SRS to remain phase matched over an extended distance, which in combination with the high density, exacerbates the instability growth. The pump beam propagates parallel to the density gradient, which mitigates SRS by limiting the distance over which it can remain phase matched. As a result, the pump beam loses energy due to CBET but does not undergo SRS. Figure 5(b) demonstrates that the power lost from the pump agrees with the linear CBET theory including pump depletion when IB absorption is taken into account (\(\sim\)9% of the incident pump power). Figure 5: (a) Change in the transmitted seed power, \(P_{1}^{\text{out}}/P_{1}^{\text{in}}-1\), as a function of time for the high pump and seed intensities at 20% \(n_{c}\). When \(I_{1}^{\text{in}}\) = \(1\times 10^{14}\text{W}/\text{cm}^{2}\), the change is in reasonable agreement with the small signal gain result \(e^{G_{0}}-1\). When \(I_{1}^{\text{in}}\) = \(2\times 10^{14}\text{W}/\text{cm}^{2}\), the output seed power asymptotes to a value lower than the input seed power. The inset shows the power spectrum of electromagnetic waves at the entrance boundary of the seed and indicates substantial BSRS of the higher intensity seed. (b) Despite SRS of the higher intensity seed, the power lost from the pump is in good agreement with the linear theory including pump depletion and collisional absorption. ### Ion Trapping at 10% \(n_{c}\) At 10% \(n_{c}\), ion trapping in the driven ion acoustic wave reduces the Landau damping and enhances the gain of the higher intensity seed. As shown in Fig. 6(a), the gain of the lower intensity seed approaches a value only slightly above the small signal gain \(G_{0}\) due to the finite number of speckles. Figure 6: (a) Time evolution of the CBET gain for the high pump and seed intensities at 10% \(n_{c}\). When \(I_{1}^{\text{in}}=1\times 10^{14}\)\(\text{W}/\text{cm}^{2}\), the gain asymptotes to a value slightly higher than the small signal gain \(G_{0}\). When \(I_{1}^{\text{in}}=2\times 10^{14}\)\(\text{W}/\text{cm}^{2}\), the gain approaches a value \(\sim\)1.3\(\times\) larger than \(G_{0}\). (b) Trapping in the ion acoustic wave driven by the pump and higher intensity seed flattens the hydrogen and carbon distribution functions in the vicinity of the ion acoustic wave phase velocity. (c) The modified distribution functions at the higher seed intensity redshift the resonant ion acoustic wave frequency and reduce the Landau damping, which narrows and enhances the resonant peak. The enhancement in the peak is consistent with the elevated gain observed in (a). In (b), the distribution functions are averaged over the interaction region at 10 ps. The smooth and dashed lines are the 1D projection of the distribution functions onto the propagation direction of the ion acoustic wave \(\mathbf{k}\) at \(t=0\) and \(t=10\) ps, respectively. The gain of the higher intensity seed, on the other hand, approaches a value
nearly \(1.3\times\) larger than \(G_{0}\). The higher intensity seed drives a larger amplitude ion acoustic wave that traps and accelerates more ions. The trapped ions flatten the distribution functions in the vicinity of the ion acoustic wave phase velocity, \(\mathrm{v}_{p}^{\prime}=3.7\mathrm{v}_{\mathrm{TC}}=1.06\mathrm{v}_{\mathrm{TH}}\), where \(\mathrm{v}_{\mathrm{TC}}\) and \(\mathrm{v}_{\mathrm{TH}}\) are the carbon and hydrogen thermal velocities [Fig. 6(b)]. The flattening reduces the Landau damping and redshifts the resonant ion acoustic wave frequency.[16] Figure 6(c) illustrates the effects that the modified distribution functions have on the kinetic coupling factor and ion acoustic wave resonance. The ion acoustic wave driven by the lower intensity seed does not have sufficient amplitude to appreciably modify the distribution functions or the coupling factor. As a result, the resonance peak at \(t=10\) ps is identical to that at \(t=0\). The reduction in Landau damping at the higher seed intensity narrows the resonance width and strengthens the response, while the flattening of the distributions redshifts the resonance peak. The elevated response at the drive detuning, \(\Delta\lambda=4.0\) A, is consistent with the enhanced gain observed in Fig. 6(a). Despite this enhancement, the gain at \(10\%\)\(n_{c}\) is still much lower than that at \(30\%\)\(n_{c}\) due to the density scaling of \(\mathrm{Im}[\Gamma]\). Ion trapping has a larger impact on CBET in lower density and colder plasmas. As illustrated by Fig. 2(c), CBET is not energetically significant in the low density region of a direct-drive corona. This suggests that, even at higher intensities, trapping is unimportant. At higher densities, ion-ion collisions are more frequent.[44] These collisions can either detrap ions or prevent their trapping altogether, thereby inhibiting flattening of the distribution function. At higher temperatures, collisional thermalization of trapped ions produces a smaller relative increase in the ion temperature. SRS and pump depletion, on the other hand, can have a larger impact at higher densities where CBET is more energetically significant: Both the growth rate (see Appendix B) and kinetic coupling factor increase with density. ## VI Summary and Conclusions Comparisons of 2D collisional PIC simulations with a commonly used reduced, linear model indicate that nonlinear effects do not play a role in CBET for plasma conditions relevant to implosions at the OMEGA laser facility. The PIC simulations were initialized with plasma conditions extracted from a radiation hydrodynamics simulation of an OMEGA implosion. The beam intensities were initialized with average intensities calculated using a 3D ray-based model of CBET.[35] The CBET gain predicted by the linear model and the PIC simulations are in excellent agreement. For higher intensities, of more relevance to shock or fast ignition, several nonlinear effects were observed to modify CBET, depending on the density. At the highest density, \(n_{e}=30\%\,n_{c}\), the strong coupling between the crossing beams resulted in significant pump depletion and gains lower than the linear, small signal gain. At \(n_{e}=20\%\,n_{c}\), the seed beam was unstable to BSRS, which reduced the output seed power and apparent energy transfer. At the lowest density, \(n_{e}=10\%\,n_{c}\), ion trapping decreased the Landau damping and enhanced the gain well above the prediction of linear theory. ###### Acknowledgements. 
This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority. LANL work was performed under the auspices of the U.S. Department of Energy by the Triad National Security, LLC Los Alamos National Laboratory, and was supported by the LANL Office of Experimental Science Inertial Confinement Fusion program. VPIC simulations were run on the LANL Institutional Computing Clusters. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Appendix A Wavelength detuning and flow The VPIC simulations presented above used wavelength detuning as a surrogate for flow and did not include a flow gradient. The density and flow in VPIC are constrained by the continuity equation: \(\partial_{t}n+\nabla\cdot(n\mathbf{u}_{f})=0\). At any particular time, the corona is not in a steady hydrodynamic state, such that \(\nabla\cdot(n\mathbf{u}_{f})\neq 0\). As a consequence, initializing VPIC with a non-uniform density and flow profile from a radiation hydrodynamics simulation would result in unwanted evolution of the density profile. A hydrodynamic equilibrium can be achieved in VPIC by using (1) a non-uniform density and setting \(\mathbf{u}_{f}=0\) or (2) a uniform density and a uniform flow velocity. Ultimately, option (1) was chosen for the simulations presented in this paper. Simulations using option (2) predicted substantial levels of BSRS from the pump beam. This does not occur in OMEGA implosions because the density gradient stabilizes BSRS. Thus, the inclusion of the density gradient was critical for faithfully modeling CBET in OMEGA-relevant conditions. Nevertheless, option (2) allows for comparisons of CBET between wavelength-degenerate beams in the presence of flow and wavelength-detuned beams in the absence of flow. These comparisons were used to verify that wavelength detuning is indeed a good surrogate for flow. Figure 7 compares the evolution of CBET when resonant excitation of the ion acoustic wave is achieved through wavelength detuning or flow. In all cases, the pump and seed intensities were set to \(I_{0}^{\text{in}}=I_{1}^{\text{in}}=2.0\times 10^{14}\,\text{W}/\text{cm}^{2}\), corresponding to the high intensity cases of Sec. V. At both 30% and 10% \(n_{c}\), detuning and flow resulted in nearly identical gains over the entire duration of the simulation. Similarly, at 20% \(n_{c}\), the two resulted in similar levels of BSRS from the seed, which is illustrated by the loss of seed power. 
Inspection of the pump power at 20% \(n_{c}\) (not shown) indicated substantial BSRS from the pump, which was not observed in the inhomogeneous plasma. While wavelength detuning can be used to emulate flow for the parameters considered here, this is not always the case. Recent work has shown that CBET between wavelength-shifted beams in a stationary plasma can be substantially suppressed compared to CBET between wavelength-degenerate beams in a flowing plasma.[45; 46] This suppression occurs when many weakly damped ion acoustic waves originating from many different speckles destructively interfere. However, in the coronal plasmas relevant to OMEGA implosions, the Landau damping rate \(\gamma\) of the ion acoustic waves is relatively high: \(\gamma\) / \(\Omega_{A}\approx 0.1\). As a result, the driven ion acoustic wave can only travel a fraction of a speckle width before decaying away. Defining the damping length as \(L_{\gamma}\equiv v_{A}^{\prime}/\gamma\) and using the speckle width \(d_{s}\approx 4\lambda_{0}f_{\#}/\pi\), where \(f_{\#}\) is the f-number, one finds \(L_{\gamma}/d_{s}\lesssim 0.2\) over the entire CBET-active region. As a final note, initializing VPIC in a hydrodynamic equilibrium (\(\partial_{t}n=0\)) and simulating the pump and probe beams away from their turning points precludes frequency shifts due to the time evolution of the density profile, i.e., the Dewandre shift.[47] For the interactions considered, these shifts are much smaller than the frequency detuning between the beams (Eq. 12), and thus would not have a substantial effect on the results. Accounting for the shifts in Eq. 3 would slightly change the resonant angle at each density. Figure 7: Comparison of CBET between wavelength-degenerate beams in the presence of flow and wavelength-detuned beams in the absence of flow. In all cases, the results are nearly identical. The simulations were initialized with a uniform density plasma to avoid unwanted evolution of the density profile in the presence of flow. (a) and (c) Time evolution of CBET gain at 30% and 10% \(n_{c}\), respectively. (b) Percentage change of the seed power as a function of time at 20% \(n_{c}\). ## Appendix B Stimulated Raman Scattering An electromagnetic wave propagating through a plasma with frequency \(\omega_{i}\) and wavevector \(\mathbf{k}_{i}\) will scatter from fluctuations in the electron density. The scattered wave will have a frequency \(\omega_{s}=\omega_{i}\mp\omega_{f}\) and wavenumber \(\mathbf{k}_{s}=\mathbf{k}_{i}\mp\mathbf{k}_{f}\) either down or upshifted by that of the fluctuation (\(\omega_{f}\),\(\mathbf{k}_{f}\)). When the fluctuation corresponds to a natural mode of the plasma, the downshifted scattered wave can beat with the initial wave to produce a ponderomotive force that resonantly drives the fluctuation. The resonant drive increases the amplitude of the fluctuation, which, in turn, enhances the rate of scattering. The stimulated Raman scattering (SRS) instability refers to a special case of this feedback loop where the natural mode is an electron plasma wave. More specifically, \((\omega_{f},\mathbf{k}_{f})=(\omega_{e},\mathbf{k}_{e})\), where \(1+\chi_{e}(\omega_{e},\mathbf{k}_{e})\approx 0\), and \((\omega_{s},\mathbf{k}_{s})=(\omega_{i}-\omega_{e},\mathbf{k}_{i}-\mathbf{k}_ {e})\). These relations indicate that SRS cannot occur when \(n_{e}>n_{c}/4\). At these densities, \(\omega_{p}>\omega_{i}/2\), which would result in an evanescent scattered wave with \(\omega_{s}<\omega_{p}\). 
For densities lower than \(n_{c}/4\), the dispersion relation for SRS is given by [48; 49] \[\begin{split}[\omega_{s}^{2}-|\mathbf{k}_{s}|^{2}c^{2}-\omega_{ p}^{2}][1+&\chi_{e}(\omega_{e}\mathbf{k}_{e})]=\\ &-\tfrac{1}{2}\chi_{e}(\omega_{e},\mathbf{k}_{e})(ck_{e}|\mathbf{ a}_{i}^{\text{in}}|)^{2},\end{split} \tag{21}\] where \(|\mathbf{a}_{i}^{\text{in}}|=(e/m_{e}c\omega_{i})(2I_{i}/\varepsilon_{0}\text{ v}_{i})^{1/2}\) is the normalized amplitude of the initial electromagnetic wave. Upon working through some algebra, one can show that the growth is maximized for backscattering: \[\gamma_{\text{SRS}}\approx\tfrac{1}{4}ck_{e}(\omega_{e}/\omega_{s})^{1/2}| \mathbf{a}_{i}^{\text{in}}|-\tfrac{1}{2}(\gamma_{e}+\gamma_{s}), \tag{22}\] where \(\gamma_{e}\) and \(\gamma_{s}\) are the amplitude damping rates of the electron plasma and scattered wave, respectively. Using Eq. 21 and the parameters corresponding to Fig. 5, one finds \(\omega_{s}\approx 1.18\omega_{p}\), \(\omega_{e}\approx 1.06\omega_{p}\), and \(\gamma_{\text{SRS}}=1.1\times 10^{-3}\omega_{p}=2.6\times 10^{12}\) s\({}^{-1}\), all of which are in agreement with the simulations of the high intensity pump and seed beams at 20% \(n_{c}\).
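To see where the quoted daughter-wave frequencies come from, the short sketch below solves the backscatter matching conditions \(\omega_{i}=\omega_{s}+\omega_{e}\) and \(k_{e}=k_{i}+k_{s}\) numerically. It is a minimal illustration rather than the analysis used in the manuscript: it substitutes the fluid Bohm-Gross relation for the kinetic electron plasma wave dispersion implicit in Eq. 21, so at 20% \(n_{c}\) and \(T_{e}=2.5\) keV it returns roughly \(\omega_{s}\approx 1.19\,\omega_{p}\) and \(\omega_{e}\approx 1.05\,\omega_{p}\), close to but not identical with the kinetic values quoted above.

```python
import numpy as np
from scipy.optimize import brentq

c, e, me = 2.998e8, 1.602e-19, 9.109e-31

def bsrs_daughters(ne_over_nc, Te_eV, lam0=351e-9):
    """Backscatter SRS matching with a Bohm-Gross electron plasma wave."""
    wi = 2.0 * np.pi * c / lam0                    # incident (seed) frequency
    wp = np.sqrt(ne_over_nc) * wi                  # plasma frequency
    ki = (wi / c) * np.sqrt(1.0 - ne_over_nc)      # incident wavenumber
    vte2 = Te_eV * e / me                          # electron thermal speed squared
    def mismatch(ks):
        ws = np.sqrt(wp**2 + (c * ks)**2)                      # scattered EM wave
        we = np.sqrt(wp**2 + 3.0 * vte2 * (ki + ks)**2)        # Bohm-Gross EPW
        return ws + we - wi
    ks = brentq(mismatch, 1e3, ki)                 # scattered-light wavenumber
    ws = np.sqrt(wp**2 + (c * ks)**2)
    return ws / wp, (wi - ws) / wp                 # omega_s, omega_e in units of omega_p

ws, we = bsrs_daughters(0.20, 2500.0)
print(f"omega_s ~ {ws:.2f} omega_p,  omega_e ~ {we:.2f} omega_p")
```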
2307.12426
Anomalous Dimensions at an Infrared Fixed Point in an SU($N_c$) Gauge Theory with Fermions in the Fundamental and Antisymmetric Tensor Representations
We present scheme-independent calculations of the anomalous dimensions $\gamma_{\bar\psi\psi,IR}$ and $\gamma_{\bar\chi\chi,IR}$ of fermion bilinear operators $\bar\psi\psi$ and $\bar\chi\chi$ at an infrared fixed point in an asymptotically free SU($N_c$) gauge theory with massless Dirac fermion content consisting of $N_F$ fermions $\psi^a_i$ in the fundamental representation and $N_{A_2}$ fermions $\chi^{ab}_j$ in the antisymmetric rank-2 tensor representation, where $i,j$ are flavor indices. For the case $N_c=4$, $N_F=4$, and $N_{A_2}=4$, we compare our results with values of these anomalous dimensions measured in a recent lattice simulation and find agreement.
Thomas A. Ryttov, Robert Shrock
2023-07-23T20:45:26Z
http://arxiv.org/abs/2307.12426v1
Anomalous Dimensions at an Infrared Fixed Point in an SU(\(N_{c}\)) Gauge Theory with Fermions in the Fundamental and Antisymmetric Tensor Representations ###### Abstract We present scheme-independent calculations of the anomalous dimensions \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi IR}\) of fermion bilinear operators \(\bar{\psi}\psi\) and \(\bar{\chi}\chi\) at an infrared fixed point in an asymptotically free SU(\(N_{c}\)) gauge theory with massless Dirac fermion content consisting of \(N_{F}\) fermions \(\psi_{i}^{a}\) in the fundamental representation and \(N_{A_{2}}\) fermions \(\chi_{j}^{ab}\) in the antisymmetric rank-2 tensor representation, where \(i,j\) are flavor indices. For the case \(N_{c}=4\), \(N_{F}=4\), and \(N_{A_{2}}=4\), we compare our results with values of these anomalous dimensions measured in a recent lattice simulation and find agreement. ## I Introduction An asymptotically free gauge theory with sufficiently many massless fermions has an infrared zero in its beta function, which is an infrared fixed point (IRFP) of the renormalization group [1; 2; 3]. At this IRFP the theory is scale-invariant and is inferred to be conformally invariant [4], whence the commonly used term "conformal window" (CW). Because of the asymptotic freedom, one can use perturbation theory reliably in the deep ultraviolet (UV) where the gauge coupling approaches zero, and then follow the renormalization-group flow toward the infrared. These statements apply to both vectorial and chiral gauge theories; here we restrict our consideration to vectorial gauge theories. As the fermion content is reduced, the gauge coupling at the IRFP increases in strength and eventually exceeds a value such that there is generically spontaneous chiral symmetry breaking and dynamical fermion mass generation. This defines the lower end of the conformal window. Theories that lie slightly below this lower end exhibit quasi-conformal behavior over a large interval of Euclidean energy/momentum scales, over which the gauge coupling runs slowly due to a small beta function. Such theories just below the lower end of the conformal window can be relevant to approaches to composite-Higgs scenarios and associated physics beyond the Standard Model (BSM). Considerable progress has been made in studies of quasi-conformal vectorial gauge theories with several flavors of fermions transforming according to a single representation of the gauge group, e.g., SU(3) with \(N_{F}=8\) Dirac fermions in the fundamental representation [5]-[7]. For an operator \({\cal O}\), the full scaling dimension is denoted as \(D_{\cal O}\) and its free-field value as \(D_{{\cal O},free}\). The anomalous dimension of this operator, denoted \(\gamma_{\cal O}\), is defined via the relation \(D_{\cal O}=D_{{\cal O},free}-\gamma_{\cal O}\). The anomalous dimensions of gauge-invariant operators at an IRFP are of basic physical interest. While the simplest gauge theories have used fermions transforming according to a single representation of the gauge group, a natural generalization is to study theories with multiple fermions transforming according to different representations of the gauge group. 
In previous work [8], we presented scheme-independent perturbative calculations of anomalous dimensions of fermion bilinears at an IRFP in the conformal window in a theory of this type (in \(d=4\) spacetime dimensions at zero temperature), with a general non-Abelian gauge group \(G\) and massless fermion content consisting of \(N_{f}\) fermions \(f\) in a representation \(R\) and \(N_{f^{\prime}}\) fermions \(f^{\prime}\) in a representation \(R^{\prime}\) of \(G\)[9]. Our calculational method applies at an exact IRFP in the conformal window (sometimes called the non-Abelian Coulomb phase). In [10] with S. Girmohanta we studied theories of this type with \(G=\) SU(\(N_{c}\)), \(R\) equal to the fundamental representation, and \(R^{\prime}\) equal to the adjoint or rank-2 tensor representation and investigated a type of 't Hooft-Veneziano limit, \(N_{f}\to\infty\), \(N_{c}\to\infty\) with \(N_{f}/N_{c}\) and \(N_{f^{\prime}}\) fixed. We denote a gauge theory with gauge group \(G=\) SU(\(N_{c}\)) and (massless) fermion content consisting of \(N_{F}\) Dirac fermions in the fundamental representation, denoted \(F\), and \(N_{A_{2}}\) Dirac fermions in the antisymmetric rank-2 tensor representation, denoted \(A_{2}\), as \((N_{c},N_{F},N_{A_{2}})\) for short. Recently, Ref. [11] has reported interesting results from lattice simulations of the \((N_{c},N_{F},N_{A_{2}})=(4,4,4)\) theory. Since the \(A_{2}\) representation in SU(4) is self-conjugate, the \(N_{A_{2}}\) Dirac fermions are equivalent to \(2N_{A_{2}}\) Majorana fermions. Ref. [11] finds evidence for an IRFP inferred to lie in the conformal window, near its lower end, and presents measurements of the anomalous dimensions of the \(F\) and \(A_{2}\) fermion bilinears \(\gamma_{m}^{(4)}\) and \(\gamma_{m}^{(6)}\) (where the superscripts refer to the dimensionalities of these representations of SU(4)), and of gauge-singlet composite-fermion operators. Given the conclusion in [11] that this theory is in the conformal window, a relevant question is whether our general higher-order perturbative calculations of anomalous dimensions of fermion bilinears in [8], when specialized to this theory, yield results in agreement with the values measured in [11]. To our knowledge, this question has not been previously investigated in the literature. In the present work we address and answer this question for the anomalous dimensions \(\gamma_{m}^{(4)}\) and \(\gamma_{m}^{(6)}\) by extracting the requisite special case of our general calcu lations of anomalous dimensions of fermion bilinears in [8] for the \((N_{c},N_{F},N_{A_{2}})=(4,4,4)\) theory. To state our conclusions in advance, to within the uncertainties in our finite-order perturbative calculation, we find agreement with the lattice results in [11]. The authors of Ref. [11] also observed that their conclusion that the (4,4,4) theory is in the conformal window disagreed with a calculation in [12] (denoted KHL) of the lower boundary of this conformal window in the \((4,N_{F},N_{A_{2}})\) theory based on a critical condition on \(\max(\gamma_{m}^{(4)},\ \gamma_{m}^{(6)})\), denoted \(\gamma\)CC. We investigate this further here. As noted above, our calculations of anomalous dimensions of fermion bilinears in [8] were for a general non-Abelian gauge group \(G\) and fermion representations \(R\) and \(R^{\prime}\). 
Before presenting our calculations for SU(4) theory, we will first specialize the results from [8] to the case of the gauge group \(G=\) SU(\(N_{c}\)) with massless fermion content consisting of \(N_{F}\) Dirac fermions in the fundamental representation and \(N_{A_{2}}\) Dirac fermions in the antisymmetric rank-2 tensor representation of SU(\(N_{c}\)), i.e., theories of the type \((N_{c},N_{F},N_{A_{2}})\) in our shorthand notation. We then further specialize to the case \(N_{c}=4\), and then finally to the \((4,4,4)\) theory. We denote the massless Dirac fermions in the \(F\) and \(A_{2}\) representations as \(\psi_{i}^{a}\) and \(\chi_{j}^{ab}=-\chi_{j}^{ba}\), where \(a,b\) are SU(\(N_{c}\)) gauge indices, and \(i,j\) are flavor indices with \(i=1,...,N_{F}\) and \(j=1,...,N_{A_{2}}\). We shall use our general results in [8] to calculate scheme-independent series expansions for the anomalous dimensions at the IRFP, denoted \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\), of the respective (gauge-invariant) fermion bilinears \[\bar{\psi}\psi=\sum_{i=1}^{N_{f}}\bar{\psi}_{a,i}\psi_{i}^{a} \tag{1}\] and \[\bar{\chi}\chi=\sum_{j=1}^{N_{A_{2}}}\bar{\chi}_{ab,j}\chi_{j}^{ab}\, \tag{2}\] where the sums over color indices are understood and run from 1 to \(N_{c}\). Although we take the fermions to be massless, the operators (1) and (2) would be mass operators (with all fermion flavors taken to have equal mass) if these fermions were massive, and for this reason another common notation for the anomalous dimensions is \(\gamma_{m}^{(F)}\equiv\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{m}^{(A_{2})}\equiv\gamma_{\bar{\chi}\chi,IR}\). In the special case \(N_{c}=4\), these are also written with reference to the respective dimensionalities 4 and 6 of the \(F\) and \(A_{2}\) representations as \(\gamma_{m}^{(4)}\equiv\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{m}^{(6)}\equiv\gamma_{\bar{\chi}\chi,IR}\). The color and flavor indices will often be suppressed in the notation. It should be mentioned that studies were also performed of the (4,2,2) theory, due to its possible role as a model for a composite Higgs boson and a partially composite top quark [13]-[15]. However, as we noted in [8], the (4,2,2) theory is in the chirally broken phase, where there is no exact IRFP and hence where our calculations are not directly applicable. In general, BSM theories with fermions in higher-dimensional representations have long been of interest; in addition to Refs. [11] and [13]-[15], some of the many works include [16; 17; 18]. This paper is organized as follows. In Section II we briefly discuss some relevant background and our general calculational methods. Section III contains our results for the anomalous dimensions for SU(\(N_{c}\)) with general \(N_{F}\) and \(N_{A_{2}}\) in the conformal window, while Section IV presents the corresponding formulas for \(N_{c}=4\) and for the specific \((N_{c},N_{F},N_{A_{2}})=(4,4,4)\) theory. Our conclusions are given in Section V. ## II Calculational methods In this section we briefly review our calculational methods and relevant notation. In the context of the general SU(\(N_{c}\)) gauge group, we first mention two degenerate cases. If \(N_{c}=2\), then the \(A_{2}\) representation is a singlet and hence decouples from the dynamics, so the theory reduces to one with fermions in just the fundamental representation. Hence, in all expressions involving the number \(N_{A_{2}}\), this number always occurs multipled by the factor \((N_{c}-2)\). 
If \(N_{c}=3\), then \(A_{2}=\bar{F}\), i.e., the \(A_{2}\) representation is the conjugate fundamental representation of SU(3). Taking into account the fact that a Dirac fermion \(f\) has a decomposition into chiral components \(f=f_{L}+f_{R}\) and the property that a left-handed chiral component of a fermion can be equivalently written as the charge conjugate of a right-handed antifermion, it follows that if \(N_{c}=3\), then the theory reduces to one with \(N_{F}+N_{A_{2}}\) fermions in the fundamental representation. If \(N_{c}\geq 4\), then the \(F\) and \(A_{2}\) representations are distinct. ### Relevant Range of \(N_{f}\) and \(N_{a_{2}}\) We denote the running gauge coupling as \(g=g(\mu)\), where \(\mu\) is the Euclidean energy/momentum scale at which this coupling is measured. We define \(\alpha(\mu)=g(\mu)^{2}/(4\pi)\). As noted before, since we require the theory to be asymptotically free, its properties can be computed perturbatively in the UV limit at large \(\mu\), where \(\alpha(\mu)\to 0\). The dependence of \(\alpha(\mu)\) on \(\mu\) is described by the renormalization-group (RG) beta function, \[\beta=\frac{d\alpha(\mu)}{d\ln\mu}. \tag{3}\] The argument \(\mu\) will generally be suppressed in the notation. The series expansion of \(\beta\) in powers of \(\alpha\) is \[\beta=-2\alpha\sum_{\ell=1}^{\infty}b_{\ell}\,a^{\ell}\, \tag{4}\] where \[a\equiv\frac{g^{2}}{16\pi^{2}}=\frac{\alpha}{4\pi}\, \tag{3}\] and \(b_{\ell}\) is the \(\ell\)-loop coefficient. For a theory with a gauge group \(G\) and Dirac fermions \(f\) and \(f^{\prime}\) in respective representations \(R\) and \(R^{\prime}\) of \(G\), the one-loop coefficient in the beta function is [1] \[b_{1}=\frac{1}{3}\Big{[}11C_{A}-4(N_{f}T_{f}+N_{f^{\prime}}T_{f^{\prime}}) \Big{]}\, \tag{4}\] and the two-loop coefficient is [2] \[b_{2}=\frac{1}{3}\bigg{[}34C_{A}^{2}-4N_{f}T_{f}(5C_{A}+3C_{f})-4N_{f^{\prime}} T_{f^{\prime}}(5C_{A}+3C_{f^{\prime}})\bigg{]}\, \tag{5}\] where \(C_{A}\), \(C_{f}\), and \(T_{f}\) are group invariants (see [19] and the Appendix). With an overall minus sign extracted, as in Eq. (1), the condition of asymptotic freedom is that \(b_{1}>0\). Setting \(R=F\) and \(R^{\prime}=A_{2}\) and substituting the values of the group invariants for \(G=\mathrm{SU}(N_{c})\), the condition \(b_{1}>0\) reads \[N_{F}+(N_{c}-2)N_{A_{2}}<\frac{11N_{c}}{2}. \tag{6}\] The resultant upper (\(u\)) limits on \(N_{F}\) and \(N_{A_{2}}\) imposed by the requirement of asymptotic freedom are thus \[N_{F,u}=\frac{11}{2}N_{c}-(N_{c}-2)N_{A_{2}} \tag{7}\] and \[N_{A_{2},u}=\frac{11N_{c}-2N_{F}}{2(N_{c}-2)}. \tag{8}\] The maximal order to which the beta function is independent of the scheme used for regularization and renormalization is the two-loop order. With \(b_{1}>0\), the condition that this two-loop beta function should have an IR zero is that \(b_{2}<0\). For the \(\mathrm{SU}(N_{c})\) theory, this is the condition \[N_{F}(13N_{c}^{2}-3)+2N_{A_{2}}(N_{c}-2)(8N_{c}^{2}-3N_{c}-6)>34N_{c}^{3}. \tag{9}\] The region of the first quadrant in the \((N_{F},N_{A_{2}})\) plane where the inequalities (6) and (9) are both satisfied will be denoted \(I_{IRZ}\), where the subscript refers to the existence of an IR zero (IRZ) in the beta function. We label the upper and lower boundaries of the \(I_{IRZ}\) region as \(\mathcal{B}_{IRZ,u}\) and \(\mathcal{B}_{IRZ,\ell}\), respectively. 
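As a concrete check of these conditions, the short sketch below (a minimal illustration, not the authors' code) evaluates \(b_{1}\) and \(b_{2}\) for fermions in the \(F\) and \(A_{2}\) representations using the standard SU(\(N_{c}\)) group invariants with the \(T_{F}=1/2\) normalization, namely \(C_{A}=N_{c}\), \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\), \(T_{A_{2}}=(N_{c}-2)/2\), and \(C_{A_{2}}=(N_{c}-2)(N_{c}+1)/N_{c}\), and tests the inequalities (6) and (9). For \((4,4,4)\) it confirms asymptotic freedom together with a two-loop IR zero, while \((4,2,2)\) has \(b_{2}>0\).

```python
from fractions import Fraction as Fr

def invariants(Nc):
    """Casimirs and trace normalizations for the F and A2 representations of SU(Nc)."""
    CA = Fr(Nc)
    CF, TF = Fr(Nc * Nc - 1, 2 * Nc), Fr(1, 2)
    CA2, TA2 = Fr((Nc - 2) * (Nc + 1), Nc), Fr(Nc - 2, 2)
    return CA, CF, TF, CA2, TA2

def b1_b2(Nc, NF, NA2):
    """One- and two-loop beta-function coefficients, Eqs. 4-5 with (R, R') = (F, A2)."""
    CA, CF, TF, CA2, TA2 = invariants(Nc)
    b1 = Fr(1, 3) * (11 * CA - 4 * (NF * TF + NA2 * TA2))
    b2 = Fr(1, 3) * (34 * CA**2 - 4 * NF * TF * (5 * CA + 3 * CF)
                     - 4 * NA2 * TA2 * (5 * CA + 3 * CA2))
    return b1, b2

for (Nc, NF, NA2) in [(4, 4, 4), (4, 2, 2)]:
    b1, b2 = b1_b2(Nc, NF, NA2)
    print(f"({Nc},{NF},{NA2}): b1 = {b1}, b2 = {b2}, "
          f"asymptotically free: {b1 > 0}, two-loop IR zero: {b2 < 0}")
```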
In plotting these boundaries, one formally generalizes \(N_{F}\) and \(N_{A_{2}}\) from positive integers (or half-integers for \(N_{A_{2}}\) if \(N_{c}=4\)) to positive real numbers, with the understanding that the physical cases are integral (or half-integral for \(N_{A_{2}}\) if \(N_{c}=4\)). Analogously, we denote the upper and lower boundaries of the conformal window as \(\mathcal{B}_{CW,u}\) and \(\mathcal{B}_{CW,\ell}\), respectively. The upper boundary \(\mathcal{B}_{CW,u}=\mathcal{B}_{IRZ,u}\) is the solution locus to the condition \(b_{1}=0\). The lower boundary \(\mathcal{B}_{CW,\ell}\) is not exactly known even for the case of theories with fermions transforming in only one representation; indeed, much work using lattice simulations has been, and continues to be, devoted to determining the approximate location of this lower conformal-window boundary [20; 21]. We discuss this lower boundary \(\mathcal{B}_{CW,\ell}\) further below for the \(N_{c}=4\) theory. Following our labelling convention in [8], we take the horizontal and vertical axes of the first quadrant of the \((N_{F},N_{A_{2}})\) plane to be the \(N_{F}\) and \(N_{A_{2}}\) axes, respectively. The boundaries of the \(I_{IRZ}\) region given by the equations \(b_{1}=0\) and \(b_{2}=0\) are both line segments in this first quadrant of the \((N_{F},N_{A_{2}})\) plane. In general, the slope of the line \(b_{1}=0\) is \[\frac{dN_{A_{2}}}{dN_{F}}=-\frac{1}{N_{c}-2}\, \tag{2.10}\] and the slope of the line \(b_{2}=0\) is \[\frac{dN_{A_{2}}}{dN_{F}}=-\frac{(13N_{c}^{2}-3)}{2(N_{c}-2)(8N_{c}^{2}-3N_{c}-6)}. \tag{2.11}\] ### Higher-Order Terms in Beta Function For a theory with a general gauge group \(G\) and \(N_{f}\) fermions in a single representation, \(R\), the coefficients \(b_{1}\) and \(b_{2}\) were calculated in [1] and [2], while \(b_{3}\), \(b_{4}\), and \(b_{5}\) were calculated in the commonly used \(\overline{\mathrm{MS}}\) scheme [22] in [23; 24], [25], and [26], respectively (see also [27]). For the analysis of a theory with fermions in multiple different representations, one needs generalizations of these results. These are straightforward to derive in the case of \(b_{1}\) and \(b_{2}\), but new calculations are required for higher-loop coefficients. These were performed in [28] (again in the \(\overline{\mathrm{MS}}\) scheme) up to four-loop order, and we used the results of Ref. [28] in [8]. ### Anomalous Dimensions The conventional expansion of the anomalous dimension \(\gamma_{\bar{f}f}\) of the fermion bilinear \(\bar{f}f\) in a gauge theory, in terms of the squared gauge coupling, is \[\gamma_{\bar{f}f}=\sum_{\ell=1}^{\infty}c_{\ell}^{(f)}\,a^{\ell}\, \tag{2.12}\] where \(c_{\ell}^{(f)}\) is the \(\ell\)-loop coefficient, and correspondingly for \(\gamma_{\bar{f}^{\prime}f^{\prime}}\) in a theory with both \(f\) and \(f^{\prime}\) fermions. These expansions apply, in particular, at an IRFP. They may also be useful in the analysis of a quasi-conformal field theory with parameters such that it lies slightly below the lower end of the conformal window and hence exhibits UV to IR evolution over an extended interval of \(\mu\) governed by an approximate IRFP. The one-loop coefficient \(c_{1}^{(f)}\) is scheme-independent, while the \(c_{\ell}^{(f)}\) with \(\ell\geq 2\) are scheme-dependent, and similarly with the \(c_{\ell}^{(f^{\prime})}\).
For a general gauge group \(G\) and \(N_{f}\) fermions in a single representation \(R\) of \(G\), the \(c_{\ell}^{(f)}\) have been calculated up to loop order \(\ell=4\) in [29; 30] and \(\ell=5\) in [31]. For the case of multiple fermion representations, the coefficients \(c_{\ell}^{(f)}\) have been calculated up to four-loop order in [32] in the \(\overline{\rm MS}\) scheme. Physical quantities such as anomalous dimensions at an IRFP clearly must be scheme-independent. In conventional computations of these quantities, one first writes them as series expansions in powers of the coupling as in (2.12), and then evaluates these series expansions with \(\alpha\) set equal to \(\alpha_{IR}\), calculated to a given loop order. These calculations have been performed for anomalous dimensions of gauge-invariant fermion bilinears in a theory with a single fermion representation up to the four-loop level [33]-[35] and to the five-loop level in [36]. However, as is well known, these conventional (finite-order) series expansions are scheme-dependent beyond the leading terms. Studies of scheme dependence in the context of an IRFP have been carried out in [37]-[42]. The fact that the conventional series expansions for physical properties are scheme-dependent does not, by itself, reduce the usefulness of these expansions. For example, this scheme dependence is also true of higher-order calculations in quantum chromodynamics (QCD), which were used to analyze data from hadron colliders such as the Tevatron at Fermilab and the Large Hadron Collider at CERN. Considerable effort has been, and continues to be, expended to construct and apply schemes that minimize higher-order contributions in these QCD calculations [43]. Indeed, in QCD, because the RG fixed point is an ultraviolet fixed point at the origin in coupling constant space, it is, in principle, possible to transform to a scheme where the beta function has no terms higher than two-loop order (the 't Hooft scheme) [44]. However, as was shown in [37; 38], it is considerably more difficult to try to carry out such a scheme transformation to remove terms at loop order 3 and higher for a fixed point away from the origin. Thus, in the analysis of the properties of a theory at a fixed point away from the origin, as in the case of the IRFP of interest here, it is useful to employ a series expansion method for calculating physical quantities, such as anomalous dimensions, with the property that the results to each order are scheme-independent. A simple fact makes this possible: at the upper end of the conformal window, as \(b_{1}\to 0\), this implies that \(\alpha_{IR}\to 0\). Hence, one can reexpress a series expansion at an IRFP in the conformal window as an expansion in the manifestly scheme-independent variable \(b_{1}\). For a theory with fermions \(f\) in a single representation, it is natural to use the scheme-independent Banks-Zaks variable [3; 45], \(\Delta_{f}=N_{f,u}-N_{f}\). Such calculations were carried out in [46]-[54]. In [8] we generalized this analysis to theories with fermions \(f\) and \(f^{\prime}\) in different representations \(R\) and \(R^{\prime}\) of a general gauge group, \(G\). The corresponding expansion variables for the scheme-independent series expansions of physical quantities at an IRFP are \[\Delta_{f}=N_{f,u}-N_{f}=\frac{11C_{A}-4N_{f^{\prime}}T_{f^{\prime}}-4N_{f}T_{f}}{4T_{f}}=\frac{3b_{1}}{4T_{f}}\, \tag{2.13}\] and similarly for \(\Delta_{f^{\prime}}\) with \(f\leftrightarrow f^{\prime}\).
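As a concrete illustration of these expansion variables (a sketch added for orientation, using the SU(\(N_{c}\)) group invariants collected in the Appendix, not part of the original analysis), one can evaluate \(\Delta_{f}\) directly from the definition above; for the \((4,4,4)\) theory considered below this gives \(\Delta_{F}=10\) and \(\Delta_{A_{2}}=5\).

```python
from fractions import Fraction as Fr

def Delta(Tf, Nf, Tfp, Nfp, CA):
    """Delta_f = N_{f,u} - N_f = (11*C_A - 4*N_f'*T_f' - 4*N_f*T_f)/(4*T_f)."""
    return (11 * CA - 4 * Nfp * Tfp - 4 * Nf * Tf) / (4 * Tf)

# SU(4) with N_F = N_A2 = 4: T_F = 1/2, T_A2 = (Nc - 2)/2 = 1, C_A = Nc = 4
TF, TA2, CA = Fr(1, 2), Fr(1), 4
dF = Delta(TF, 4, TA2, 4, CA)      # Delta_F
dA2 = Delta(TA2, 4, TF, 4, CA)     # Delta_A2
print(dF, dA2)                     # 10 5
print(dA2 == (TF / TA2) * dF)      # True: Delta_{f'} = (T_f/T_{f'}) * Delta_f
```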
Note that these expansion variables satisfy the relation \[\Delta_{f^{\prime}}=\frac{T_{f}}{T_{f^{\prime}}}\,\Delta_{f}. \tag{2.14}\] The scheme-independent expansion for \(\gamma_{ff,IR}\) is \[\gamma_{ff,IR}=\sum_{j=1}^{\infty}\kappa_{j}^{(f)}\,\Delta_{f}^{j}\, \tag{2.15}\] and similarly for \(\gamma_{\bar{f}^{\prime}f^{\prime},IR}\) with \(f\to f^{\prime}\). The calculation of the coefficient \(\kappa_{j}^{(f)}\) in Eq. (2.15) requires, as inputs, the values of the \(b_{t}\) in Eq. (2.2) for \(1\leq\ell\leq j+1\) and the \(c_{\ell}\) for \(1\leq\ell\leq j\). We refer the reader to our previous papers for further details of the calculations. Using the calculation of the beta function for multiple fermion representation to four-loop order in [28], together with the calculation in [32] of the anomalous dimension coefficients in (2.12) up to \(\ell=3\) loop order, we can calculate \(\gamma_{\bar{f}fIR}\) to order \(O(\Delta_{f}^{3})\) and \(\gamma_{\bar{f}^{\prime}f^{\prime},IR}\) to order \(O(\Delta_{f^{\prime}}^{3})\) Parenthetically, note that we cannot make use of the four-loop calculation of the \(c_{\ell}^{(f)}\) in [32] to compute \(\gamma_{\bar{f}f,IR}\) to order \(O(\Delta_{f}^{4})\) and \(\gamma_{f^{\prime}f^{\prime},IR}\) to \(O(\Delta_{f^{\prime}}^{4})\), because this would require, as an input, the five-loop coefficient \(b_{5}\) in the beta function for this case of multiple fermion representations, and, to our knowledge, this has not been calculated. In our specific application here, where the \(f\) and \(f^{\prime}\) fermions transform according to the representations \(R=F\) and \(R^{\prime}=A_{2}\) of SU(\(N_{c}\)), we will write these scheme-independent series expansions as \[\gamma_{\bar{\psi}\psi,IR}=\sum_{j=1}^{\infty}\kappa_{j}^{(F)}\Delta_{F}^{j} \tag{2.16}\] and \[\gamma_{\bar{\chi}\chi,IR}=\sum_{j=1}^{\infty}\kappa_{j}^{(A_{2})}\Delta_{A_{2 }}^{j}\, \tag{2.17}\] where \[\Delta_{F}=\frac{11}{2}N_{c}-N_{F}-(N_{c}-2)N_{A_{2}} \tag{2.18}\] and \[\Delta_{A_{2}}=\frac{11N_{c}-2N_{F}-2(N_{c}-2)N_{A_{2}}}{2(N_{c}-2)}. \tag{2.19}\] For this \((N_{c},N_{F},N_{A_{2}})\) theory, Eq. (2.14) reads \[\Delta_{A_{2}}=\frac{T_{F}}{T_{A_{2}}}\,\Delta_{F}=\frac{\Delta_{F}}{N_{c}-2}. \tag{2.20}\] The truncation of the series (2.16) to order \(O(\Delta_{F}^{p})\) is denoted as \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) and similarly, the truncation of the series (2.17) to order \(O(\Delta_{A_{2}}^{p})\) is denoted \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{p}}\). In accord with the remarks on the \(N_{c}=3\) special of the theory at the beginning of this section, we note the identities \[N_{c}=3 \Rightarrow \kappa_{j}^{(F)}=\kappa_{j}^{(A_{2})}\, \tag{2.21}\] \[\Delta_{F}=\Delta_{A_{2}}\,\] \[\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}=\gamma_{\bar{\chi}\chi, IR,\Delta_{A_{2}}^{p}}\.\] In general, series expansions in powers of interaction couplings in quantum field theory are asymptotic expansions rather than Taylor series. As we discussed in [52], the scheme-independent expansion (2.15) is also generically an asymptotic expansion rather than a Taylor series expansion with finite radius of convergence. This is a consequence of the property that in order for a series expansion of a function \(f(z)\) in powers of \(z\) to be a Taylor series with finite radius of convergence, it is necessary (and sufficient) that \(f(z)\) must be analytic at the origin of the complex \(z\) plane. 
With \(z=\Delta_{f}\), this means that the properties of the theory should remain qualitatively similar for small positive and negative real \(\Delta_{f}\). However, as \(\Delta_{f}\) passes from real positive values through zero to negative real values, i.e., as \(N_{f}\) increases through the value \(N_{f,u}\), the theory changes qualitatively from being asymptotically free to being IR-free. Nevertheless, just as with perturbative calculations in quantum electrodynamics, one may still use the scheme-independent expansions (2.16) and (2.17) to get approximate information about these anomalous dimensions. In our previous works, e.g., [46; 47; 48; 49; 51; 52], we have carried out the requisite assessment of higher-order contributions, up to order \(O(\Delta_{f}^{4})\) for \(\gamma_{\bar{f}f,IR}\) and \(O(\Delta_{f}^{5})\) for \(\beta_{IR}^{\prime}\) in theories with fermions in a single representation. These showed that the scheme-independent series expansions are reasonably convergent throughout the conformal window, although, of course, the higher-order terms make relatively larger contributions as one approaches the lower end of this window. The curves that we will show below for \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{p}}\) for \(1\leq p\leq 3\) provide an analogous quantitative measure of the effective convergence of these expansions. Interestingly, in [50] we studied the \({\cal N}=1\) supersymmetric \({\rm SU}(N_{c})\) theory with matter content consisting of \(N_{F}\) copies of chiral superfields and their conjugates, for which the anomalous dimension of the gauge-invariant chiral superfield bilinear is exactly known [55; 56], and we showed (a) that the \(\kappa_{j}\) coefficients precisely reproduce the series expansion coefficients of the exact results to all orders, and (b) the scheme-independent expansion of this anomalous dimension is convergent throughout the full nonabelian Coulomb phase, which corresponds to the conformal window in that theory. ### Condition on Anomalous Dimensions for Conformal Window On the basis of analyses of the Schwinger-Dyson equation for the propagator of a fermion \(f\), operator product expansions, and other arguments [57; 58; 59; 60], it has been suggested that an upper bound \[\gamma_{\bar{f}f,IR}\leq 1 \tag{2.22}\] applies for an IRFP in the conformal window. In view of the uncertainties pertaining to strong coupling and non-perturbative effects, this bound is also sometimes stated as \(\gamma_{\bar{f}f,IR}\lesssim 1\); here we will take this as implicit in our discussions. Since \(\gamma_{\bar{f}f,IR}\) increases as one moves down through the conformal window from the upper end where \(b_{1}=0\), it follows that when the inequality (2.22) is saturated, i.e, when the critical (denoted \(\gamma\)CC) \[\gamma_{\bar{f}f,IR}=1 \tag{2.23}\] holds, this defines the lower end of the conformal window, \({\cal B}_{CW,\ell}\). As we discussed in [50], this is true in the case of an \({\cal N}=1\) supersymmetric theory with gauge group \({\rm SU}(N_{c})\) and a set of chiral superfields in the \(F\) and \(\bar{F}\) representations, where the anomalous dimension of the gauge-invariant chiral superfield bilinear is exactly known [55; 56]. The occurrence of the quadratic equation \[\gamma_{\bar{f}f,IR}(2-\gamma_{\bar{f}f,IR})=1 \tag{2.24}\] as a critical condition for fermion condensation and its connection with the condition (2.23) was noted in [57]. 
This quadratic equation (2.24) has a double root at \(\gamma_{\bar{f}f,IR}=1\), and hence an exact solution of the quadratic equation (2.24) yields the same result as the linear condition (2.23). However, when applied in the context of series expansions such as Eq. (2.16) and (2.17), as calculated to finite order, the results differ from those obtained with the linear condition (2.23). This difference arises because the quadratic condition (2.24) generates higher-order terms in powers of the scheme-independent expansion variable, and leads to different coefficients of lower-order terms [12; 61]. In a theory with \(N_{f}\) fermions transforming according to a single representation of the gauge group, the use of the quadratic condition (2.24) was found [12; 61] to (i) show better convergence as a function of increasing order of truncation of the series (2.15) than the linear condition (2.23) and (ii) predict that the lower boundary \({\cal B}_{CW,\ell}\) of the conformal window occurs at a higher value of \(N_{f}\) than the linear condition. In a theory with multiple fermions in different representations of the gauge group, the generalization of the condition (2.23) for the lower boundary \({\cal B}_{CW,\ell}\) of the conformal window is that this lower boundary is reached when the larger of the anomalous dimensions increases through unity, since this would be expected to result in the dynamical mass generation for the fermion with the larger anomalous dimension, thereby driving the system out of the conformal window. Thus, in our present theory, this lower end of the conformal window occurs if \[\max(\gamma_{\bar{\psi}\psi,IR},\ \gamma_{\bar{\chi}\chi,IR})=1. \tag{25}\] In this type of theory, the quadratic form of the critical condition is Eq. (2.24) with \(\gamma_{\bar{f}f,IR}\) being given by \(\max(\gamma_{\bar{\psi}\psi,IR},\ \gamma_{\bar{\chi}\chi,IR})\). Since \(\gamma_{\bar{\chi}\chi,IR}>\gamma_{\bar{\psi}\psi,IR}\) here, Eq. (2.24) reduces to \[\gamma_{\bar{\chi}\chi,IR}(2-\gamma_{\bar{\chi}\chi,IR})=1. \tag{26}\] Because of the approximations involved in applying either the linear condition (2.23) or the quadratic condition (2.24) in the context of finite-order series expansions, it is useful to compare the lower boundary \({\cal B}_{CW,\ell}\) predicted by each of these for the present theory. The difference gives a measure of the uncertainties involved in the determination of this lower boundary using the \(\gamma\)CC condition. The boundary \({\cal B}_{CW,\ell}\) was calculated in [12] using the quadratic \(\gamma\)CC condition. We have checked and confirmed the result for \({\cal B}_{CW,\ell}\) obtained in [12] with the quadratic \(\gamma\)CC condition. For the comparison, here we will calculate the prediction for this boundary using the linear condition. As a side note to our study, it may be recalled that the conditions (2.22) and (25) have a connection to approaches to physics beyond the Standard Model involving dynamical electroweak symmetry breaking (EWSB). In such approaches there has been interest in models featuring a new gauge interaction that becomes strongly coupled on the TeV scale, producing fermion condensates and thus EWSB. Models with the property of having a slowly running gauge coupling and approximate scale invariance over an extended interval of Euclidean energy/momentum scales, due to an approximate zero of the relevant beta function, have been of particular interest.
One reason for this is that when the approximate scale invariance in the theory is dynamically broken by the formation of fermion condensates, this gives rise to an approximate Nambu-Goldstone boson, namely a dilaton [62]. In turn, insofar as the observed Higgs boson is modelled as a composite particle, at least partially dilatonic in nature, this can provide a means of helping to protect its mass against large radiative corrections. Although the observed properties of the Higgs boson, including the production cross section and couplings to the \(W\) and \(Z\) vector bosons and to Standard-Model fermions, are in excellent agreement with SM predictions [63; 64], experimental work will continue to search for, and set constraints on, Higgs compositeness and possible deviations from SM predictions. A second reason is that a renormalization-group flow from the UV to the IR that is influenced by an approximate IRFP can naturally give rise to large anomalous dimension(s) \(\gamma_{\bar{f}f,IR}\simeq 1\) for the fermions \(f\) subject to the strongly coupled gauge interaction. This has been useful in the effort to produce a realistically large top quark mass while suppressing flavor-changing neutral-current processes and minimizing corrections to precision electroweak observables. (In this model-building effort, one must also confront the challenge of producing the requisite large splitting between the \(t\) and \(b\) quark masses.) Examples of reasonably UV-complete models with dynamical EWSB that also feature sequential breaking of an extended gauge symmetry to produce a generational hierarchy in quark and charged lepton masses, as well as neutrino masses, and make use of this \(\gamma_{\bar{f}f,IR}\simeq 1\) property, are discussed, e.g., in [65]. With fermions in a single representation of the gauge group, such as SU(3) with \(N_{F}=8\) fermions in the fundamental representation, lattice simulations [5]-[7] have found an anomalous dimension \(\gamma_{m}\sim 1\) for the strongly coupled fermion and have shown that the spectrum of the theory includes a light \(0^{++}\) state consistent with being an approximate dilaton. Lattice simulations have also been carried out for other models, including an SU(3) theory with two flavors of fermions in the sextet representation [17]. We recall that a rigorous upper bound on \(\gamma_{\bar{f}f,IR}\) in a conformal field theory is that [66; 67; 68] \[\gamma_{\bar{f}f,IR}\leq 2\, \tag{27}\] where here, \(f\) refers to any fermion in the theory. This is evidently less restrictive than the bound (2.22) and need not be saturated at the lower boundary \({\cal B}_{CW,\ell}\) of the conformal window. In passing, it should be mentioned that an approximate condition for spontaneous chiral symmetry breaking via formation of the condensate \(\langle\bar{f}f\rangle\) derived from analysis of the Schwinger-Dyson equation for the \(f\) fermion propagator is that this occurs as the coupling \(\alpha\) exceeds the value \(\alpha_{cr}=\pi/(3C_{f})\). As applied to estimate the lower boundary of the conformal window, this would be a condition on the value of \(\alpha_{IR}\) at the IRFP as one approaches this lower boundary. While this is a reasonable rough guide, the maximal scheme-independent level to which it can be applied is the two-loop level, since the value of the \(n\)-loop (\(n\ell\)) IR coupling \(\alpha_{IR,n\ell}\) at higher-loop level, as calculated from the beta function (2.2), is scheme-dependent.
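To make these last two remarks concrete, a short numerical aside (added here for illustration; it uses only the one- and two-loop coefficients quoted in Sec. II): from the two-loop truncation of the beta-function series, the IR zero sits at \(\alpha_{IR,2\ell}=-4\pi b_{1}/b_{2}\), which is scheme-independent, while \(\alpha_{cr}=\pi/(3C_{f})\) is the rough Schwinger-Dyson estimate just mentioned.

```python
import math

def alpha_IR_two_loop(b1, b2):
    """IR zero of the two-loop beta function: b1 + b2*a = 0, with alpha = 4*pi*a."""
    return -4 * math.pi * b1 / b2

def alpha_critical(C2f):
    """Rough Schwinger-Dyson estimate alpha_cr = pi/(3*C_2(f))."""
    return math.pi / (3 * C2f)

# SU(3) with N_F = 9 fundamental flavors: b1 = (1/3)(33 - 18) = 5 and b2 = -12
print(round(alpha_IR_two_loop(5, -12), 2))   # 5.24, the value quoted below
# Rough critical coupling for a fundamental fermion of SU(3), with C_2(F) = 4/3
print(round(alpha_critical(4 / 3), 2))       # ~0.79
```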
Furthermore, as one approaches the strongly coupled regime near the lower end of the conformal window, the value of the IR zero of the \(n\)-loop beta function, \(\alpha_{IR,n\ell}\), changes substantially as one goes from two-loop order to higher-loop order. For example, as listed in Table II of [47], for SU(3) with \(N_{F}=9\) fermions in the fundamental representation, \(\alpha_{IR,2\ell}=5.24\), while \(\alpha_{IR,3\ell}=1.03\), as calculated in the widely used \(\overline{\rm MS}\) scheme. Consequently, here we focus on the \(\gamma\)CC condition (25), since it can be applied in a scheme-independent manner. ## III Scheme-independent calculation of anomalous dimensions of fermion bilinear operators In this section, for a theory with an SU(\(N_{c}\)) gauge group and (massless) fermion content consisting of \(N_{F}\) fermions in the fundamental representation, \(F\), and \(N_{A_{2}}\) fermions in the antisymmetric rank-2 representation, \(A_{2}\), we present explicit calculations of the coefficients \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) with \(j=1,2,3\) using the scheme-independent expansions of the anomalous dimensions \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) in Eqs. (2.15) and the analogue for \(\gamma_{\bar{\chi}\chi,IR}\) with \(1\leq j\leq 3\). These yield the anomalous dimensions \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) up to \(O(\Delta_{F}^{3})\) and \(O(\Delta_{A_{2}}^{3})\), respectively. It is convenient to define factors that occurs repeatedly in the denominators of \(\gamma_{IR,F}\) and \(\gamma_{IR,A_{2}}\), namely \[{\cal D}_{F}=N_{c}(25N_{c}^{2}-11)+2N_{A_{2}}(N_{c}-2)(N_{c}+1)(N_{c}-3) \tag{3.1}\] and \[{\cal D}_{A_{2}}=N_{c}(18N_{c}^{2}-11N_{c}-22)-N_{F}(N_{c}-3)(N_{c}+1)\;. \tag{3.2}\] For the first two coefficients we calculate \[\kappa_{1}^{(F)}=\frac{4(N_{c}^{2}-1)}{{\cal D}_{F}}\;, \tag{3.3}\] \[\kappa_{1}^{(A_{2})}=\frac{4(N_{c}-2)^{2}(N_{c}+1)}{{\cal D}_{A_{2}}}\;, \tag{3.4}\] \[\kappa_{2}^{(F)}=\frac{4(N_{c}^{2}-1)}{3{\cal D}_{F}^{3}}\left[N_{c}(9N_{c}^{2 }-2)(49N_{c}^{2}-44)+4N_{A_{2}}(N_{c}-2)(N_{c}+1)(N_{c}-3)(3N_{c}-2)(5N_{c}+3) \right]\,, \tag{3.5}\] and \[\kappa_{2}^{(A_{2})}=\frac{(N_{c}-2)^{3}(N_{c}+1)}{3{\cal D}_{A_{2}}^{3}} \left[N_{c}(11N_{c}^{2}-4N_{c}-8)(93N_{c}^{2}-88N_{c}-176)-2N_{F}(N_{c}-3)(N_{ c}+1)(37N_{c}^{2}-16N_{c}-33)\right]\,. 
\tag{3.6}\] Following our notation in [8], we write the third-order coefficients in the form \[\kappa_{3}^{(F)}=\frac{8(N_{c}^{2}-1)}{27{\cal D}_{F}^{5}}\Bigg{[}A_{0}^{(F) }+A_{1}^{(F)}N_{A_{2}}+A_{2}^{(F)}N_{A_{2}}^{2}+A_{3}^{(F)}N_{A_{2}}^{3}\Bigg{]} \tag{3.7}\] and \[\kappa_{3}^{(A_{2})}=\frac{(N_{c}-2)^{3}(N_{c}+1)}{54{\cal D}_{A_{2}}^{5}} \Bigg{[}A_{0}^{(A_{2})}+A_{1}^{(A_{2})}N_{F}+A_{2}^{(A_{2})}N_{F}^{2}+A_{3}^{ (A_{2})}N_{F}^{3}\Bigg{]}\;, \tag{3.8}\] and we calculate \[A_{0}^{(F)}=N_{c}^{2}\bigg{[}\Big{(}274243N_{c}^{8}-455426N_{c}^{6}-114080N_{ c}^{4}+47344N_{c}^{2}+35574\Big{)}-4224N_{c}^{2}(4N_{c}^{2}-11)(25N_{c}^{2}-11 )\zeta_{3}\,\bigg{]}\;, \tag{3.9}\] \[A_{1}^{(F)} = 4N_{c}(N_{c}-2)(N_{c}-3)\bigg{[}\Big{(}16981N_{c}^{7}+35460N_{c}^ {6}+42927N_{c}^{5}+47342N_{c}^{4}+9432N_{c}^{3}-12849N_{c}^{2} \tag{3.10}\] \[- 18843N_{c}-11616\Big{)}-576N_{c}^{2}\Big{(}25N_{c}^{4}+198N_{c}^{ 3}+187N_{c}^{2}-121N_{c}-121\Big{)}\zeta_{3}\,\bigg{]}\;,\] \[A_{2}^{(F)} = 8(N_{c}-2)(N_{c}-3)\bigg{[}\Big{(}689N_{c}^{8}-1402N_{c}^{7}-9208N _{c}^{6}-15693N_{c}^{5}-9219N_{c}^{4}+16662N_{c}^{3}+19860N_{c}^{2}\] \[+ 10617N_{c}+5598\Big{)}-192N_{c}^{2}\Big{(}3N_{c}^{5}-65N_{c}^{4}-238N_{c}^{3}-1 65N_{c}^{2}+231N_{c}+198\Big{)}\zeta_{3}\ \Big{]}\,\] \[A_{3}^{(F)}=128N_{c}(N_{c}-2)^{2}(N_{c}-3)^{2}(N_{c}+1)(3N_{c}^{2} +7N_{c}+6)(-11+24\zeta_{3})\,\] \[A_{0}^{(A_{2})} = N_{c}^{2}\bigg{[}\Big{(}1670571N_{c}^{9}-7671402N_{c}^{8}+2181584 N_{c}^{7}+25294256N_{c}^{6}-13413856N_{c}^{5} \tag{3.13}\] \[- 17539136N_{c}^{4}+16707328N_{c}^{3}+3046912N_{c}^{2}-27320832N_{c }-18213888\Big{)}\] \[- 8448N_{c}^{2}(N_{c}+2)(18N_{c}^{2}-11N_{c}-22)(3N_{c}^{3}-28N_{c }^{2}+176)\zeta_{3}\,\bigg{]}\,\] \[A_{1}^{(A_{2})} = -4N_{c}(N_{c}-3)\bigg{[}\Big{(}60552N_{c}^{8}-150015N_{c}^{7}-373 894N_{c}^{6}+138737N_{c}^{5}+300380N_{c}^{4}+421197N_{c}^{3}+768345N_{c}^{2} \tag{3.14}\] \[+ 858660N_{c}+435468\Big{)}-192N_{c}^{2}\Big{(}141N_{c}^{5}-2075N_{ c}^{4}-6226N_{c}^{3}+1056N_{c}^{2}+17424N_{c}+11616\Big{)}\zeta_{3}\,\bigg{]}\,\] \[A_{2}^{(A_{2})} = 8(N_{c}-3)\bigg{[}\Big{(}1148N_{c}^{8}-3919N_{c}^{7}-17365N_{c}^{ 6}-5724N_{c}^{5}+35724N_{c}^{4}+84915N_{c}^{3}+70641N_{c}^{2} \tag{3.15}\] \[+ 32928N_{c}+15588\Big{)}-192N_{c}^{2}\Big{(}3N_{c}^{5}-164N_{c}^{ 4}-271N_{c}^{3}+396N_{c}^{2}+1320N_{c}+792\Big{)}\zeta_{3}\,\bigg{]}\,\] and \[A_{3}^{(A_{2})}=-128N_{c}(N_{c}+1)(N_{c}-3)^{2}(3N_{c}^{2}+7N_{c}+6)(-11+24 \zeta_{3}). \tag{3.16}\] Here, \(\zeta_{s}=\sum_{n=1}^{\infty}n^{-s}\) is the Riemann zeta function, and \(\zeta_{3}=1.2020569...\) We have remarked above on the reason for the occurrence of \((N_{c}-2)\) factors (or powers thereof) in conjunction with the numbers \(N_{A_{2}}\). The occurrence of \((N_{c}-3)\) factors in various expressions reflects the reduction of the theory to one with \(N_{F}+N_{A_{2}}\) fermions in the fundamental representation in the case \(N_{c}=3\). We straightforwardly check that our general-\(N_{c}\) results above satisfy the identities (2.21). ## IV SU(4) theory In this section, for the case \(N_{c}=4\), i.e., \(G=\) SU(4), we list the special cases of the general-\(N_{c}\) expressions for the coefficients \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) with \(1\leq j\leq 3\) and the resultant expressions for \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{p}}\) with \(1\leq p\leq 3\). 
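As a quick numerical cross-check of this specialization (a sketch added here, not part of the original computation), the general-\(N_{c}\) expressions for \(\kappa_{1}\) and \(\kappa_{2}\) given in Section III can be coded directly, the \(N_{c}=3\) degeneracy of Eq. (2.21) verified, and the \(N_{c}=4\) values compared with the closed forms listed below.

```python
from fractions import Fraction as Fr

def D_F(Nc, NA2):
    return Nc * (25 * Nc**2 - 11) + 2 * NA2 * (Nc - 2) * (Nc + 1) * (Nc - 3)

def D_A2(Nc, NF):
    return Nc * (18 * Nc**2 - 11 * Nc - 22) - NF * (Nc - 3) * (Nc + 1)

def kappa1_F(Nc, NA2):
    return Fr(4 * (Nc**2 - 1), D_F(Nc, NA2))

def kappa1_A2(Nc, NF):
    return Fr(4 * (Nc - 2)**2 * (Nc + 1), D_A2(Nc, NF))

def kappa2_F(Nc, NA2):
    num = (Nc * (9 * Nc**2 - 2) * (49 * Nc**2 - 44)
           + 4 * NA2 * (Nc - 2) * (Nc + 1) * (Nc - 3) * (3 * Nc - 2) * (5 * Nc + 3))
    return Fr(4 * (Nc**2 - 1) * num, 3 * D_F(Nc, NA2)**3)

def kappa2_A2(Nc, NF):
    num = (Nc * (11 * Nc**2 - 4 * Nc - 8) * (93 * Nc**2 - 88 * Nc - 176)
           - 2 * NF * (Nc - 3) * (Nc + 1) * (37 * Nc**2 - 16 * Nc - 33))
    return Fr((Nc - 2)**3 * (Nc + 1) * num, 3 * D_A2(Nc, NF)**3)

# Nc = 3 degeneracy: the F and A2 coefficients coincide, as noted in Sec. II
assert kappa1_F(3, 2) == kappa1_A2(3, 2) and kappa2_F(3, 2) == kappa2_A2(3, 2)

# Nc = 4 with N_F = N_A2 = 4, for comparison with the closed forms listed below
print(kappa1_F(4, 4), kappa2_F(4, 4))    # 15/409 and 142850/409**3
print(kappa1_A2(4, 4), kappa2_A2(4, 4))  # 20/217 and 117475/217**3
```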
### Direct Calculations of Anomalous Dimensions For \(N_{c}=4\), the upper end of the conformal window, defined by the condition \(b_{1}>0\), is \[N_{F}+2N_{A_{2}}<22\, \tag{4.1}\] and the condition that the two-loop beta function should have an IR zero is \[205N_{F}+440N_{A_{2}}>2176. \tag{4.2}\] As before, we denote the region in the first quadrant of the \((N_{F},N_{A_{2}})\) plane where the inequalities (4.1) and (4.2) are simultaneously satisfied as \(I_{IRZ}\). The lines that are the upper and lower boundaries of the \(I_{IRZ}\) region have slopes that are almost equal. From Eq. (2.10), the upper boundary, i.e., the solution to the condition \(b_{1}=0\) has slope \(dN_{A_{2}}/dN_{F}=-1/2\), while from Eq. (2.11), the lower \(I_{IRZ}\) boundary, i.e., the solution to the condition \(b_{2}=0\), has slope \(dN_{A_{2}}/dN_{F}=-41/88=-0.4659\). Regarding the figures to be presented below, we note that if one sets \(N_{F}=4\), then the \(I_{IRZ}\) interval in \(N_{A_{2}}\) is \(3.082<N_{A_{2}}<9\) and if one sets \(N_{A_{2}}=4\), then the \(I_{IRZ}\) interval in \(N_{F}\) is \(2.029<N_{F}<14\). In the (4,4,4) theory, \(\Delta_{F}=10\) and \(\Delta_{A_{2}}=5\). Substituting \(N_{c}=4\) in the results for \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) given for \(G={\rm SU}(N_{c})\) in the previous section, we find the following: \[\kappa_{1}^{(F)}=\frac{15}{389+5N_{A_{2}}}\, \tag{4.3}\] \[\kappa_{2}^{(F)}=\frac{25(5254+115N_{A_{2}})}{(389+5N_{A_{2}})^{3}}\, \tag{4.4}\] \[\kappa_{1}^{(A_{2})}=\frac{80}{888-5N_{F}}\, \tag{4.5}\] and \[\kappa_{2}^{(A_{2})}=\frac{400(19456-165N_{F})}{(888-5N_{F})^{3}}\, \tag{4.6}\] together with \[\kappa_{3}^{(F)}=\frac{5}{36(389+5N_{A_{2}})^{5}}\left[(8039476475-696689664\zeta_{3})+(479848740-197766144\zeta_{3})N_{A_{2}}+(-16264767+46568448\zeta_{3})N_{A_{2}}^{2}+(-288640+629760\zeta_{3})N_{A_{2}}^{3}\right]\, \tag{4.7}\] and \[\kappa_{3}^{(A_{2})}=\frac{640}{27(888-5N_{F})^{5}}\left[(28645111296+7201751040\zeta_{3})-(120552246+1055342592\zeta_{3})N_{F}+(-12526131+33675264\zeta_{3})N_{F}^{2}+(72160-157440\zeta_{3})N_{F}^{3}\right]\. \tag{4.8}\] For the theory with \(N_{c}=4\), i.e., \(G={\rm SU}(4)\), \(N_{F}=4\), and \(N_{A_{2}}=4\), our general expressions yield the following (where floating-point values are quoted to the indicated precision): \[\kappa_{1}^{(F)}=\frac{15}{409}=3.6675\times 10^{-2}\, \tag{4.9}\] \[\kappa_{2}^{(F)}=\frac{142850}{(409)^{3}}=2.0879\times 10^{-3}\, \tag{4.10}\] \[\kappa_{3}^{(F)}=\frac{48400811015}{36\cdot(409)^{5}}-\frac{715520}{3\cdot(409)^{4}}\,\zeta_{3}=1.0723\times 10^{-4}\, \tag{4.11}\] \[\kappa_{1}^{(A_{2})}=\frac{20}{217}=0.092166\, \tag{4.12}\] \[\kappa_{2}^{(A_{2})}=\frac{117475}{(217)^{3}}=1.1497\times 10^{-2}\, \tag{4.13}\] and \[\kappa_{3}^{(A_{2})}=\frac{17479439035}{27\cdot(217)^{5}}+\frac{3368960}{9\cdot(217)^{4}}\,\zeta_{3}=1.5484\times 10^{-3}\, \tag{4.14}\] where partial factorizations are shown for denominators. Substituting these coefficients into Eqs.
(2.16) and (2.17), with \(\Delta_{F}=2\Delta_{A_{2}}=10\) for this (4,4,4) theory, we have \[\gamma_{\bar{\psi}\psi,IR,\Delta_{F}}=0.367\, \tag{4.15}\] \[\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{2}}=0.576\, \tag{4.16}\] \[\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}}=0.683\, \tag{4.17}\] \[\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}}=0.461\, \tag{4.18}\] \[\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{2}}=0.748\, \tag{4.19}\] and \[\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}=0.942. \tag{4.20}\] Because \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) are positive for all of the orders \(j=1,2,3\) for which we have calculated them, several monotonicity relations follow. These are the analogues, in the current theory, of the relations noted in [8]. First, for these orders, with fixed \(\Delta_{F}=2\Delta_{A_{2}}\), the anomalous dimensions \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{p}}\) are monotonically increasing functions of \(p\). Second, for a fixed \(p\), \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) is a monotonically increasing function of \(\Delta_{F}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta^{p}_{A_{2}}}\) is a monotonically increasing function of \(\Delta_{A_{2}}\). Since finite-order perturbative calculations of this type tend to become progressively less accurate as one approaches the lower boundary \(\mathcal{B}_{CW,\ell}\) of the conformal window, one is motivated to assess the effect of higher-order corrections. One approach for this purpose is to perform a rough extrapolation (ext) of our results for \(p=1,2,3\) to \(p=\infty\). This yields values for \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) that we estimate to be approximately 10-20% larger than our respective \(\gamma_{\bar{\psi}\psi,IR,\Delta^{3}_{F}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta^{3}_{A_{2}}}\) values, namely \(\gamma_{\bar{\psi}\psi,IR,\text{ext}}\simeq 0.7-0.8\) and \(\gamma_{\bar{\chi}\chi,IR,\text{ext}}\simeq 1.0-1.1\). We next compare these results with the values obtained from lattice simulations in Ref. [11], namely \(\gamma^{(4)}_{m}\simeq 0.75\) and \(\gamma^{(6)}_{m}\simeq 1.0\). (Recall the equivalences of notation for this SU(4) theory: \(\gamma^{(4)}_{m}\equiv\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma^{(6)}_{m}\equiv\gamma_{\bar{\chi}\chi,IR}\).) To within the uncertainties in our extrapolation and in the lattice measurements, our calculations are in agreement with these values of anomalous dimensions obtained in [11]. This agreement between our perturbative calculations, which require an IR fixed point in the conformal window, and the values of these anomalous dimensions measured in the lattice simulations, is consistent with the conclusion in [11] that this theory is in the conformal window (near the lower boundary, since the measured \(\gamma^{(6)}_{m}\simeq 1\) and our \(\gamma_{\bar{\chi}\chi,IR,\text{ext}}\simeq 1\)). A cautionary remark is that the uncertainties in the perturbative calculation of anomalous dimensions are substantial at the lower end of the conformal window, as are the uncertainties in our rough extrapolation. ### Calculation of Pade Approximants for Anomalous Dimensions Another useful approach to estimating anomalous dimensions from finite series expansions is the use of Pade approximants, and we have used these in our earlier work for theories with fermions in a single representation [47; 48; 49; 52; 36].
Given a series expansion calculated to a finite order, a \([p,q]\) Pade approximant is a rational function with numerator and denominator having respective degrees \(p\) and \(q\) in the expansion variable and satisfying the property that the Taylor series expansion of this rational function fits all of the coefficients in the original series expansion [70; 69]. In our present context, for a given fermion \(f\) (equal to \(\psi\) in the \(F\) representation or \(\chi\) in the \(A_{2}\) representation of SU(4)), let us consider the scheme-independent expansion calculated to order \(s\): \[\gamma_{\bar{f}f,IR,\Delta^{s}_{f}}=\sum_{j=1}^{s}\kappa^{(f)}_{j}\Delta^{j}_{f}=\kappa^{(f)}_{1}\Delta_{f}\Big{[}1+\frac{1}{\kappa^{(f)}_{1}}\sum_{j=2}^{s}\kappa^{(f)}_{j}\Delta^{j-1}_{f}\Big{]}\. \tag{4.21}\] We calculate the \([p,q]\) Pade approximant to the expression in square brackets, which has the form \[\gamma_{\bar{f}f,IR,[p,q]}=\kappa^{(f)}_{1}\Delta_{f}\bigg{[}\frac{1+\sum_{i=1}^{p}\mathcal{N}_{f,i}\Delta^{i}_{f}}{1+\sum_{j=1}^{q}D_{f,j}\Delta^{j}_{f}}\bigg{]}\, \tag{4.22}\] where \(p+q=s-1\). With \(s=3\), the possible Pade approximants to the expression in square brackets are then [2,0], [1,1], and [0,2]. The [2,0] approximant is just the original series, which we have already used to calculate \(\gamma_{\bar{f}f,IR,\Delta^{3}_{f}}\) for \(f=\psi\) and \(f=\chi\), so we focus on the [1,1] and [0,2] approximants here. In addition to providing a closed-form rational-function approximation to the finite series (4.21), a Pade approximant also can be used in another way, namely to yield an estimate of the effects of higher-order terms. By construction, the \([p,q]\) Pade approximant in (4.22) is analytic at \(\Delta_{f}=0\), and if it has \(q>0\), then it is a meromorphic function with \(q\) poles. The radius of convergence of the Taylor series expansion of the \([p,q]\) Pade approximant is set by the magnitude of the pole nearest to the origin in the complex \(\Delta_{f}\) plane. Consequently, a necessary condition that must be satisfied for a Pade approximant to be useful for our analysis here is that, considered as a function of the general variable \(\Delta_{f}\), it should not have a pole at any \(\Delta_{f,\text{pole}}\) that is closer to the origin than the actual value of \(\Delta_{f}\) in the theory of interest. We recall that for the (4,4,4) theory, \(\Delta_{F}=10\) and \(\Delta_{A_{2}}=5\). A further caveat with this method is that if a \([p,q]\) Pade approximant has a pole that is near to the physical value of the expansion parameter, even if it is farther from the origin, this might produce a spuriously large value of the anomalous dimension. We focus here on approximants for \(\gamma_{\bar{\chi}\chi,IR}\), since it is larger than \(\gamma_{\bar{\psi}\psi,IR}\). We calculate the following [1,1] and [0,2] Pade approximants for this anomalous dimension: \[\gamma_{\bar{\chi}\chi,IR,[1,1]}=0.092166\Delta_{A_{2}}\bigg{[}\frac{1-0.0099444\Delta_{A_{2}}}{1-0.13468\Delta_{A_{2}}}\bigg{]}\, \tag{4.23}\] and \[\gamma_{\bar{\chi}\chi,IR,[0,2]}=0.092166\Delta_{A_{2}}\Bigg{[}\frac{1}{1-0.12474\Delta_{A_{2}}-0.0012404\Delta_{A_{2}}^{2}}\Bigg{]}. \tag{4.24}\] The approximant \(\gamma_{\bar{\chi}\chi,IR,[1,1]}\) has a pole at \(\Delta_{A_{2}}=7.42492\), and \(\gamma_{\bar{\chi}\chi,IR,[0,2]}\) has poles at \(\Delta_{A_{2}}=7.46299\) and \(-108.022\).
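The coefficients and truncations quoted above, as well as these two Pade approximants, can be reproduced with a short script (an illustrative sketch added here, using floating-point arithmetic; small differences in the last quoted digits are rounding effects):

```python
import math

zeta3 = 1.2020569031595943

# (4,4,4) coefficients, taken from the closed forms quoted in this section
k1F, k2F = 15 / 409, 142850 / 409**3
k3F = 48400811015 / (36 * 409**5) - 715520 * zeta3 / (3 * 409**4)
k1A, k2A = 20 / 217, 117475 / 217**3
k3A = 17479439035 / (27 * 217**5) + 3368960 * zeta3 / (9 * 217**4)
print(f"{k3F:.4e} {k3A:.4e}")             # ~1.0723e-04 and ~1.5484e-03

dF, dA = 10.0, 5.0                        # Delta_F and Delta_A2 for (4,4,4)

# Truncated anomalous dimensions at orders p = 1, 2, 3
gF = [k1F * dF, k1F * dF + k2F * dF**2, k1F * dF + k2F * dF**2 + k3F * dF**3]
gA = [k1A * dA, k1A * dA + k2A * dA**2, k1A * dA + k2A * dA**2 + k3A * dA**3]
print([round(x, 3) for x in gF])          # [0.367, 0.576, 0.683]
print([round(x, 3) for x in gA])          # [0.461, 0.748, 0.942]

# Pade approximants to the bracketed series 1 + r1*D + r2*D^2 for gamma_{chi-bar chi}
r1, r2 = k2A / k1A, k3A / k1A
d1 = -r2 / r1                             # [1,1]: (1 + n1*D)/(1 + d1*D)
n1 = r1 + d1
e1, e2 = -r1, r1**2 - r2                  # [0,2]: 1/(1 + e1*D + e2*D^2)
print(round(-1 / d1, 3))                  # [1,1] pole near Delta_A2 = 7.425
print(sorted(round((-e1 + s * math.sqrt(e1**2 - 4 * e2)) / (2 * e2), 2)
             for s in (+1, -1)))          # [0,2] poles near -108 and 7.46
print(round(k1A * dA * (1 + n1 * dA) / (1 + d1 * dA), 2),
      round(k1A * dA / (1 + e1 * dA + e2 * dA**2), 2))   # ~1.34 and ~1.33
```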
These pole locations are farther from the origin than the physical value \(\Delta_{A_{2}}=5\), although the poles at \(7.42\) and \(7.46\) are moderately close to the physical value, \(\Delta_{A_{2}}=5\). Evaluating these approximants at this value of \(\Delta_{A_{2}}\), we obtain \(\gamma_{\bar{\chi}\chi,IR,[1,1]}=1.34\) and \(\gamma_{\bar{\chi}\chi,IR,[0,2]}=1.33\), somewhat larger than our rough extrapolations discussed above. ### Estimates of Lower Boundary of Conformal Window Ref. [11] observed that its conclusion that the \((N_{c},N_{F},N_{A_{2}})=(4,4,4)\) theory has an IRFP, and is thus in the conformal window, disagreed with the lower boundary \({\cal B}_{CW,\ell}\) of the conformal window presented in [12] on the basis of the \(\gamma\)CC condition in quadratic form (2.24). As noted in [11], the (4,4,4) theory is below the lower boundary of the conformal window from the \(\gamma\)CC condition shown in Fig. 4 of [12] as a function of \((N_{F},N_{A_{2}})\) and reproduced in Fig. 1 of [11]. (In referring to Fig. 4 in [12], we remind the reader that the symbol \(N_{A_{2}}\) used in that figure is the number of Majorana \(A_{2}\) fermions and hence is equal to \(2N_{A_{2}}\) in our notation, where our \(N_{A_{2}}\) is the number of Dirac \(A_{2}\) fermions.) So the implication from the lower boundary \({\cal B}_{CW,\ell}\) in [12] is that the (4,4,4) theory is in the chirally broken phase, not in the conformal window. To investigate this further, we have performed an alternative calculation of \({\cal B}_{CW,\ell}\) using our results for \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}\) in conjunction with the linear \(\gamma\)CC critical condition \[\max(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}},\ \gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}})=1. \tag{4.25}\] In applying this condition, the maximal anomalous dimension is \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}\), which is larger here, for a given \((N_{F},N_{A_{2}})\), than \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}}\). Therefore, Eq. (4.25) reduces to \[\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}=1. \tag{4.26}\] We show our results in Fig. 1. The uppermost line (colored blue) is the upper boundary \({\cal B}_{CW,u}={\cal B}_{IRZ,u}\) of the conformal window, given by the condition \(b_{1}=0\). The locations of the lower boundary \({\cal B}_{CW,\ell}\) as calculated in [12] from the quadratic \(\gamma\)CC condition (colored green), and as calculated here from the linear \(\gamma\)CC condition (4.25), which reduces to (4.26) (colored red), are shown. Both of these calculations of \({\cal B}_{CW,\ell}\) use the \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) calculated to order \(j=3\) from the general results in [8]. The dashed line is the solution locus of the equation \(b_{2}=0\) and is the lower boundary \({\cal B}_{IRZ,\ell}\) of the IRZ region. For general \(N_{c}\) and, in particular, for \(N_{c}=4\), the conditions (4.25) and (4.26) are nonlinear equations in the variables \(N_{F}\) and \(N_{A_{2}}\), but the coefficients of the nonlinear terms are small compared to the coefficients of the linear terms and get smaller as the total degree \(s+t\) of a term \(N_{F}^{t}N_{A_{2}}^{t}\) increases, so that the solution locus is close to being linear in the \((N_{F},N_{A_{2}})\) plane. As is evident in Fig. 1, the use of the linear form of the \(\gamma\)CC in Eq.
(4.26) yields a boundary \({\cal B}_{CW,\ell}\) that lies to the lower left of the boundary \({\cal B}_{CW,\ell}\) obtained with the use of the quadratic \(\gamma\)CC condition in the \((N_{F},N_{A_{2}})\) plane. As was discussed in Sect. II.4, this is a consequence of the fact that the quadratic \(\gamma\)CC condition (2.24) generates higher-order terms in powers of the scheme-independent expansion variables and leads to different coefficients for lower-order terms.
Figure 1: Plot of regions and boundaries in the \((N_{F},N_{A_{2}})\) plane for \(G=\mathrm{SU}(4)\). The upper solid line (colored blue) is the solution locus of the equation \(b_{1}=0\) and is the upper boundary \({\cal B}_{CW,u}\) of the conformal window. The plot shows the locations of the boundary \({\cal B}_{CW,\ell}\), as calculated in [12] from the quadratic \(\gamma\)CC condition (colored green), and as calculated here from the linear \(\gamma\)CC condition (colored red). The dashed line is the solution locus of the equation \(b_{2}=0\) and is the lower boundary \({\cal B}_{IRZ,\ell}\) of the IRZ region.
With \({\cal B}_{CW,\ell}\) as determined from (4.26) and shown in Fig. 1, the (4,4,4) theory is within the conformal window, close to the lower boundary. Along the diagonal \(N_{F}=N_{A_{2}}\), the boundary \({\cal B}_{CW,\ell}\) calculated from (4.26) crosses the point \((N_{F},N_{A_{2}})=(3.88,3.88)\), slightly to the lower left of the point \((N_{F},N_{A_{2}})=(4,4)\). Therefore, with \({\cal B}_{CW,\ell}\) computed via the linear form of the \(\gamma\)CC condition, (4.25) or (4.26), the (4,4,4) theory is in the conformal window. This is in accord with our result that at cubic order in the scheme-independent expansion coefficients, the values of anomalous dimensions that we obtain, namely \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}=0.942\) in Eq. (4.20) and \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}}=0.683\) in Eq. (4.17) in the (4,4,4) theory, are both less than 1. Our comparative analysis showing the difference in the location of the boundary \({\cal B}_{CW,\ell}\) as computed via the quadratic \(\gamma\)CC condition in [12] and as computed via the linear \(\gamma\)CC condition here (with inputs for the \(\kappa_{j}^{(A_{2})}\) and \(\kappa_{j}^{(F)}\) calculated up to the same maximal order, \(j=3\)) provides a quantitative measure of the importance of higher-order terms in the scheme-independent expansions and hence the uncertainty in the determination of the location of \({\cal B}_{CW,\ell}\). This comparison makes it clear that these higher-order corrections are significant. Since the anomalous dimensions increase as one moves downward within the conformal window toward the lower boundary \({\cal B}_{CW,\ell}\), the linear form of the \(\gamma\)CC condition implies that for any theory below this lower boundary, at least some fermion \(f\) has an anomalous dimension \(\gamma_{\bar{f}f,IR}\) that is larger than 1, where here, \(\{f\}=\{\psi,\ \chi\}\), i.e., \(\{F,A_{2}\}\). A peculiar feature of the quadratic form of the \(\gamma\)CC condition is that if one uses it to determine \({\cal B}_{CW,\ell}\) with input coefficients \(\kappa_{j}^{(f)}\) calculated to the same maximal order as with the linear \(\gamma\)CC condition, then this boundary \({\cal B}_{CW,\ell}\) from the quadratic \(\gamma\)CC condition has the property that there are theories that lie outside the conformal window but in which all fermions \(f\) have anomalous dimensions \(\gamma_{\bar{f}f,IR}\) that are less than 1.
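The diagonal crossing point quoted above can be checked with a simple bisection on the linear \(\gamma\)CC condition (an illustrative sketch added here, not part of the original calculation; it uses the closed-form SU(4) expressions for the \(\kappa_{j}^{(A_{2})}\) given earlier):

```python
zeta3 = 1.2020569031595943

def kappa_A2(NF):
    """SU(4) coefficients kappa_j^{(A2)}, j = 1, 2, 3, from the closed forms above."""
    D = 888 - 5 * NF
    k1 = 80 / D
    k2 = 400 * (19456 - 165 * NF) / D**3
    k3 = (640 / (27 * D**5)) * ((28645111296 + 7201751040 * zeta3)
                                - (120552246 + 1055342592 * zeta3) * NF
                                + (-12526131 + 33675264 * zeta3) * NF**2
                                + (72160 - 157440 * zeta3) * NF**3)
    return k1, k2, k3

def gamma_chichi_cubic(NF, NA2):
    """gamma_{chi-bar chi, IR} truncated at O(Delta_A2^3) for SU(4)."""
    k1, k2, k3 = kappa_A2(NF)
    D = (44 - 2 * NF - 4 * NA2) / 4.0     # Delta_A2 at Nc = 4
    return k1 * D + k2 * D**2 + k3 * D**3

# Bisection for the linear gamma-CC boundary along the diagonal N_F = N_A2
lo, hi = 3.0, 6.0                         # gamma > 1 at lo, gamma < 1 at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gamma_chichi_cubic(mid, mid) > 1.0 else (lo, mid)
print(round(hi, 2))                       # ~3.88, the diagonal crossing quoted above
```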
This peculiar situation occurs in the present case: our direct calculation of \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) in the (4,4,4) theory, with input values for the \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) computed to the \(j=3\) order, yields values for \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{3}}\) and \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{3}}\) that are both less than 1, but the point \((N_{F},N_{A_{2}})=(4,4)\) lies outside the conformal window, as calculated in [12] via the quadratic \(\gamma\)CC condition with the same inputs for \(\kappa_{j}^{(F)}\) and \(\kappa_{j}^{(A_{2})}\) computed up to order \(j=3\). To investigate the behavior of \(\gamma_{\bar{\psi}\psi,IR,\Delta_{F}^{p}}\) and \(\gamma_{\bar{\chi}\chi,IR,\Delta_{A_{2}}^{p}}\) for \(p=1,2,3\) further in this SU(4) theory, we calculate how they vary as functions of \(N_{F}\) and \(N_{A_{2}}\), in particular, when one sets \(N_{F}=4\) and varies \(N_{A_{2}}\) or one sets \(N_{A_{2}}=4\) and varies \(N_{F}\). These intervals are, respectively, a vertical and a horizontal line segment in the \((N_{F},N_{A_{2}})\) plane, which both pass through the point of primary interest, \((N_{F},N_{A_{2}})=(4,4)\). Our results are presented in Figs. 2-5. As one can see from Figs. 4 and 5, for \(N_{F}=4\), this yields the value \(N_{A_{2}}=3.8\) as being on \({\cal B}_{CW,\ell}\) calculated from the linear \(\gamma\)CC condition (4.26), and for \(N_{A_{2}}=4\), it yields the value \(N_{F}=3.6\) as being on this boundary. These calculations thus serve as a check on our calculation of the boundary \({\cal B}_{CW,\ell}\) from the linear \(\gamma\)CC condition (4.26), since one can verify that this boundary does pass through the points \((N_{F},N_{A_{2}})=(4.0,3.8)\) and (3.6,4.0). One of the interesting features of this SU(4) theory is that the gauge-singlet particle spectrum contains composite fermion(s), \(\{f_{s}\}\). The lattice simulations in [11] yield anomalous dimensions for several composite-fermion operators, which are found to be \(\lesssim 0.5\), smaller than desired for models of a partially composite top quark. Comparison is made with one-loop perturbative calculations of the anomalous dimensions for these gauge-singlet composite fermion operators. In future work, it could be useful to carry out higher-order scheme-independent perturbative calculations of the anomalous dimensions for these composite-fermion operators. This is beyond the scope of our present work, since the requisite higher-order coefficients \(c_{\ell}^{(f_{s})}\) in the conventional series expansions (2.12) have not, to our knowledge, been calculated. ## V Conclusions In this paper we have used our general results in [8] to calculate scheme-independent expansions for anomalous dimensions \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) of the fermion bilinear operators \(\bar{\psi}\psi\) and \(\bar{\chi}\chi\) at an infrared fixed point in an asymptotically free SU(\(N_{c}\)) gauge theory with massless fermion content consisting of \(N_{F}\) fermions \(\psi_{i}^{a}\) in the fundamental representation and \(N_{A_{2}}\) fermions \(\chi_{j}^{ab}\) in the antisymmetric rank-2 tensor representation. These calculations were performed to the highest order, namely cubic order in the respective expansion variables \(\Delta_{F}\) and \(\Delta_{A_{2}}\), for which the necessary inputs are available.
We have taken the special case \(N_{c}=4\) and compared the results with values of these anomalous dimensions in an SU(4) theory with \(N_{F}=4\) and \(N_{A_{2}}=4\) from a lattice simulation in [11]. We find agreement with these measured values at the cubic order to which we have performed the perturbative calculations, and we have given estimates of higher-order corrections to our results. More generally, we have studied the dependence of \(\gamma_{\bar{\psi}\psi,IR}\) and \(\gamma_{\bar{\chi}\chi,IR}\) as functions of \(N_{F}\) and \(N_{A_{2}}\) in the SU(4) theory and have compared different ways of calculating the lower boundary of the conformal window. ###### Acknowledgements. We thank Professors Yigal Shamir and Jong-Wan Lee for valuable discussions via email. This research of R.S. was supported in part by the U.S. NSF Grant NSF-PHY-22-15093. ## Appendix A Group Invariants In this appendix we identify our notation for various group invariants. Let \(T_{R}^{a}\) denote the generators of the Lie algebra of a group \(G\) in the representation \(R\), where \(a\) is a group index, and let \(d_{R}\) denote the dimension of \(R\). The Casimir invariants \(C_{2}(R)\) and \(T_{R}\) are defined as follows: \(T_{R}^{a}T_{R}^{a}=C_{2}(R)I\), where here \(I\) is the \(d_{R}\times d_{R}\) identity matrix, and \({\rm Tr}_{R}(T_{R}^{a}T_{R}^{b})=T(R)\delta^{ab}\). For a fermion \(f\) transforming according to a representation \(R\), we often use the equivalent compact notation \(T_{f}\equiv T(R)\) and \(C_{f}\equiv C_{2}(R)\). We also use the notation \(C_{A}\equiv C_{2}(A)\equiv C_{2}(G)\). Thus, e.g., for the \(F\) and \(A_{2}\) representations of SU(\(N_{c}\)), \(T(F)=1/2\), \(C_{2}(F)=(N_{c}^{2}-1)/(2N_{c})\), \(T(A_{2})=(N_{c}-2)/2\), and \(C_{2}(A_{2})=(N_{c}-2)(N_{c}+1)/N_{c}\). The coefficients \(\kappa_{j}^{(f)}\) with \(j\geq 3\) also involve higher-order group invariants. In general, for a given representation \(R\) of \(G\), \[d_{R}^{abcd}=\frac{1}{3!}{\rm Tr}_{R}\Big{[}T^{a}(T^{b}T^{c}T^{d}+T^{b}T^{d}T^{c}+T^{c}T^{b}T^{d}+T^{c}T^{d}T^{b}+T^{d}T^{b}T^{c}+T^{d}T^{c}T^{b})\Big{]}\. \tag{A1}\]
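As a small consistency check of the quadratic invariants quoted above (an added aside, not part of the Appendix proper), one can verify the standard identity \(C_{2}(R)\,d_{R}=T(R)\,d_{A}\) with \(d_{A}=N_{c}^{2}-1\):

```python
from fractions import Fraction as Fr

def invariants(Nc):
    """(d_R, T(R), C_2(R)) for the F, A2 and adjoint representations of SU(Nc)."""
    return {
        "F":   (Fr(Nc),               Fr(1, 2),      Fr(Nc**2 - 1, 2 * Nc)),
        "A2":  (Fr(Nc * (Nc - 1), 2), Fr(Nc - 2, 2), Fr((Nc - 2) * (Nc + 1), Nc)),
        "adj": (Fr(Nc**2 - 1),        Fr(Nc),        Fr(Nc)),
    }

# Check C_2(R) * d_R = T(R) * d_A for a few values of Nc
for Nc in (3, 4, 5):
    for name, (dR, T, C2) in invariants(Nc).items():
        assert C2 * dR == T * (Nc**2 - 1), (Nc, name)

# SU(4): T(F) = 1/2, C_2(F) = 15/8, T(A2) = 1, C_2(A2) = 5/2
print(invariants(4)["F"], invariants(4)["A2"])
```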
2306.03602
TestLab: An Intelligent Automated Software Testing Framework
The prevalence of software systems has become an integral part of modern-day living. Software usage has increased significantly, leading to its growth in both size and complexity. Consequently, software development is becoming a more time-consuming process. In an attempt to accelerate the development cycle, the testing phase is often neglected, leading to the deployment of flawed systems that can have significant implications on the users daily activities. This work presents TestLab, an intelligent automated software testing framework that attempts to gather a set of testing methods and automate them using Artificial Intelligence to allow continuous testing of software systems at multiple levels from different scopes, ranging from developers to end-users. The tool consists of three modules, each serving a distinct purpose. The first two modules aim to identify vulnerabilities from different perspectives, while the third module enhances traditional automated software testing by automatically generating test cases through source code analysis.
Tiago Dias, Arthur Batista, Eva Maia, Isabel Praça
2023-06-06T11:45:22Z
http://arxiv.org/abs/2306.03602v1
# TestLab: An Intelligent Automated Software Testing Framework ###### Abstract The prevalence of software systems has become an integral part of modern-day living. Software usage has increased significantly, leading to its growth in both size and complexity. Consequently, software development is becoming a more time-consuming process. In an attempt to accelerate the development cycle, the testing phase is often neglected, leading to the deployment of flawed systems that can have significant implications on the users daily activities. This work presents TestLab, an intelligent automated software testing framework that attempts to gather a set of testing methods and automate them using Artificial Intelligence to allow continuous testing of software systems at multiple levels from different scopes, ranging from developers to end-users. The tool consists of three modules, each serving a distinct purpose. The first two modules aim to identify vulnerabilities from different perspectives, while the third module enhances traditional automated software testing by automatically generating test cases through source code analysis. Keywords:Automated Software Testing Artificial Intelligence Testing Framework. ## 1 Introduction Software systems have become a crucial asset from a professional and personal standpoint. What was previously considered superfluous has now become much more ingrained to a point where it has become a dependency. Consequently, avid users have become compelled to trust these systems, which can be dangerous if they are faulty. The trustiness of software directly relates to its quality [1]. Higher software quality is typically associated with more reliable and secure systems, which in turn boosts users' confidence in interacting with them. The quality and effectiveness of software testing directly impacts the security of the software application. Incomplete or insufficient testing can leave vulnerabilities undetected, compromising the overall security of the system. Therefore, it is crucial to ensure the quality of software used in real-world tasks. One way to achieve this is through software testing, which plays a vital role in the software development cycle by enhancing the quality of the software. Software testing focuses on evaluating software to determine its correctness and ensure it meets the project requirements. The main objective of software testing is to identify defects within the software, that when correctly addressed may improve the software's quality, reliability, and longevity [1]. It consists of executing the Software Under Test (SUT) and monitoring it for errors, bugs, and other issues, and is usually performed by developers or testers. Despite the availability of automation and management tools, software testing is often skipped or rushed due to its time-consuming nature and the perceived increase in development costs. However, the long-term cost associated with the lack of testing and subsequent issues can also be significant. According to recent reports, the total cost of poor software quality in the United States (US) was around 2 trillion dollars [2]. Additionally, the National Institute of Standards and Technology (NIST) found that an average bug detected in early development stages takes approximately five hours to be fixed, whereas, in a post-product release, it takes around 15.3 hours. Therefore, in addition to improving the quality of the Software, testing decreases development time and costs. 
Regarding the use of automated testing tools, 44% of Information Technology companies automated 50% of testing in the year of 2020, with 24% seeing an increase in Return Of Investment [3]. Additionally, automated tools have proven to be helpful in uncovering vulnerabilities often missed by testers because of their randomness [1]. Despite the integration of automated software testing tools as part of the software testing process, during the first quarter of 2022, over 8000 vulnerabilities were discovered and documented [4]. The concerning number of software defects recently reported, underscore the need for improved software testing processes. As such, this work introduces the TestLab framework, an automated software testing tool designed to address multiple levels of software testing from different scopes ranging from developers to end-users. The framework aims to ensure the delivery of high-quality software systems throughout the software development cycle. By covering testing from different perspectives, TestLab enhances the efficiency and effectiveness of the software testing process, ultimately improving the overall quality of software systems. This paper is organized as follows. Section 2 reviews the fundamentals of software testing and related work presented in recent literature. Section 3 describes the conceptualization of the proposed framework. Finally, Section 4 summarises and concludes the work, describing new research lines to be explored in the future. ## 2 State of the Art Software testing is the verification and validation process of software, whose goal is to answer two respective questions: (i) is the right system being built?, and (ii) is the system being rightfully built? [5]. The validation question is answered by the product owner and end-users that validate the user requirements and the satisfaction of the developed solution, respectively [1]. The verification question can be answered by ensuring, to a certain extent, that the software was correctly built and tested. In this sense, software testing plays a critical role in the software development process, as it usually appears as the phase that evaluates the correctness, completeness and accuracy of the developed work, revealing its defects. Nonetheless, compliance with best engineering practices is essential for ensuring the testability of software under development. Testability, which refers to the ease of testing a system, is a critical software quality attribute. Neglecting testability, along with insufficient testing, can lead to the failure of major projects. Therefore, prioritizing testability and conducting comprehensive testing are crucial for delivering high-quality software systems [6]. Errors and defects raised by software testing should not be disregarded nor considered an adversity towards the software being developed [6]. Software testing involves the creation of test cases, which are executed using test data to generate testing reports, as depicted in Fig. 1. However, this process can be arduous and time-intensive, particularly for large and intricate systems under test. Hence, automation is necessary to efficiently ensure software quality while minimizing the resources required [7]. Automated Software Testing (AST) can be applied in many different perspectives and at various levels, with the goal of testing in different ways multiple parts of the system. AST is often divided in three different types: (i) black-box, (ii) white-box, and (iii) grey-box. 
The former is used to evaluate the SUT from a end user's perspective by providing only information regarding the interaction with the system, without revealing its inner-workings [8, 9]. White-box testing evaluates the SUT from the developers point of view by examining the internal structure and logic of a software program [9, 10, 11]. Grey-box testing combines the former types described, leveraging from their techniques to design more effective and efficient test cases [12]. Moreover, AST can be applied in multiple testing levels ranging from fine-grained testing, where the system is tested in units, to more general testing simulating the actions of a user. Testing a system in multiple levels is relevant, as the lack of it can jeopardise the quality of the software being developed [13]. The levels of testing are the following [6]: 1. **Unit Testing** - The system is broken down into units, and these are tested independently. 2. **Integration Testing** - The integration between units is tested to ensure it is defect-free. 3. **System Testing** - The system is tested in its entirety, focusing on multiple testing aspects inherent to publicly accessible systems. 4. **Acceptance Testing** - The last level of testing, which is performed by the end-users, reflects the acceptability of the system. Figure 1: Software Testing Process Model Testing a system in multiple levels ensures the correctness of the system at different granularity levels. Though, combined with multiple testing types, the produced test cases can simulate testing performed from different perspectives, which will ultimately provide different insights regarding the software's quality. ### Related Work The ever-growing interest, size and complexity of software systems have increased the testing effort. As a result, this process is many times skipped, in order to fasten the software development cycle. However, the release of faulty software can also produce unpleasant outcomes, creating an urge to invest in AST. Nowadays, most used and well-known AST tools rely on scripts to automate the execution of test scripts and the generation of reports to deliver faster and less prone to error testing. Additionally, they may even be integrated into Continuous Integration/Continuous Delivery pipelines to ensure the quality of the software under development throughout its development. Nonetheless, these tools still require human effort to write the test cases into the scripts and to function as oracle [5]. In this sense, the integration of Artificial Intelligence (AI) in the software testing process is promising, as it may be capable of fulfilling the human tasks that are found in the current state of AST. Table 1 presents a comprehensive comparison of the most common AST tools based on the findings of multiple recent reviews [14, 15, 16, 17, 18], describing the license, testing levels considered and if they leverage AI. Even though these tools are capable of automating software testing, they still require some degree of human expertise during the test case generation, as their use of AI is merely for assistance rather than automation. Moreover, most of them focus on high-level testing of the SUT, performing only at the system testing and acceptance testing levels. Regarding the integration of AI in the AST process to automate the test case generation, multiple works have been developed and reviewed [19, 20, 21]. Vitorino et al. 
[20] present the challenges associated with automatic test case generation faced under different testing-type conditions. \begin{table} \begin{tabular}{|p{85.4pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Tools and Frameworks & License & Testing Levels & AI-Aided \\ \hline Selenium & Open Source & System Testing and Acceptance Testing & Yes \\ \hline Unified Functional Testing & Licensed & System Testing & Yes \\ \hline Test Complete & Licensed & System Testing and Acceptance Testing & Yes \\ \hline Ranorex & Licensed & System Testing and Acceptance Testing & Yes \\ \hline Watir & Open Source & System Testing & No \\ \hline Load Runner & Licensed & System Testing & Yes \\ \hline \end{tabular} \end{table} Table 1: Common automated software testing tools. Their systematic review concludes that in a white-box setting most approaches resort to metaheuristics to constrain the input generation in order to achieve different test cases. In a black-box or grey-box setting, where the lack of data is the root problem, most approaches instead rely on information about the SUT provided by the testers. Vinicius et al. [19] review multiple works that achieve successful results using Machine Learning (ML) to further automate software testing, focusing on test case generation, refinement and evaluation, with some works mentioning the necessity for explainable algorithms in this context. In conclusion, the findings show that even though there has been recent development towards software testing automation resorting to more intelligent methods, no prior work has addressed the development of a comprehensive framework comprising multiple testing methods that operate at various levels and in different settings (e.g. white-box, grey-box, and black-box). Such a framework would be capable of performing fully automated continuous testing of a SUT while integrating and analyzing the results from different testing methods. ## 3 Proposed Framework TestLab is an automated and intelligent software testing framework that comprises multiple testing methods at different levels and from various perspectives, such as white-box, black-box, and grey-box. This paper presents TestLab and its automated testing methods. TestLab consists of three modules, as depicted in Fig. 2: FuzzTheREST, VulnRISKatcher, and CodeAssert. The first two aim to automate the process of analyzing the SUT for potential vulnerabilities in order to verify the software's security. FuzzTheREST is a fuzzer that focuses on testing Representational State Transfer (REST) Application Programming Interfaces (APIs) from a black-box point of view, where only data unrelated to the system's inner workings are utilized by the testing tool. VulnRISKatcher is an ML-based tool that can analyze source code written in various programming languages in order to identify vulnerabilities. Lastly, CodeAssert is the part of the framework that attempts to automate the test-script writing process by analyzing the source code in a white-box fashion. The subsequent subsections provide a more detailed description of each of these modules. Figure 2: TestLab architecture. ### FuzzTheREST As previously described in Section 2, software testing is composed of multiple methods capable of evaluating the quality of different software characteristics. Fuzz testing is an automated software testing technique that consists of generating random, malformed input and testing it against the SUT to find vulnerabilities and code defects [22]. 
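As a rough illustration of the baseline this module improves on, the sketch below shows a purely random black-box fuzzer for a single REST endpoint. The endpoint URL, parameter names, and payload generator are hypothetical placeholders; this is not the FuzzTheREST implementation, only a minimal example of the naive approach.

```python
import random
import string

import requests  # any HTTP client would work equally well


def random_value():
    """Produce a crude, possibly malformed input value."""
    kind = random.choice(["int", "str", "junk", "empty"])
    if kind == "int":
        return random.randint(-2**31, 2**31 - 1)
    if kind == "str":
        return "".join(random.choices(string.printable, k=random.randint(1, 64)))
    if kind == "junk":
        return {"unexpected": [None, True, "\x00"]}
    return ""


def naive_fuzz(endpoint, param_names, iterations=100):
    """Fire random payloads at the endpoint and record suspicious responses."""
    findings = []
    for _ in range(iterations):
        payload = {name: random_value() for name in param_names}
        try:
            resp = requests.post(endpoint, json=payload, timeout=5)
            if resp.status_code >= 500:  # 5xx hints at an unhandled error in the SUT
                findings.append((payload, resp.status_code))
        except requests.RequestException as exc:
            findings.append((payload, repr(exc)))
    return findings


if __name__ == "__main__":
    # Hypothetical API under test
    for payload, outcome in naive_fuzz("http://localhost:8080/api/users", ["name", "age"]):
        print(outcome, payload)
```

Because the payloads are unguided, this style of fuzzer can take an unbounded amount of time to stumble on a defect; FuzzTheREST replaces the blind generation step with reinforcement-learning-driven mutations steered by the API's responses, as described next.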
This technique is usually applied in a black-box or grey-box setting, leading to different input generation techniques. Typically, black-box fuzzers are quite naive, as their testing input is randomly generated. This makes vulnerability discovery an indefinitely time-consuming task, rendering this kind of fuzzer infeasible given the size and complexity of modern-day software systems [23]. On the other hand, grey-box fuzzers leverage existing information regarding the SUT's inner workings. Even though these fuzzers are usually capable of computing faulty input faster, they require more in-depth knowledge about the SUT. FuzzTheREST is a fuzzer that performs intelligent fuzz testing on REST APIs in a black-box fashion, with the goal of finding software defects. The proposed fuzzer relies on Reinforcement Learning (RL) to solve the search problem associated with discovering the right input values and combinations to uncover vulnerabilities in the software. The goal of the tool is to find vulnerabilities in web APIs in a platform-independent way, hence the need for the black-box approach. To tackle the naiveness associated with black-box fuzzers, the tool is equipped with an RL algorithm capable of steering the generated input through multiple mutation methods, using the API's feedback as guidance. Additionally, the tool can be fine-tuned to include grey-box information by integrating code execution analysis tools to obtain real-time execution metrics, such as coverage. Fig. 3 describes the execution flow of the system. Figure 3: FuzzTheREST pipeline. The tester provides all necessary information regarding the SUT, which includes the OpenAPI specification file, the scenarios file and the parameters of the RL algorithm. The OpenAPI specification file contains the information needed to establish interaction with the target API, and the scenarios file describes user-defined combinations of functionalities intended to find vulnerabilities that may result from state changes. Afterwards, the tool starts fuzzing the API under test, iterating through each user-defined scenario. At the beginning of each scenario, an RL algorithm is instantiated with the chosen parameters and an initial population composed of the necessary input values is randomly generated. The fuzzer then starts sending requests to the API to evaluate the generated input, using its feedback to mutate it accordingly. Throughout the fuzz-testing process, the algorithm records vulnerabilities, generated input values, and the respective mutations to balance exploration and exploitation. This balance ensures that the search for faulty input values does not get stuck in a suboptimal solution (_local optimum_), allowing for a more efficient exploration of the solution space. After testing all scenarios, the fuzzer exports a test report file containing multiple metrics and the vulnerabilities found. ### VulnRISKatcher Every year, the number of vulnerabilities found contaminating software systems rises, a consequence of the ever-growing digital transformation. Their early detection and identification are crucial to ensure software quality and to significantly reduce software development and maintenance costs. Currently, code review tools mainly rely on static approaches and are typically introduced at the system test level. However, these tools are often less effective and may require a comprehensive understanding of the code context, resulting in a complex and time-consuming process [24]. 
As mentioned in Section 2, the potential of ML in software testing is significant [25], as it can overcome the limitations of static code review tools. By analyzing software metrics such as code complexity and code coverage, ML algorithms can learn patterns in the code that may suggest potential vulnerabilities, thereby improving the accuracy and efficiency of identifying them. VulnRISKatcher is designed to accurately identify and classify vulnerabilities in source code and their associated risks. This tool uses ML techniques for code verification that do not require the full context of the code, making it applicable at different levels of testing. VulnRISKatcher utilizes multiple ML techniques and leverages lexical representations of code from diverse programming languages to train the models. This approach enables the solution to identify and classify various types of vulnerabilities accurately. Fig. 4 illustrates the pipeline of VulnRISKatcher. Figure 4: VulnRISKatcher proposed pipeline. The user provides either all or part of the code, along with the programming language, for accurate data processing. Following this, the tool preprocesses the input data by cleaning the code and converting it into discrete units. The preprocessing stage entails breaking down the code provided by the user into smaller units that can be analyzed. The preprocessed code is then analyzed to identify patterns and characteristics associated with vulnerabilities. Then, a classification of the potential vulnerabilities in the provided code is performed. Finally, a source-code report identifying and classifying the code vulnerabilities is presented to the user. ### CodeAssert The most commonly utilized AST tools attempt to automate the software testing process via scripting, as described in Section 2. These scripts define multiple tests which attempt to validate the software at multiple levels. However, they still require the manual effort of analyzing the code to create test cases, test data, and the expected results based on the tester's understanding of the SUT. CodeAssert builds on top of the traditional AST process model by adopting the tools that already use scripting as their AST method. However, it further expands this model by automating the generation of the test scripts, which include all information that is traditionally provided by testers except for the expected result. Naturally, this is a white-box software testing tool that leverages Natural Language Processing (NLP) to analyze source code files and extract all possible test cases. It aims to achieve 100% code coverage in an automated manner, ensuring the quality of the System Under Test (SUT) at both the unit and integration levels. Fig. 5 describes a simplified execution pipeline of CodeAssert. The user provides the testing tool access to the source code files and selects the type of testing (i.e. unit or integration) that should be conducted. Since it analyzes the source code files, the user must specify the programming language of the source code and the respective scripting tool, so that the scripts are generated in the correct format. CodeAssert then proceeds to use NLP to analyze the source code and identify all conditional statements. The test cases are then written based on the identified conditions, and the input values are generated accordingly; a minimal sketch of this extraction step follows below. Afterwards, the generated testing scripts are reviewed by the tester, who proceeds to populate the expected value for each identified test case. 
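As a rough sketch of the conditional-statement extraction just described, the snippet below walks a Python file's syntax tree and lists every if-condition per function, which is the raw material from which test cases would be derived. The use of Python's built-in `ast` module (Python 3.9+ for `ast.unparse`) and the file name are illustrative assumptions; the actual module may rely on different parsing or NLP techniques and target other languages.

```python
import ast


def extract_conditions(source_path):
    """Return (function name, line number, condition text) for every if-statement."""
    with open(source_path, "r", encoding="utf-8") as handle:
        tree = ast.parse(handle.read())

    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for child in ast.walk(node):
                if isinstance(child, ast.If):
                    findings.append((node.name, child.lineno, ast.unparse(child.test)))
    return findings


if __name__ == "__main__":
    # Hypothetical source file under test
    for func, line, condition in extract_conditions("calculator.py"):
        # Each condition implies at least two test cases: one per branch outcome
        print(f"{func}:{line} -> derive tests for '{condition}'")
```

Each extracted condition maps naturally to a pair of test cases (condition true and condition false), which is the information a tester would otherwise have to read out of the code by hand.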
Finally, the test cases are computed with the chosen scripting AST tool and a testing report is presented to the tester. Figure 5: CodeAssert pipeline. ## 4 Conclusions Considering the impact that software has in the people's daily lives, it is essential to ensure its quality in order to avoid disastrous events caused by faulty or vulnerable software. This work presents TestLab, an intelligent AST framework. Its purpose is to improve software testing in multiple testing levels from different testing perspectives relying on AI-driven methods to achieve automation. TestLab was designed to integrate with the software development cycle and enable continuous, comprehensive testing of the SUT, with the objective of ensuring high-quality output throughout the entire process. Automated software testing has demonstrated its efficiency in saving time, resources, and costs by automating the most monotonous tasks, enhancing software testing, and elevating the overall quality of software. The authors strongly believe that the AI-driven proposed framework has the potential to further improve testing efficiency and effectiveness, reducing the risk of human error, and enabling a more comprehensive testing coverage. The framework's future work includes the implementation and experimentation of all its modules to assess their performance and effectiveness, as well as to explore additional testing methods that can benefit from AI automation. #### Acknowledgements. The present work was partially supported by the Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), within project "Cybers SeC IP" (NORTE-01-0145-FEDER-000044). This work has also received funding from UIDB/00760/2020.
2302.00824
SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection
The rapid proliferation of non-cooperative spacecraft and space debris in orbit has precipitated a surging demand for on-orbit servicing and space debris removal at a scale that only autonomous missions can address, but the prerequisite autonomous navigation and flightpath planning to safely capture an unknown, non-cooperative, tumbling space object is an open problem. This requires algorithms for real-time, automated spacecraft feature recognition to pinpoint the locations of collision hazards (e.g. solar panels or antennas) and safe docking features (e.g. satellite bodies or thrusters) so safe, effective flightpaths can be planned. Prior work in this area reveals the performance of computer vision models are highly dependent on the training dataset and its coverage of scenarios visually similar to the real scenarios that occur in deployment. Hence, the algorithm may have degraded performance under certain lighting conditions even when the rendezvous maneuver conditions of the chaser to the target spacecraft are the same. This work delves into how humans perform these tasks through a survey of how aerospace engineering students experienced with spacecraft shapes and components recognize features of the three spacecraft: Landsat, Envisat, Anik, and the orbiter Mir. The survey reveals that the most common patterns in the human detection process were to consider the shape and texture of the features: antennas, solar panels, thrusters, and satellite bodies. This work introduces a novel algorithm SpaceYOLO, which fuses a state-of-the-art object detector YOLOv5 with a separate neural network based on these human-inspired decision processes exploiting shape and texture. Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting and chaser maneuver conditions at the ORION Laboratory at Florida Tech.
Trupti Mahendrakar, Ryan T. White, Markus Wilde, Madhur Tiwari
2023-02-02T02:11:39Z
http://arxiv.org/abs/2302.00824v1
# SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection ###### Abstract The rapid proliferation of non-cooperative spacecraft and space debris in orbit has precipitated a surging demand for on-orbit servicing and space debris removal at a scale that only autonomous missions can address, but the prerequisite autonomous navigation and flightpath planning to safely capture an unknown, non-cooperative, tumbling space object is an open problem. This requires algorithms for real-time, automated spacecraft feature recognition to pinpoint the locations of collision hazards (e.g. solar panels or antennas) and safe docking features (e.g. satellite bodies or thrusters) so safe, effective flightpaths can be planned. Prior work in this area reveals the performance of computer vision models are highly dependent on the training dataset and its coverage of scenarios visually similar to the real scenarios that occur in deployment. Hence, the algorithm may have degraded performance under certain lighting conditions even when the rendezvous maneuver conditions of the chaser to the target spacecraft are the same. This work delves into how humans perform these tasks through a survey of how aerospace engineering students experienced with spacecraft shapes and components recognize features of the three spacecraft: Landsat, Envisat, Anik, and the orbiter Mir. The survey reveals that the most common patterns in the human detection process were to consider the shape and texture of the features--antenna, solar panels, thrusters, and satellite bodies. This work introduces a novel algorithm SpaceYOLO, which fuses a state-of-the-art object detection algorithm YOLOv5 with a separate neural network based on these human-inspired decision processes exploiting shape and texture. Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting and chaser maneuver conditions at the ORION Laboratory at Florida Tech. ## 1 Introduction With the sharp rise in near-earth space debris, autonomous On-Orbit Servicing (OOS) and Active Debris Removal (ADR) operations around non-cooperative resident space object (RSO) has seen renewed interest. Currently, the US Department of Defense's global Space Surveillance Network (SSN) tracks more than 27,000 space debris that are greater than 10cm and around a third of them are old, decommissioned spacecraft. An effective approach to prevent further growth of debris is by deorbiting or servicing decommissioned spacecraft. This prevents the risk of smaller debris colliding with them and forming more debris. Some related OOS and ADR missions that demonstrated rendezvous, capture or inspection around a resident space object include ETS-VII [1], NASA, DARPA and AFRLs DART [2], XSS-10 [3], XSS-11 [4], ANGELS [5], MiTEx, Orbital Express [6], Northrop Grumman's MEV-1/-2 [7, 8], and Astroscale's ELSA-d [9]. These missions involved a single chaser spacecraft approaching a cooperative RSO with known geometry, capture interfaces, stable attitude, or ability to estimate stable attitude with RSOs surface markings. Though the missions demonstrated that it is feasible to perform autonomous operations around a cooperative spacecraft, autonomous operations around a non-cooperative spacecraft, representative of space debris is still an unresolved challenge. 
Our research at the ORION facility focuses on conducting autonomous ADR and OOS operations with a swarm of chasers around a non-cooperative spacecraft whose structure, functionality and attitude are unknown like the non-cooperative RSOs that are currently in orbit. As described in [10, 11, 12], we use machine vision with YOLOv5 models on raspberry pi 4 to classify and locate components of non-cooperative spacecraft such as body, thrusters, solar panels, and antennas. Based on the position and attitude data of these components, an artificial potential field guidance algorithm evaluates a safe approach trajectory for our swarm of chaser spacecraft (DJI Tello Talent Drones) to autonomously navigate to the non-cooperative spacecraft. Our previous machine vision algorithms - YOLOv5 and Faster RCNN were trained on over 1200 images of spacecraft from web searches and synthetic data. These images were augmented based on certain criteria to artificially increase the training data. Despite various training efforts, the machine vision algorithms still struggled to accurately detect components. To address this issue, we propose a new method called SpaceYOLO that uses contextual understanding of spacecraft to improve detection. This paper is written to serve as proof of concept of SpaceYOLO and the method requires further research and development prior to implementation. The paper is structured as follows: sections 2 and 3 consist of literature review on YOLO and context-based learning along with the survey conducted to develop the initial version of SpaceYOLO. Sections 4 and 5 discuss the development of SpaceYOLO and the training dataset. Finally, the paper concludes with a discussion of the results of the proof of concept and future work. ## 2 Computer Vision in Space Convolutional neural networks (CNNs) accelerated by graphics processing units (GPUs) [15] have revolutionized the performance of computer vision over the past 10 years. Initially, computer vision models required conventional computing hardware to supply sufficient computational power and huge visual datasets to train them. More recent innovations in small computing components, particularly edge GPUs (e.g. Intel Neural Compute Stick or NVIDIA Jetson) and FPGAs (e.g. Xilinx Kria KV260), have tremendously increased computational power available on low energy budgets making it feasible to be used on spacecraft [16, 17]. Additionally, there have been innovations in CNN architectures optimized for deployment on edge hardware [18, 19]. In the computer vision literature, the problem this paper is addressing falls into the area of object detection, which combines locating objects by drawing a bounding box tightly surrounding each object and classifying each as one of a pre-determined set of object types. From an input image, an object detector will ideally output three things for each object in the image: (1) a bounding box as the pixel position coordinates of the center of the box with width and height, (2) objectness score measuring how confident the model is that the box contains an object, and (3) a predicted object class. There are three major types of object detection algorithms in the literature: vision transformer-based detectors, multi-stage detectors, and single-stage detectors. The computational cost of transformers rules them out for on-board in-space use. 
Multi-stage detectors attack the object detection problem in multiple stages, most frequently using one stage of the model to propose regions where objects _might_ exist and another stage to classify those regions. Single-stage detectors do it all in one shot: they map image pixels directly to bounding box estimates, confidences, and classifications with a single neural network. A multi-stage detector (Faster R-CNN) was previously shown to deliver higher accuracy than a single-stage detector (YOLOv5) at satellite component detection [14]. However, the framerate of Faster R-CNN is too low (approximately 0.2 FPS) on the proposed hardware, which would be unsafe for guiding a chaser to a tumbling satellite. Further, YOLOv5 produces good accuracy at a much higher framerate, reaching 2-5 FPS. Although the quality of the detections is slightly diminished, they occur 10-25 times faster, meaning an occasional missed detection is not as risky since it will be corrected very quickly in later frames. A major disadvantage of typical object detectors is that they are not very interpretable, verging on black boxes. This article takes a different approach to object detection that exploits both the efficiency of YOLOv5 and the thought processes humans employ to perform the same task of identifying satellite components and drawing bounding boxes around them. Through surveys of the task performed by humans, we find common thought processes and build an object detector that uses these ideas explicitly to make classification decisions. Figure 1: Autonomous Docking with Swarm Satellites ## 3 Context Based Detection and Survey of Human Detection As previously mentioned, the idea behind using context-based detection techniques in SpaceYOLO is to minimize the detection issues found in our previous research in [10, 14, 12, 11]. A common issue we noticed was that the algorithm would misclassify under varying lighting conditions. Such misclassifications would be avoidable if the algorithm were more aware of the fundamentals of what these target components look like and their fundamental use on the spacecraft, much like how humans think. Hence, the goal of SpaceYOLO is to incorporate human-interpretable information based on visual context, distantly related to the work conducted in [27]. In [27], the authors simplify an object recognition algorithm to recognize and categorize the place it is in and the objects around it. They identify the overall type of scene and then identify objects within the scene. Some other works that inspire the SpaceYOLO model presented in this paper are from [28, 29, 30], where the authors correlate spatial information, scaling, and textural information with the surroundings to classify the object. ### Survey To identify the criteria for the context-based technique, a survey was conducted among 24 aerospace engineering students familiar with spacecraft components. The survey was used to better understand how humans use geometric shapes, colors and context to identify and classify features. Each student was presented with the same four distinct spacecraft images - Landsat, Envisat, Anik, and Mir. They were asked to identify spacecraft components such as solar panels, body, antennas (horn, parabolic, phased array), thrusters and radiators. In addition to identifying those features, they were also required to provide their reasoning for each feature. The survey images with overall results are marked in Figure 2 through Figure 5 below. 
The top three reasoning for features - solar panels, parabolic antennas, thrusters, and body are tabulated in Table 1. These reasonings are used to form the basis of SpaceYOLO. Figure 4: Anik Survey Results Figure 5: Orbiter Mir Survey Results Figure 3: Envisat Survey Results Figure 2: Landsat Survey Results From Table 1, it is evident that the first step to identify the components is to detect shapes. The second step in the process is texture of the components, and the third is the spatial position of the features. Based on this interpretation, the SpaceYOLO version in this paper only employs steps 1 and 2 to study the concept's feasibility. ## 4 SpaceYOLO Leveraging the data accrued from the survey, the first portion of the current version of SpaceYOLO consists of shape detector, and the second portion consists of variance classifier. The shape detector looks for circular and rectangular shapes as listed in Table 1. The variance classifier classifies these detected shapes from shape detector based on texture. The texture of the shapes is evaluated based on the pixel variance in the grayscale images. ### Shape Detector The shape detector in this algorithm is a YOLOv5s model trained to localize and classify shapes, specifically rectangles and circles. For the training dataset, a total of 30 images with circles, rectangles and triangles were built using MS Paint and labelled on Roboflow [31]. These images were augmented to artificially generate more images producing the final count of the training images to 53. The augmentations include random rotation of images between \(\pm 45^{\circ}\), random shear between \(\pm 27^{\circ}\) horizontally and vertically, random brightness adjustment between \(\pm 63\%\) and, and salt and pepper noise for 5% of pixels. Below is a snippet of a few training images. These augmentations were picked based on the issues the camera might experience during tests and how relevant these shapes would be on a spacecraft. Additionally, the training images were randomly placed with the goal of ensuring that the shapes covered most of the blank space within a training image frame. By doing this, the YOLOv5 algorithm will not be limited to a specific region within a test image to identify a shape. Below are the labeled heatmaps depicting that both classes properly covered most of the white space with these shapes. Since the idea behind the paper was not to define a benchmark for SpaceYOLO but introduce the concept of using human interpretability to identify the components, we were not concerned about finding the best weights for the model. Due to the limited dataset, training and comparing on the validation set was not an accurate representation of the detector. Therefore, we did not evaluate the weights on the validation dataset but rather tested it with hardware-in-the-loop experiments. Hence, for proof of concept, we started with something minimal that could detect simple shapes. 
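For reference, a minimal sketch of how a trained YOLOv5s shape detector like the one above can be loaded and run on a single frame is shown below. The weights path and image file are placeholder names, and the snippet mirrors the generic YOLOv5 PyTorch Hub usage rather than the exact lab code.

```python
import torch

# Load custom YOLOv5s weights trained on the circle/rectangle shape dataset
# ("shape_detector.pt" is a placeholder path)
model = torch.hub.load("ultralytics/yolov5", "custom", path="shape_detector.pt")
model.conf = 0.25  # confidence threshold for reported detections

# Run inference on one frame of the spacecraft mock-up (placeholder file name)
results = model("mockup_frame.jpg")

# Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    label = model.names[int(cls)]  # e.g. "circle" or "rectangle"
    print(f"{label}: conf={conf:.2f}, box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
    # The cropped boxes are what get handed on to the variance classifier
```

The cropped detections from this stage are the inputs to the variance classifier described in the next subsection.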
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Solar Panel** & **Antennas** & **Body** & **Thrusters** \\ \hline Rectangular & Circular, & Cuboid or & Conical \\ & concave, Disk & Cylindrical & \\ \hline Reflective & Metallic & Covered in & Metallic, shiny \\ & & non-descriptive & but not \\ & & features (high & reflective \\ & & visual density) & \\ \hline Thin Profile & Protruding & Largest Single & Directly on \\ & from Body & Object & body \\ from the body & & & \\ \hline \end{tabular} \end{table} Table 1: Common Reasoning Occurrences Figure 8: Heatmap of All Classes Figure 6: SpaceYOLO Model Figure 7: Shape Detector Training Dataset Figure 9: Circle Heatmap Figure 10: Rectangle Heatmap Additionally, we used neural network methods instead of edge detection because neural networks are capable of functioning with distortion, while edge detection cannot. For example, a neural network trained on rotated and sheared images of a rectangle would still be able to predict a rotated rectangular solar panel, which looks almost trapezoidal as a rectangle. However, edge detection would fail to identify this almost trapezoidal shape if it is tasked to look for rectangles. ### Variance Classifier The variance classifier is the second part of SpaceYOLO. The variance classifier was built based on the ORION Facility spacecraft mock-up model. Several videos of the spacecraft under extreme lighting conditions were taken while also changing the orientation These videos were labelled at 1FPS, and each of the components (solar panel, body, thruster, and antenna images) were cropped out and converted to grayscale before extracting the pixel variance for each of the images. The histograms for each classes' variances for our lab data are shown in Figure 11. The variance value quantifies how much the pixels vary in the grayscale image. Prior to using the spacecraft mock-up images from the lab, we extracted variances from synthetic image data. It consisted of images of spacecraft from the internet and NASA's stereolithography files. More information on this dataset is described in [12]. When using this varied set of images, it was noticed that the variance values were random leading to massive errors compared to the ones from hardware-in-the-loop images form the lab. This is a result of some images in the dataset coming from STK or the Kerbal Space Program, while others were from images of spacecraft in orbit or in a facility with non-realistic lighting conditions. Variance is directly related to lighting conditions and texture. Hence, for consistency and to obtain realistic variance values we chose lab images to build the variance classifier. For visualization, histograms of variances for each component for synthetic images are shown in Figure 12. There are two separate variance classifiers, one for each shape - rectangular and circular. Cropped images of the shapes from the shape detector pass into the respective variance classifier. The variance of the cropped image is compared to the histogram in Figure 11. The variance classifier then outputs how probable the detected shape is an antenna, solar panel, thruster, or body and then gets multiplied by an optimized set of weighting factors depending on if it was detected as a circle or rectangle. The probability is calculated per Equation 1. \(f_{c_{n}}\) is the number of images corresponding to class n in that variance range. The weights used in this paper for the circular branch are [0.45, 0.045, 0.45, 0.045] and rectangular branch are [0.2 0.2 0.2 0.4]. 
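A compact sketch of this weighted, variance-based classification step is given below; the normalization it performs is the one formalized in Equation 1 below. The branch weights are the values quoted above, while the mapping of those weights to classes, the histogram counts, and the bin edges are placeholder assumptions standing in for the statistics extracted from the lab footage.

```python
import numpy as np

CLASSES = ["antenna", "body", "thruster", "solar_panel"]
# Branch weights quoted in the text (their ordering over classes is an assumption)
WEIGHTS = {"circle": [0.45, 0.045, 0.45, 0.045],
           "rectangle": [0.2, 0.2, 0.2, 0.4]}


def classify_by_variance(gray_crop, histograms, shape):
    """Classify a grayscale crop from the shape detector by its pixel variance.

    `histograms` maps each class to (bin_edges, counts) built from lab images;
    the counts play the role of f_c in Equation 1.
    """
    variance = float(np.var(gray_crop))
    freqs = []
    for cls in CLASSES:
        edges, counts = histograms[cls]
        bin_idx = int(np.clip(np.digitize(variance, edges) - 1, 0, len(counts) - 1))
        freqs.append(counts[bin_idx])
    freqs = np.asarray(freqs, dtype=float)
    probs = freqs / freqs.sum() if freqs.sum() > 0 else np.full(len(CLASSES), 0.25)
    weighted = probs * np.asarray(WEIGHTS[shape])
    return CLASSES[int(np.argmax(weighted))], probs


# Toy example with made-up histograms (10 variance bins per class)
rng = np.random.default_rng(0)
toy_hists = {c: (np.linspace(0, 5000, 11), rng.integers(1, 20, 10)) for c in CLASSES}
crop = rng.integers(0, 255, (64, 64)).astype(np.uint8)
print(classify_by_variance(crop, toy_hists, "rectangle"))
```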
They were optimized by exhaustive search. \[p(c_{n})=\frac{f_{c_{n}}}{\sum_{i=1}^{4}f_{c_{i}}},n=1,2,3,4 \tag{1}\] ## 5 Test Dataset As mentioned earlier, since the synthetic data was either too smooth or unrealistic, only hardware-in-the-loop experiment data was used for this paper. All the experimental data for the tests were collected from the ORION Facility at Florida Tech. ### ORION Facility The ORION facility [32] is equipped with a maneuver kinematics simulator, as seen in Figure 13. The simulator consists of a spacecraft mock-up. It is equipped with solar panels made from acrylic sheets, painted blue, the body is rectangular and wrapped with foil that accurately represents the Multilayer Insulation (MLI), the antenna is a smooth parabolic like dish, and finally, the thruster is conical shaped object, painted with low luster grey paint. Images of all the components are shown in Figure 14. Figure 11: Histogram of Variance – Lab Images Figure 12: Histogram of Variance - Synthetic Data The platform on which the mock-up is placed allows the spacecraft to rotate about three axes which creates yaw, pitch and roll at a maximum speed of 60\({}^{\circ}\)/\(s\) and maximum acceleration of 60\({}^{\circ}\)/\(s^{2}\). The lighting is simulated with Hilio D12 LED Panel. The panel is rated for 350 W and for a color temperature of 5,600 K (daylight balanced). The intensity of the LED is equivalent to a 2,000 W incandescent lamp. The intensity can be continuously adjusted from 100% to 0% and vice versa. The beam angle can be varied between 10\({}^{\circ}\) and 60\({}^{\circ}\) using lens inserts, allowing us to replicate weaker and diffused Earth albedo effects and solar illumination. Finally, the walls, ceiling and the floor of the facility are painted with low reflectivity black, and all the windows are covered with black-out blinds and black drapes to enable the simulation of realistic lighting conditions. #### Datasets A total of three datasets were studied to evaluate the performance of SpaceYOLO to compare it against YOLOv5 models developed in the past work [14] and against human predictions. The testing datasets shown in Figures 15, 16 and 17. SpaceYOLO's performance is collectively dependent on both the shape detector's and the variance classifier's performance. To accurately depict the performance of the detector, we used precision and recall as our two-base metrics. Precision is described as the number of correct positive object predictions, while recall is the percentage of correctly classified ground truth objects at the class level. Both precision and recall are calculated using the following equations. \[Precision\left(P\right)=\frac{TP}{TP+FP} \tag{2}\] \[Recall\left(R\right)=\frac{TP}{TP+FN} \tag{3}\] Following are the definitions of TP, FP and FN: * True positives (TP): The correct classification of a ground-truth bounding box. * False positives (FP): The incorrect classification of an object that is devoid or misplaced. * False negative (FN): The missed classification of a ground-truth bounding box. In order to have a well performing detector, it needs to have an increasing recall with high precision as the threshold of the confidence levels decreases. Average Precision (AP) depicts this relationship between precision and recall by utilizing a modified version of the area under a smoothed precision-recall curve for each class. Mean Average Precision(mAP) calculates the average AP value across all classes. 
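For completeness, the precision and recall computations from Equations 2 and 3 can be sketched as follows; the per-class counts in the example are invented purely for illustration.

```python
def precision(tp, fp):
    """Equation 2: fraction of positive predictions that were correct."""
    return tp / (tp + fp) if (tp + fp) > 0 else float("nan")


def recall(tp, fn):
    """Equation 3: fraction of ground-truth objects that were detected."""
    return tp / (tp + fn) if (tp + fn) > 0 else float("nan")


# Hypothetical per-class counts: (true positives, false positives, false negatives)
counts = {
    "antenna":     (8, 11, 2),
    "body":        (1, 2, 1),
    "thruster":    (2, 1, 14),
    "solar_panel": (9, 2, 4),
}

for name, (tp, fp, fn) in counts.items():
    print(f"{name}: P = {precision(tp, fp):.2f}, R = {recall(tp, fn):.2f}")
```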
The shape detector's performance is evaluated based on mean average precision, mAP over an IoU (intersection of union) threshold of 0.5 and 0.5:0.95 as described in [12]. The shape detector's performance is evaluated based on mean average precision, mAP over an IoU (intersection of union) threshold of 0.5 and 0.5:0.95 as described in [12]. However, if the algorithm detects any component as a certain shape, the variance classifier further evaluates which class the shape belongs to. Hence, though we compare the mAP values, it does not control the outcome of SpaceYOLO. The algorithm was fed in RGB images of hardware-in-the-loop datasets taken and labelled at 1FPS. For generalizing purposes, the antenna and thrusters were classified as circles while body and solar panels as rectangles. This forced any smooth edges to be detected as circles and sharp edges as rectangles. The following Tables contain metrics on shape detector's performance for each dataset. Dataset 1 performed the best out of the three classes due to the similarity of the training dataset with the spacecraft mockup. Additionally, the spacecraft was well illuminated, and the camera was closer to the mock-up compared to Dataset 2. Dataset 2 performed the worst of all the three datasets since the images were taken from 5 meters which is the farthest out of the three cases. Additionally, to complicate the matter, the light was set to 20% which made it difficult to even label the video. Notice that for Dataset 3 the solar panels are octagonal. Though the training dataset for the shape detector did not contain any octagons, the algorithm was still able to detect solar panels as rectangles while the sharp edges were prominent and detect as circles while the spacecraft was rotating, and the sharp edges were not prominent. The outputs from the shape detector, cropped images of circles and rectangles shown in **Error! Reference source not found.** through Figure 19, were converted to grayscale and passed into the variance classifier. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Class** & **Images** & **Instances** & **P** & **R** & **[email protected]** & **mAP@ 5.95** \\ \hline All & 81 & 308 & 0.278 & 0.258 & 0.192 & 0.0414 \\ \hline Circle & 81 & 94 & 0.191 & 0.128 & 0.0493 & 0.0103 \\ \hline Rectangle & 81 & 214 & 0.365 & 0.409 & 0.335 & 0.0724 \\ \hline \end{tabular} \end{table} Table 3: Test Dataset 2 Metrics \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Class** & **Images** & **Instances** & **P** & **R** & **[email protected]** & **[email protected]:95** \\ \hline All & 30 & 100 & 0.481 & 0.37 & 0.31 & 0.0942 \\ \hline Circle & 30 & 34 & 0.757 & 0.412 & 0.226 & 0.0548 \\ \hline Rectangle & 30 & 66 & 0.684 & 0.327 & 0.394 & 0.133 \\ \hline \end{tabular} \end{table} Table 2: Test Dataset 1 Metrics Figure 18: Antenna detected as Circle \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Class** & **Images** & **Instances** & **P** & **R** & **[email protected]** & **mAP@ 5.95** \\ \hline All & 81 & 308 & 0.278 & 0.258 & 0.192 & 0.0414 \\ \hline Circle & 81 & 94 & 0.191 & 0.128 & 0.0493 & 0.0103 \\ \hline Rectangle & 81 & 214 & 0.365 & 0.409 & 0.335 & 0.0724 \\ \hline \end{tabular} \end{table} Table 4: Test Dataset 3 Metrics #### SpaceYOLO Performance: Variance Classifier Currently, the performance of the variance classifier is evaluated manually. 
The gray scaled images from the shape detector were passed into the variance classifier where each image was either classified as body, solar panel, thruster, or antenna. Precision and recall for each of the two variance classifiers (rectangle and circle) for each dataset are tabulated as a confusion matrix and are discussed in this sub-section. The rows of the confusion matrix correspond to the predicted values, the columns correspond to the ground truth or actual value. The number in the brackets under the component names in the top row corresponds to the actual number of component images either classified as circles or rectangles. The numbers in the diagonal of the matrix correspond to true positives, i.e., the number of images correctly classified as either antennas, body, thrusters, or solar panels. Other numbers along a specific row represent false positive (FP) i.e., for example 11 thrusters were falsely classified as antennas. Off-diagonal numbers along a column represent false negative (FN) values such as missed classifications. The right most column contains the precision for each class and the bottommost row contains the recall percentage for each class. Per the APF algorithm as implemented in [13], thrusters and body are treated as attractive nodes i.e., the chaser spacecraft navigates to these points to dock with the RSO while avoiding solar panels and antennas. This means that higher recall values are preferred for thrusters and body while higher precision values are preferred for antennas and solar panels. For dataset 1, the overall precision of the rectangular branch was 83.33% and the circular branch was 41.67%. It is seen that solar panels were classified with the highest precision and recall while thrusters performed with the lowest recall value. The accuracy of the variance classifier for dataset 3 for the rectangular branch was 57.143% and the circular branch was 42.87%. Though the solar panels were octagonal, the shape detector classified them as both circular and rectangular. Despite this bias, the variance classifier forced the final classification to be solar panels indicating that the variance classifier works. Additionally, for a minimum of 7 instances, the variance classifier accurately forced the solar panel detections to 100% regardless of the shape detector results. #### YOLOv5 Experiments The performance of the YOLOv5 with weights from [12] are tested against the three datasets. The results are tabulated below. These results are explained in [14]. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{**Actual Value**} \\ \hline & & \begin{tabular}{c} **Antenna** \\ **(0)** \\ \end{tabular} & \begin{tabular}{c} **Body** \\ **(26)** \\ \end{tabular} & \begin{tabular}{c} **Thruster** \\ **(0)** \\ \end{tabular} & \begin{tabular}{c} **Solar** \\ **(16)** \\ \end{tabular} & **Precision** \\ \hline \multirow{3}{*}{**Category**} & **Antenna** & 0 & 0 & 0 & 0 & N/A \\ \cline{2-7} & **Body** & 0 & 13 & 0 & 7 & 65\% \\ \cline{1-1} \cline{2-7} & **Thruster** & 0 & 0 & 0 & 0 & N/A \\ \cline{1-1} \cline{2-7} & **Solar Panel** & 0 & 13 & 0 & 9 & 59\% \\ \hline \multirow{3}{*}{**Solar**} & **Recall** & N/A & 50\% & N/A & 56.25\% & \\ \cline{1-1} \cline{2-7} & **Solar Panel** & 2 & 0 & 0 & 9 & 81.82\% \\ \cline{1-1} \cline{2-7} & **Recall** & 90.48\% & N/A & 0\% & 69.23\% & \\ \hline \end{tabular} \end{table} Table 6: Confusion Matrix Test Dataset 2 \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{**Actual Value**} \\ \hline & & \begin{tabular}{c} **Antenna** \\ **(21)** \\ \end{tabular} & \begin{tabular}{c} **Body** \\ **(2)** \\ \end{tabular} & \begin{tabular}{c} **Thruster** \\ **(16)** \\ \end{tabular} & \begin{tabular}{c} **Solar** \\ **(13)** \\ \end{tabular} & **Precision** \\ \hline \multirow{3}{*}{**Category**} & **Antenna** & 19 & 0 & 15 & 4 & 50\% \\ \cline{1-1} \cline{2-7} & **Body** & 0 & 0 & 1 & 0 & N/A \\ \cline{1-1} \cline{2-7} & **Thruster** & 0 & 0 & 0 & 0 & N/A \\ \cline{1-1} \cline{2-7} & **Solar Panel** & 2 & 0 & 0 & 9 & 81.82\% \\ \hline \multirow{3}{*}{**Solar**} & **Recall** & 90.48\% & N/A & 0\% & 69.23\% & \\ \cline{1-1} \cline{2-7} & **Thruster** & 1 & 0 & 2 & 0 & 66.67\% \\ \hline \end{tabular} \end{table} Table 7: Confusion Matrix for Test Dataset 3 \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{**Actual Value**} \\ \hline & & \begin{tabular}{c} **Antenna** \\ **(10)** \\ \end{tabular} & \begin{tabular}{c} **Body** \\ **(2)** \\ \end{tabular} & \begin{tabular}{c} **Thruster** \\ **(16)** \\ \end{tabular} & \begin{tabular}{c} **Solar** \\ **(10)** \\ \end{tabular} & **Precision** \\ \hline \multirow{3}{*}{**Category**} & **Antenna** & 8 & 0 & 11 & 0 & 42.11\% \\ \cline{1-1} \cline{2-7} & **Body** & 0 & 1 & 1 & 33.33\% \\ \cline{1-1} \cline{2-7} & **Thruster** & 1 & 0 & 2 & 0 & 66.67\% \\ \hline \end{tabular} \end{table} Table 5: Confusion Matrix for Test Dataset 1 Figure 20: Solar Panel detected as Rectangle detected as Rectangle ## 7. SpaceYOLO, YOLOv5 and Human Detection Comparison Because this paper compares entirely different methods - SpaceYOLO, YOLOv5 and human detections, the testing metrics used to evaluate the performance is essentially the number of images localized and classified into different components out of all the images presented to the algorithm. Results for each method are presented in Table 11. \(\mu\) corresponds to SpaceYOLO, \(\beta\) corresponds to YOLOv5 and Labelled Occurrence corresponds to the number of human labels, which forms the baseline. Based on results above, antennas in dataset 1 performed better than YOLOv5 however overall, YOLOv5 shows better performance. When compared to precision and recall values from the _SpaceYOLO: Variance Classifier_ confusion matrices and YOLOv5 test metrics, SpaceYOLO shows promising results especially for identifying shapes of the spacecraft without ever being trained on spacecraft dataset and using only 53 training images instead of 1200+. 
If the shape detector was trained on more variety of shapes, the algorithm's overall accuracy would improve significantly since the results above show that the variance classifier has a huge impact on the final detections. It is also seen that YOLOv5 surpassed human detections for recognizing solar panels for dataset 2. The algorithm was able to identify the slightest blue shade of solar panels despite the zoomed-out images of dataset 2. This shows the algorithm's ability to recognize colors and small shapes. The same could be implemented on SpaceYOLO by augmenting the image dataset to improve its ability to detect shapes. Additionally, since SpaceYOLO only detects shapes and evaluates variance to classify the detection (not a neural network), it significantly reduces the computational complexity otherwise required by the previously employed YOLOv5 methods. The most current YOLOv5 model runs at 2FPS on Raspberry Pi [11] and, theoretically since SpaceYOLO requires less computational power, the algorithm should run at a higher frame rate, providing more reliable localization of spacecraft components. ## 8. Conclusions The current work on SpaceYOLO demonstrates the benefits of bringing human decision-making processes into an AI framework. The benefits of this model include human interpretability which leads to less dependence on a black box to make safety-critical decisions. A secondary benefit of this type of implementation is generating and labelling shapes for SpaceYOLO can be easily automated. This tremendously reduces time and resource costs associated with data development. Currently the algorithm demonstrates that this method is feasible but not quite up to par with the past YOLOv5 implementation however, SpaceYOLO has shown promising results that suggest future work in the direction of human interpretable computer vision implementation for OOS and ADR operations. One of the major issues noticed during testing SpaceYOLO is that the images of antennas and thrusters contain part of the body. This leads to confusion in the variance classifier. Future work will evaluate the effects of disregarding the body and only detecting antennas, solar panels, and thrusters. In addition, the performance evaluation of the algorithm will be automated so testing with larger datasets can be feasible. Furthermore, the decision tree model for future work will expand more than variance and account for spatial position and orientation of the features. ## Acknowledgements This project was supported by the NVIDIA Applied Research Accelerator Program. Additional conference funding support was provided by N000142012669 Inspiring Students to Pursue U.S. Navy STEM Careers through Experiential Learning grant. The authors wish to thank Dr. Brooke Wheeler for coordinating conference support. The authors also thank Mackenzie Meni whose help in editing the paper was critical to its submission. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Antenna**} & \multicolumn{2}{c|}{**Body**} & \multicolumn{2}{c|}{**Thruster**} & \multicolumn{2}{c|}{**Solar**} \\ \hline & \(\mu\) & \(\beta\) & \(\mu\) & \(\beta\) & \(\mu\) & \(\beta\) & \(\mu\) & \(\beta\) \\ \hline **Set 1** & 8 & 7 & 1 & 14 & 2 & 3 & 9 & 18 \\ \hline _Labelled Occurrence_ & \multicolumn{2}{c|}{15} & \multicolumn{2}{c|}{30} & \multicolumn{2}{c|}{17} & \multicolumn{2}{c|}{36} \\ \hline **Set 2** & 0 & 0 & 13 & 53 & 0 & 5 & 9 & 176 \\ \hline _Labelled Occurrence_ & \multicolumn{2}{c|}{44} & \multicolumn{2}{c|}{81} & \multicolumn{2}{c|}{50} & \multicolumn{2}{c|}{133} \\ \hline **Set 3** & 19 & 24 & 0 & 8 & 0 & 12 & 9 & 57 \\ \hline _Labelled Occurrence_ & \multicolumn{2}{c|}{29} & \multicolumn{2}{c|}{42} & \multicolumn{2}{c|}{23} & \multicolumn{2}{c|}{65} \\ \hline _a: SpaceYOLO & \multicolumn{2}{c|}{_b: YOLOv5_} & \multicolumn{2}{c|}{_c: YOLOv5_} & \multicolumn{2}{c|}{_c: YOLOv5_} & \multicolumn{2}{c|}{_c: YOLOv5_} \\ \hline \end{tabular} \end{table} Table 11: Results Comparison \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Class** & **Images** & **Intaunces** & **P** & **R** & **[email protected]** & **[email protected]** \\ \hline **All** & 81 & 308 & 0.93 & 0.31 & 0.38 & 0.18 \\ \hline **Antenna** & 81 & 44 & N.A & 0 & 0.04 & 0.015 \\ \hline **Body** & 81 & 81 & 0.89 & 0.60 & 0.71 & 0.34 \\ \hline **Solar** & 81 & 133 & 0.82 & 0.60 & 0.60 & 0.27 \\ \hline **Thruster** & 81 & 50 & 1 & 0.03 & 0.18 & 0.08 \\ \hline \end{tabular} \end{table} Table 9: YOLOv5 Test Dataset 2
2303.09297
Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation. Although counterfactual explanations are normally used to explain individual predictive-instances, we explore a novel use case in which groups of similar instances are explained in a collective fashion using ``group counterfactuals'' (e.g., to highlight a repeating pattern of illness in a group of patients). These group counterfactuals meet a human preference for coherent, broad explanations covering multiple events/instances. A novel, group-counterfactual algorithm is proposed to generate high-coverage explanations that are faithful to the to-be-explained model. This explanation strategy is also evaluated in a large, controlled user study (N=207), using objective (i.e., accuracy) and subjective (i.e., confidence, explanation satisfaction, and trust) psychological measures. The results show that group counterfactuals elicit modest but definite improvements in people's understanding of an AI system. The implications of these findings for counterfactual methods and for XAI are discussed.
Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney
2023-03-16T13:16:50Z
http://arxiv.org/abs/2303.09297v1
# Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals ###### Abstract Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation. Although counterfactual explanations are normally used to explain individual predictive-instances, we explore a novel use case in which groups of similar instances are explained in a collective fashion using "group counterfactuals" (e.g., to highlight a repeating pattern of illness in a group of patients). These group counterfactuals meet a human preference for coherent, broad explanations covering multiple events/instances. A novel, group-counterfactual algorithm is proposed to generate high-coverage explanations that are faithful to the to-be-explained model. This explanation strategy is also evaluated in a large, controlled user study (N=207), using objective (i.e., accuracy) and subjective (i.e., confidence, explanation satisfaction, and trust) psychological measures. The results show that group counterfactuals elicit modest but definite improvements in people's understanding of an AI system. The implications of these findings for counterfactual methods and for XAI are discussed. XAI, Counterfactual, Group Explanation, User Study
sector. As the onset of mastitis is quite hard to predict, inappropriate treatments can be administered, in which both sick and healthy animals are given antibiotics (leading to antibiotic resistance in herds, with many consequential negative effects). Accordingly, recent SmartAg research aims to better predict the onset of this disease (Sommer et al., 2017) and to explain these predictions sensibly to farmer end-users (Sommer et al., 2018). Specifically, these AI models predict when an individual cow is likely to fall ill, so they can be isolated from the herd to prevent contagion and receive appropriate treatment. To explain these predictions, Ryan et al. (Ryan et al., 2018) used counterfactual XAI techniques. For instance, an animal with a high white blood-cell count (\(>\)100 units) might be predicted to fall ill in the coming week, thus a counterfactual explanation for this prediction may inform the farmer: "if this animal had a lower white blood-cell count (e.g., \(<\)100 units), it would have been predicted to be healthy in the coming week". Reported work with focus groups in this application area (Ryan et al., 2018) revealed that farmers were likely to see predictions for several similar animals with common features (e.g., five cows with elevated cell-counts of \(\sim\)100 units, all predicted to fall ill). In such scenarios, using a common explanation for similar predictive-instances makes sense. Instead of generating five different counterfactuals, one for each animal (e.g., "if cow-1 had a lower cell-count of 19 then it would not fall ill", "if cow-2 had a lower cell-count of 74 then it would not fall ill" and so on), it seems better to find common feature-differences that explain all five animals (e.g., "if cow-1 had a lower cell-count of _54_ then it would not fall ill", "if cow-2 had a lower cell-count of _54_ then it would not fall ill" and so on). This _group counterfactual_ strategy potentially provides a consistent counterfactual explanation, (implicitly) signalling that these instances are a related group. Farmer focus groups maintained that these explanations would be much more informative about patterns of illness in the herd and better inform herd-level decisions (e.g., about disease treatment).
Presenting users with these related counterfactuals for several predictive-instances rather than different counterfactuals for each instance may also have psychological advantages, as it should reduce memory load and facilitate pattern-finding (Sommer et al., 2018). Finally, this scenario is clearly not unique to SmartAg. For example, in medicine, related use cases occur in the diagnosis and treatment of human illnesses. In Smart Manufacturing, related scenarios occur in the prediction and explanation of breakdowns in identical machines on a production line. Indeed, the use of group counterfactuals should apply to any situation in which users encounter multiple predictions over instances that bear similarities in input features and outcomes, and where the user needs to spot patterns over these instances reflecting underlying regularities in the domain. In the next section, we elaborate on how this idea could be applied in an income-prediction domain, to work up this proposal from scratch, and to sketch how group counterfactuals might be computed (see Figures 1 and 2). ### Explaining Multiple Income Predictions In the current paper, we used the Adult1 (Census) dataset (Keen and Friedman, 2017) to build a classifier that predicted whether individuals earned under or over $50,000 in annual income (see Figure 1). In our envisioned use case, a certain level of domain expertise can be assumed for the user evaluation of explanations (e.g., farmers receiving mastitis predictions), thus our motivation for using the Adult dataset was to approximate this when evaluating it with participants from a general population. We developed a novel method - called **Group-CF** - that explains the classifications of similar individuals in the same class using a group counterfactual, implicitly collecting them together using common feature-differences (see also Figure 1). In this use case, we assume an administrator is checking the classification of multiple individuals (e.g., a bank clerk assessing customer risk). To evaluate this new explanation strategy with users, two different counterfactual XAI methods were implemented to produce test materials, the: Footnote 1: The New Adult Datasets (Keen and Friedman, 2017) could be used here and may be preferred if examining group counterfactual explanations through the lens of fairness in XAI (see also (Keen and Friedman, 2017)). * _CF-Single_ system that explained classifications using diverse, single counterfactuals for each predictive-instance based on diverse feature-differences. For instance, "if Tom had worked _50_ hours per week instead of 43, then he would have earned over $50k", "if Mary had worked _62_ hours per week instead of 40, then she would have earned over $50k", "if Joe had worked _45_ hours per week instead of 22, then he would have earned over $50k" and so on (see also example in Figure 1). * _CF-Group_ system, using the Group-CF method, that explained classifications using a group counterfactual covering several predictive-instances, re-using the same target-value in the feature-differences. For instance, "if Tom had worked _50_ hours per week instead of 43, then he would have earned over $50k", "if Mary had worked _50_ hours per week instead of 40, then she would have earned over $50k", "if Joe had worked _50_ hours per week instead of 22, then he would have earned over $50k" and so on (see also example in Figure 1). 
Figure 1 shows a detailed, two-feature-difference worked example of how these two systems differ in the counterfactual explanations they generate for the same predictive-instances. Figure 2 graphically represents the main steps in the Group-CF algorithm used in the CF-Group system. In a user study, participants in the CF-Single condition were presented with classifications that were explained using diverse, single counterfactual explanations (i.e., from the CF-Single system). Participants in the CF-Group and CF-Group-Hint conditions were presented with the same classifications, which were explained using group counterfactuals for related classified instances (i.e., from the CF-Group system). The participants in the CF-Group-Hint condition received an additional "hint" along with each explanation that informed users that the individual belonged to a related group of people, to explicitly signal the commonality between the counterfactual instances. We hypothesised that because these group counterfactuals present users with the same values in key feature-differences, they represent a broader and more coherent explanation (Keen and Friedman, 2017; Friedman et al., 2017; Ryan et al., 2018; Ryan et al., 2018) than single counterfactuals, which vary in the feature-changes they suggest. Thus, group counterfactual explanations should facilitate a better understanding of the AI system and domain (as evidenced by higher user prediction accuracy). We also predicted that this improved understanding engendered by group counterfactuals would lead to greater user confidence in their own predictions, as well as elicit higher satisfaction and trust in the AI system, relative to single counterfactuals.
### Outline of Paper & Contributions
The remaining sections of the paper provide more detail on the proposed group-counterfactual algorithm (section 2), the methodology for the user study (section 3) and its results (section 4). Though there is very little work on this type of counterfactual explanation in both AI and Psychology, we try to identify the most relevant related work (in section 5), before concluding with caveats, limitations and future directions (section 6). The paper makes several novel contributions in use case definition, algorithmic development, and user testing:
1. _Use Case Definition_: a first formulation of the _multi-prediction use case_ in counterfactual XAI, a scenario in which end-users encounter multiple, similar predictive-instances generated by a single AI system that are explained by a common explanation revealing related occurrences and patterns in predictive outcomes
2. _Algorithmic Development_: a novel counterfactual XAI algorithm - the **Group-CF** method - that groups explanations of similar predictive-instances, along with a methodology for presenting these to end-users
3. _User Testing_: the first user evaluation of this new group counterfactual XAI method, carefully designed and executed to reveal the key impacts it has on objective (i.e., accuracy) and subjective (i.e., confidence, satisfaction and trust) psychological measures of human understanding
## 2. Group-Counterfactual Algorithm
Figure 2 graphically represents the main steps in the **Group-CF** algorithm developed to meet the multi-explanation use case defined earlier (see Algorithm 1 for formalization). The algorithm's starting point is a set of related instances that have been classified in the same class by the to-be-explained model.
Figure 1. Sample Queries with Single or Group Counterfactual Explanations.
Five queries are shown in summary form with matched single or group counterfactual explanations (n.b., details for John and Sarah omitted). Note that target-values for Weekly Hours and Education in the single counterfactuals vary for each explanation, whereas those for the group counterfactuals are the same in each. By definition, all these counterfactual feature-differences flip the outcome class.
```
Require: b(.); to-be-explained black-box model
Require: D_c; instances in the training data with class label c
Require: D_c'; instances in the training data with class label c'
Require: X_q; query instance described by a vector of features [f_1, f_2, ..., f_k] such that b(X_q) = c
Retrieve, from D_c, the Nearest Like Neighbour subset pool for the query: [X_1, X_2, ..., X_n] are selected such that b(X) = c
for X in {X_q} ∪ {X_1, ..., X_n} do
    Generate an individual CFE, X', s.t. b(X') = c'
    Note the feature changed and the direction, if applicable
end for
The most commonly perturbed feature set from the individual counterfactuals informs the features changed in the Group-CF
Randomly sample feature pairs [f_i, f_j] from sub-region R of D_c'
for [f_i, f_j] in samples do
    for X in {X_q} ∪ {X_1, ..., X_n} do
        Substitute the sampled values of [f_i, f_j] into X to give X_sub
        Record whether b(X_sub) = c' (validity for X)
    end for
end for
Return the sampled feature pair that maximises coverage
Stop
```
**Algorithm 1** Group-Counterfactual Method
Figure 2. A related group of queries, (\(X_{q},X_{1},X_{2},...,X_{n}\)), are classed as being on one side of a decision boundary (e.g., Under $50k). Single counterfactuals are generated for each of these points forming a set of individual explanations, to find the feature-differences most commonly used (i.e., Education and Weekly Hours). Values for these identified features are sampled from a region of training instances in the contrasting class (i.e., Over $50k). These values are substituted into the query-instances to create candidate group counterfactuals that are checked for validity, with the best being chosen based on coverage.
### Step 1: Key-Feature Identification
Given a pool of to-be-explained instances - \(X_{q}\) and its nearest-like-neighbours (\(X_{1},X_{2},...X_{n}\)) - and an opaque black-box model, \(b\), ensure that \(b(X)=c\). Then, generate individual counterfactuals for all instances in the pool, forming a set of counterfactual instances (\(X_{q}^{\prime},X_{1}^{\prime},X_{2}^{\prime},...,X_{n}^{\prime}\)) such that \(b(X^{\prime})=c^{\prime}\). Any "traditional" counterfactual method can be used to generate these diverse counterfactuals; we used DiCE [48] due to its popularity, performance beyond baseline methods, and the availability of well-maintained open source code. Then, the difference-features on which these counterfactuals rely are analysed, to find the most commonly-used features that produce contrasting instances. For example, Education and Weekly Hours emerge here as the most frequently used difference-features that flip the classification (see Figure 1), so these would be chosen as the key features to modify in generating candidate group-counterfactuals (n.b., majority voting is just one way to do this). This step also identifies the direction-of-difference in the key features used to flip the classification (e.g., increase/decrease for a continuous feature), a property used during the next step.
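To make Step 1 concrete, here is a minimal Python sketch of key-feature identification. It assumes the publicly available `dice_ml` package is used to generate the individual counterfactuals (the paper states only that DiCE was used; the specific generation method, data handling and helper names below are illustrative assumptions rather than the authors' released code):

```python
from collections import Counter

import dice_ml
import pandas as pd


def identify_key_features(clf, train_df, outcome, pool_df, n_features=2):
    """Step 1 of Group-CF: generate one counterfactual per pool instance
    (query + nearest-like-neighbours) and return the features that are
    changed most often across those individual counterfactuals."""
    continuous = [c for c in pool_df.columns
                  if pd.api.types.is_numeric_dtype(pool_df[c])]
    data = dice_ml.Data(dataframe=train_df,
                        continuous_features=continuous,
                        outcome_name=outcome)
    model = dice_ml.Model(model=clf, backend="sklearn")
    dice = dice_ml.Dice(data, model, method="random")

    changed = Counter()
    for _, row in pool_df.iterrows():
        result = dice.generate_counterfactuals(row.to_frame().T,
                                               total_CFs=1,
                                               desired_class="opposite")
        cf = result.cf_examples_list[0].final_cfs_df.iloc[0]
        # Tally every feature whose value differs between the instance and its counterfactual.
        changed.update(f for f in pool_df.columns if cf[f] != row[f])

    return [f for f, _ in changed.most_common(n_features)]
```

The majority tally over changed features mirrors the majority-voting choice mentioned above; bookkeeping for the direction-of-difference is omitted here for brevity.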
### Step 2: Sample Feature Values
Using the key features that were found to be the most important in single counterfactuals, sample data points in the **contrasting class** to find feature-values for use in candidate group-counterfactuals. For example, for the Education feature this sampling identifies candidate values such as Bachelor's, Doctorate or Associate's degree (see Figure 2). Importantly, these sampled instances are known to be valid data points, and therefore are more likely to yield feature-value transformations that result in valid counterfactuals (n.b., unlike those generated in Step 1's single counterfactuals, which could be synthetic, invalid data points (Srivastava et al., 2016)). The data points sampled are drawn from a region that could be all the instances in the contrasting class (e.g., the Over $50k class). However, it makes more sense to reduce the size of this region (e.g., only consider data points with feature-values in the same direction-of-difference as those found earlier). We have also found that using feature-values from prototype-like instances in the selected region works well (e.g., medoids from \(k\)-medoids clustering).
### Step 3: Generating Candidates
The feature-values from the region's data points in the contrasting class are substituted for the values in the key-features of the original instances to generate candidate group-counterfactuals (see Figure 2). These perturbations are counterfactual transformations of the original instances that could cover the pool of instances.
### Step 4: Selecting the Best Explanation
The feature-value substitutions that create the candidate group-counterfactuals are checked for validity and coverage. The validity check determines whether the feature-changes flip the classification of a given instance to the contrasting class. The coverage check determines whether this classification change holds over all the original instances in the pool. Accordingly, the group counterfactual with the highest coverage is chosen as the best, to be used to explain multiple instances (see Figure 2; a short code sketch of Steps 2-4 is given below).
## 3. User Study: Method
We designed a user study to determine whether providing counterfactual explanations for a group of predictions improved people's understanding of an AI decision-making system. The study had two phases: (i) a _training phase_, in which people were shown instances from a dataset and asked to predict their outcomes before being shown the AI system's prediction along with an explanation (see Figure 3), and (ii) a _testing phase_, in which people were shown instances and asked to predict their outcomes with no feedback or explanation as to their correctness (see Figure 4). The main measure was _accuracy_, the proportion of instance-items correctly predicted in the testing phase (subjective measures of confidence, satisfaction and trust were also recorded). This train-and-test methodology has previously been used to explore the effects of counterfactual explanations on user understanding in the XAI literature (6; 6; 7; 64; 65). Participants were shown 40 instances in each phase of the study (i.e., 80 unique items in total), with no overlap between the items in each phase.
Figure 3. Sample Material from Training Phase of the user study. The participant first sees (a) with the task of providing their own prediction for the item (3 options). After they respond they are presented with (b), showing feedback on the correctness of their response along with an explanation (in this case for the CF-Group-Hint condition).
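Picking up the Group-CF method from above, Steps 2-4 can be rendered as a minimal Python sketch that consumes the key features identified in Step 1. As before, this is an illustrative reading under simplifying assumptions (plain random sampling of contrast-class rows rather than the direction-of-difference filtering or medoid-based region reduction described earlier; the function name and argument layout are hypothetical):

```python
def best_group_counterfactual(clf, train_df, outcome, pool_df, key_feats,
                              contrast_class, n_samples=100, seed=0):
    """Steps 2-4 of Group-CF: sample key-feature values from real instances of
    the contrasting class, substitute them into every pool instance, and keep
    the substitution that covers the largest fraction of the pool."""
    # Step 2: candidate values come from real (hence valid) contrast-class data points.
    contrast = train_df[train_df[outcome] == contrast_class]
    samples = contrast[key_feats].sample(n=min(n_samples, len(contrast)),
                                         random_state=seed)

    best_vals, best_cov = None, -1.0
    for _, vals in samples.iterrows():
        # Step 3: substitute the sampled key-feature values into all pool instances.
        candidates = pool_df.copy()
        for f in key_feats:
            candidates[f] = vals[f]
        # Step 4: validity = the prediction flips to the contrasting class;
        # coverage = fraction of the pool for which the flip holds.
        coverage = (clf.predict(candidates) == contrast_class).mean()
        if coverage > best_cov:
            best_vals, best_cov = vals, coverage
    return best_vals, best_cov
```

A coverage of 1.0 corresponds to a group counterfactual that explains every instance in the pool, which is the case sought for the study materials described next.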
In the training phase, the 40 items were made up of eight 5-item groups of similar instances for which group counterfactuals were generated (in the relevant conditions). Participants were placed into one of three conditions - CF-Single, CF-Group, or CF-Group-Hint - that were matched in every respect except for the type of counterfactual explanations provided during the training phase (see section 3.1 for details). The main predictions made for the study were that: (i) the provision of explanations (any explanation) will improve accuracy, that is, participant task accuracy in the testing phase will be higher than accuracy in the training phase, (ii) task accuracy in the group-counterfactual conditions will be higher than in the corresponding single-counterfactual condition, (iii) the provision of a hint in the CF-Group-Hint condition should improve accuracy relative to the CF-Group condition, and (iv) group-counterfactual explanations should improve confidence in answers, explanation satisfaction and trust in the AI system relative to the single-counterfactual condition.
### Design
The study had a 3 (Explanation: CF-Single vs CF-Group vs CF-Group-Hint) x 2 (Phase: Training vs Testing) mixed design with Explanation as a between-participant variable and Phase as a within-participant variable. The three Explanation conditions varied the counterfactual explanations provided in the training phase: _CF-Single_ presented people with instance-predictions that were explained using a single counterfactual unique to that instance (e.g., "If Joe's _Weekly hours_ had been _45_ and his _Education level_ had been _Doctorate degree_, he would have earned Over $50k"; see Figure 1), _CF-Group_ presented people with instance-predictions that were explained using group-counterfactuals based on common target-values, implicitly grouping the similar instances in a given 5-item set (e.g., "If Joe's _Weekly hours_ had been _50_ and his _Education level_ had been _Bachelor's degree_, he would have earned Over $50k"; see Figure 1), and _CF-Group-Hint_ presented people with the same group-counterfactuals as in CF-Group along with a "hint" saying the instance was "part of a group of people with similar characteristics" (see Figure 3). Note, CF-Single is essentially the control condition, with CF-Group and CF-Group-Hint being the experimental conditions, as it is matched in every respect with the two latter conditions, except in its use of non-group-counterfactual explanations (i.e., single counterfactuals).
### Participants
Participants (N=207) were recruited using _Prolific Academic_ ([https://www.prolific.co/](https://www.prolific.co/)), and assigned in a fixed order to three between-participant conditions: CF-Single (n=68), CF-Group (n=70) and CF-Group-Hint (n=69). The sample consisted of 122 women, 84 men and one non-binary person aged 18-76 years (\(M\)=40.86, \(SD\)=14.31), with respondents pre-screened to select native English speakers from Ireland, the United Kingdom, the United States, Australia, Canada and New Zealand, who had not participated in previous related studies. Prior to analysis, 19 participant responses were removed as they failed more than one attention or memory check. An _a priori_ power analysis with G*Power (15) indicated that 207 participants were required to achieve 90% power for a medium effect size with alpha = .05 for two-tailed tests.
### Materials: Dataset & Implementations
The Adult (Census) dataset describes census information (13), and contains a mixture of continuous (age, weekly work hours, capital gain, capital loss) and categorical variables (employment type, occupation, marital status, country of birth, gender, race). A model that predicted a binary outcome, whether a person is earning above or below $50k/year, was implemented using a Gradient Boosting Classifier (18) as the to-be-explained black-box model. The default sklearn hyperparameters were used, with a log loss and a learning rate of 0.1, achieving an accuracy of 0.874 on the task. Training and test data were split and pre-processed according to (34), where categorical features were encoded ordinally when possible. Instances were randomly selected from the dataset as query-items for the study. The instances selected were all correct classifications, with low predicted class confidence (within 0.15 of the other class) to make the classifications non-obvious to participants, and the selection was balanced with equal numbers for each class. For the training phase, 8 seed-instances were randomly selected (balanced across classes) and 4 nearest-like-neighbours were found for these seeds (using Hamming distance) to create the 40 queries used (i.e., eight 5-item sets of related instances). For the testing phase, all 40 items were randomly selected from the dataset as the queries to be used, balanced across classes. DiCE (48) was used to generate the individual counterfactual explanations. The DiCE-counterfactual explanations (i.e., singles) were generated for each of the 40 training-phase queries and used in the CF-Single condition. For the CF-Group conditions, the Group-Counterfactual method (see section 2) was applied to these 5-item instance-sets to find good group-counterfactuals to cover them. If a group-counterfactual was found, then the five instances with their paired individual- and group-counterfactuals were used as an item-set in the study. Finally, the sets of counterfactuals used in the CF-Single and CF-Group conditions were matched on proximity and sparsity, measured in the standard way (Yang et al., 2017; Zhang et al., 2018). Proximity was measured using \(\ell_{1}\) distance scores, scaled by the median absolute deviation (MAD) of the feature's values in the training set (for continuous features), and a metric that assigns a distance of 1 if the features differ from the original input or zero otherwise (for categorical features). These distance scores were computed for the matched query-explanation pairs used in the control and experimental conditions; a paired, two-tailed t-test showed that they were not significantly different to one another, \(t(39)=1.30,p=.197\). Sparsity was measured as the number of feature-differences between counterfactual pairs, which was always 2 for all query-explanation pairs used.
Figure 4. Sample Material in Testing Phase of the User Study.
### Measures: Accuracy & Subjective Measures
A mix of objective and subjective measures was used (see (Zhang et al., 2018) for a discussion of this distinction). The key objective measure was _accuracy_, which measured the extent to which exposure to the model's predictions and explanations in the training phase improved participants' knowledge of the domain/model; specifically, it was measured as the proportion of correct responses made in the testing phase (i.e., consistent with the model's predictions).
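To make the proximity matching described in the Materials subsection above concrete, the following is a minimal sketch of the MAD-scaled \(\ell_{1}\) distance between a query and one of its counterfactuals, with the 0/1 mismatch term for categorical features. It illustrates the stated formula only; it is not the authors' evaluation code, and the function and argument names are hypothetical:

```python
import pandas as pd


def cf_proximity(query: pd.Series, counterfactual: pd.Series,
                 train_df: pd.DataFrame,
                 continuous: list, categorical: list) -> float:
    """MAD-scaled L1 distance over continuous features plus a 0/1 mismatch
    distance over categorical features (lower means closer)."""
    dist = 0.0
    for f in continuous:
        # Median absolute deviation of the feature in the training set.
        mad = (train_df[f] - train_df[f].median()).abs().median()
        mad = mad if mad > 0 else 1.0  # guard against a zero MAD
        dist += abs(query[f] - counterfactual[f]) / mad
    for f in categorical:
        dist += 0.0 if query[f] == counterfactual[f] else 1.0
    return dist
```

Sparsity, the other matching criterion, is then simply the count of features on which the query and counterfactual differ.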
The subjective measures evaluated people's self-assessments of their (i) _confidence_ in their own predictions made in the testing phase, (ii) _satisfaction_ with the explanations used overall by the AI system, and (iii) _trust_ in the overall AI system. The latter two measures were made after people completed the training and testing phases of the study, using the DARPA Explanation Satisfaction and Trust scales (Zhang et al., 2018). To ensure that participants were fully engaged with the task, they were shown four attention checks at randomised intervals and asked to recall and select a subset of the features used by the system from a list of 10 options (5 correct, 5 incorrect) at the end of the experiment.
### Procedure
Ethics approval for the study was granted in advance by University College Dublin with the reference code _LS-E-20-11-Warren-Keane_. At the beginning of the study, participants were informed that they would be testing an AI system designed to predict people's annual income from available information about them. Participants read detailed instructions about the task and provided informed consent. After completing practice trials for each phase of the study, participants progressed through both phases of the main task, and subsequently completed the subjective measures (i.e., the Explanation Satisfaction and Trust scales (Zhang et al., 2018)). During the training phase, on each screen participants were presented with an instance without the class prediction and asked to make an _income_ prediction from three options - Under $50k / Don't know / Over $50k - as shown in Figure 3(a). Button order was randomised for each item, to prevent users from repeatedly selecting the same response. After making their prediction, users were shown feedback, with the AI model's prediction (correct answer) shown in green with a tickmark, and the incorrect answer shown in red with a cross-mark. Figure 3(b) shows an example from the CF-Group-Hint condition where a correct prediction was made and explained using a group-counterfactual with a hint. After completing the 40 items in the training phase, people progressed to the testing phase, in which they were shown 40 further instances without outcomes shown (see Figure 4) and asked to choose one of the three options to make their prediction. Here, after each response, participants rated their confidence in their prediction (using a 5-point Likert scale, from 1 (_Not at all confident_) to 5 (_Extremely confident_)). In the testing phase, participants progressed through the 40 items, providing their predictions and confidence judgments without receiving feedback or explanations. After the testing phase, participants completed the satisfaction and trust measures before concluding the study. All the items presented in both phases were randomly re-ordered for each participant to control for possible order effects in the instances seen. On completion, participants were debriefed on the background and motivation for the study and paid £2.50 for their time. The task instructions and data for the study are available at [https://osf.io/smupq/?view_only=3ce941bafcb74da825a384a7d24687](https://osf.io/smupq/?view_only=3ce941bafcb74da825a384a7d24687).
## 4. User Study: Results & Discussion
Overall, across the three conditions in the study - CF-Single, CF-Group, and CF-Group-Hint - significant improvements were observed in people's accuracy as they moved from the training to the testing phases of the experiment, an improvement that trends higher in the two CF-group conditions. However, though significant trends are found in accuracy, the relative differences between the three conditions are modest. This finding may be due in part to ceiling effects; that is, participants' accuracy was initially high in the training phase, limiting the extent to which it could improve from the provision of explanations. Interestingly, accuracy levels for item-groups were found to be strongly correlated with the sequence in which they appeared to participants in the CF-group conditions. The manipulation also showed modest improvements in subjective judgments of confidence, explanation satisfaction and trust, with significant trends found across the three conditions.
### Objective Measure: Accuracy
A 3 (Explanation: CF-Single vs CF-Group vs CF-Group-Hint) x 2 (Phase: Training vs Testing) mixed ANOVA with repeated measures on the second factor was carried out on the proportion of correct responses given by each participant (i.e., accuracy; see Figure 5). There was no main effect of Explanation, _F_(2, 204)=.174, _p_=.840; however, there was a main effect of Phase, _F_(1, 204)=25.153, _p_<.001, \(\eta_{p}^{2}\)=.11. Participants in all three conditions were more accurate in the testing phase (_M_=.829, _SD_=.096) than in the training phase (_M_=.795, _SD_=.111). The two factors did not interact, _F_(2, 204)=1.608, _p_=.203. Notably, response accuracy was quite high from the outset in the training phase (\(\sim\)80% in all three conditions), potentially leading to ceiling effects in the testing phase. The margin for improvement was low, reducing room for differences between the conditions to emerge. However, in the testing phase, there was evidence of a trend of increasing accuracy across conditions with the following order: CF-Single \(<\) CF-Group \(<\) CF-Group-Hint, Page's _L_(40)=500.0, _p_=.013 (see Figure 5). This trend indicates that when people were given group counterfactual explanations (without and with a hint), their accuracy progressively improved. Indeed, examining the improvement in accuracy (the difference between a given participant's training accuracy and their testing accuracy), we observe that the CF-Single condition shows the least improvement (\(M\)=0.019, _SD_=.089), with the CF-Group (\(M\)=0.049, _SD_=.083) and CF-Group-Hint (\(M\)=0.034, _SD_=.117) conditions scoring higher. To tunnel further into these effects, we conducted exploratory analyses on the specific item-sets used in the training phase of the study. In the training phase, participants were presented with eight distinct 5-item-sets (i.e., 40 items in total); that is, sets of 5 similar instances and predictions that were explained with the same group counterfactual in the experimental conditions (CF-Group and CF-Group-Hint) and matched with respect to the control condition (CF-Single). For each participant, the 40 items were randomly re-ordered to control for possible order effects between item-sets.
However, due to this randomisation, participants would have seen more and less favourable sequences for a given item-set; that is, the 5 items in a set could happen to be presented together in the randomised sequence (with no gaps between them) while another item-set could be interspersed with instances from other item-sets (with many gaps between items from the same set). Importantly, in the CF-Group and CF-Group-Hint conditions, this would mean that a participant could be presented with five (grouped) counterfactual explanations one after the other using the same target-feature values, presumably making it easier for them to benefit from the group-counterfactual. So, if a given item-set has lower gap-scores, one would expect higher accuracy for that set in both CF-group conditions. Accordingly, we analysed the order of items presented and calculated gap-scores for each item-set presented to participants in the three conditions of the study to check whether favourable orderings had any effect. Spearman's correlations were computed between these gap-scores for item-sets in the training phase and the accuracy observed in that phase. This analysis showed that there were negative correlations between gap-scores and accuracy in all three conditions (the lower the gap-score between items from the same set, the higher the accuracy): CF-Single (\(r_{S}\)(6)=-0.43), CF-Group (\(r_{S}\)(6)=-0.38), and CF-Group-Hint (\(r_{S}\)(6)=-0.69). Here, moderate correlations are found in the CF-Single and CF-Group conditions, with a high correlation being found in the CF-Group-Hint condition, in which participants were told that certain items were part of a group. While this correlation does not imply a causal relationship, it does present a consistent picture of the effects of group counterfactual explanations with a hint. Indeed, this finding suggests that XAI systems should present instances that are counterfactually-grouped in an unbroken sequence, to allow end-users to benefit from this ordering effect.
### Subjective Measures
A series of subjective evaluations were made by participants in the study, comprising confidence in their responses and satisfaction and trust in the AI system (see Table 1 for means and standard deviations). Overall, these measures show no main effects between conditions, though significant trends are found in increasing scores across conditions with the following order: CF-Single \(<\) CF-Group \(<\) CF-Group-Hint.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Measure} & \multicolumn{2}{c}{CF-Single} & \multicolumn{2}{c}{CF-Group} & \multicolumn{2}{c}{CF-Group-Hint} \\ \cline{2-7} & \(M\) & _SD_ & \(M\) & _SD_ & \(M\) & _SD_ \\ \hline Accuracy & 0.816 & 0.104 & 0.839 & 0.085 & 0.832 & 0.100 \\ Confidence & 3.956 & 0.432 & 3.936 & 0.437 & 4.047 & 0.432 \\ Explanation Satisfaction & 26.632 & 6.462 & 25.800 & 6.826 & 28.493 & 6.910 \\ Trust & 24.324 & 5.454 & 24.786 & 5.522 & 25.899 & 5.480 \\ \hline \hline \end{tabular} \end{table}
Table 1. Means and standard deviations for each measure in the conditions of the user study (CF-Single, CF-Group, CF-Group-Hint) for (i) _Accuracy_ (proportion of correct answers in the testing phase), (ii) _Confidence_ (ratings on a 5-point scale of each answered item in the testing phase), (iii) _Explanation Satisfaction_ (summed ratings from the 8-question DARPA-scale after the testing phase), (iv) _Trust_ (summed ratings from the 8-question DARPA-scale after the testing phase).
Figure 5. Task Accuracy (proportion of correct answers) for the three conditions (CF-Single, CF-Group, CF-Group-Hint) in the Training and Testing Phases of the Study (error bars show standard error of the mean; y-axis begins at 0.33 to show chance-level for responding).
**Confidence Judgments** for each prediction in the testing phase were measured using a 5-point Likert scale. A one-way ANOVA conducted on the mean confidence judgments made by participants showed that there was no main effect of group, _F_(2, 204)=1.424, _p_=.243. However, a reliable trend was found in the judgments made across conditions in the order: CF-Single \(<\) CF-Group \(<\) CF-Group-Hint, Page's _L_(40)=513.5, _p_<.001. Participants showed increasing confidence that their own predictions were correct after they received group-counterfactuals with hints about instances being in a group, with the other conditions showing equivalent scores.
**Explanation Satisfaction** with the explanations given by the AI system was assessed after the testing phase using the 8-question DARPA scale (Krause et al., 2017). A one-way ANOVA conducted on the summed scores showed there was no main effect of group, _F_(2, 204)=1.424, _p_=.243. However, a reliable trend was found in the judgments made across conditions in the order: CF-Single \(<\) CF-Group \(<\) CF-Group-Hint, Page's _L_(8)=105.0, _p_=.012. Here, participants showed increasing satisfaction in the explanations of the AI system, especially when they had received group counterfactuals with hints, with the other two conditions showing equivalent responding.
**Trust** in the AI system was assessed following the testing phase using the 8-question DARPA scale (Krause et al., 2017). A one-way ANOVA conducted on the summed scores showed there was no main effect of group, _F_(2, 204)=1.496, _p_=.226. However, again, a reliable trend was found in the judgments made across conditions in the order: CF-Single \(<\) CF-Group \(<\) CF-Group-Hint, Page's _L_(8)=108.0, _p_=.001. Here, participants showed increasing trust in the AI system when they received group counterfactuals with hints, with the other two conditions showing equivalent responding.
## 5. Related Work
The related work can be split into strands of prior computational and psychological research. In the computational literature (Krause et al., 2017; Krause et al., 2018; Krause et al., 2018; Krause et al., 2018), the traditional use case for counterfactual XAI tries to _explain a single predictive-instance from a single AI model using a single counterfactual_ (i.e., using the feature-differences in the counterfactual-pair that change the prediction), with the typical stakeholder being a customer end-user of that AI system (e.g., bank customer). The use case explored in this paper is directed at the same customer stakeholder but is _explaining multiple, similar predictive-instances from a single AI model using a single group-counterfactual_ (i.e., using a common feature-value in the difference-features that change the prediction; see Figure 1).
As we shall see in the next subsection, there are a few papers that advance related (but different) algorithms addressing other use cases and stakeholder groups (e.g., end-users versus modellers) in the literature. In the user-study literature, several papers report psychological tests of counterfactual XAI (e.g., (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018; Krause et al., 2018)) but none of this prior work explicitly tests the type of counterfactuals examined here (i.e., the group-counterfactual idea proposed in focus groups, see section 1.1). ### Computational: Related Algorithms Many different methods have been proposed in XAI to explain single instance-prediction from a single AI model using a single counterfactual: methods adopting optimisation approaches (Krause et al., 2017; Krause et al., 2018; Krause et al., 2018; Krause et al., 2018), instance-based approaches (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018; Krause et al., 2018), distributional approaches (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018) and other approaches (Krause et al., 2018; Krause et al., 2018) (for reviews see (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018; Krause et al., 2018)). However, none of this work explains multiple similar, predictive-instances by grouping them using a group-counterfactual, although some prior work has considered related problems for different use cases and stakeholders. Artelt et al. (Artelt et al., 2018) consider using a counterfactual to explain predictions by an ensemble of distinct models; their domain involved a water-distribution system, wherein different sections were modelled in the ensemble, relying on different sensed data, some of which come from faulty sensors. The Ensemble Consistent Explanations (ECEs) method produces a single counterfactual explanation for this set of distinct, individual predictions, a type of summary of the diverse decisions by the different models. The stakeholder in this use case appears to be a model-user deploying the ensemble (i.e., an engineer or statistician). Furthermore, the method attempts to solve a different problem to the current one, as it is _explaining multiple_, _different predictive-instances from multiple AI models using a single counterfactual_ (that is not a group-counterfactual in our sense). It should also be noted that ECEs does not seem to produce very sparse counterfactuals (\(>\) 20 feature-differences are common) relative to simpler baselines (that use \(\sim\)3 feature-differences), so it is unclear whether non-expert end-users would find them easy to understand (Krause et al., 2018; Krause et al., 2018). In other related work, Rawal & Lakkaraju (Rawal and Lakkaraju, 2018) propose the Actionable Recourse Summaries (AReS) framework to address a different use case for other stakeholders. Their method constructs global counterfactual rules that are interpretable recourse summaries for subgroups in the dataset; here, these subgroups are defined by features-of-interest (e.g., race, gender) with a view to auditing datasets for bias. AReS finds compact feature-difference rule-sets that capture counterfactual recourse for sub-populations of the dataset; for instance, showing that the recourse for foreign worker borrowers (e.g., to earn 20% more in income) might be less favourable than that for non-foreign workers (e.g., to earn 5% more in income). This method also assesses fairness using recourse costs based on the actionability of different feature-changes. 
A small user study (N=21) is reported showing that the recourse summaries generated can aid model developers in detecting model bias and discrimination (though it is unclear whether these differences are statistically significant, or if the study is sufficiently powered to detect them). Here, by comparison to the current work, the stakeholder appears to be quite different as it is a model-developer or AI system auditor. Furthermore, the method again aims to solve a different problem to the current one, as it is _explaining selected instance subgroups from a single AI model using a counterfactual summary_ (i.e., abstracted feature-differences expressed as rules), rather than a group-counterfactual. AReS is a very interesting solution to this recourse problem but addresses very different concerns to those handled by the current method. Plumb et. al. (Plumb et al., 2018) propose another counterfactual-summary method for a different use case with the same stakeholders (i.e., model developers). Their use case deals with the exploration of groups of points in low-dimensional representations in the ML pipeline. Using an initial model they propose Global Counterfactual Explanations (GCEs) to represent key feature-differences between sub-groups of points in the dataset, which are computed from what they call Transitive Global Translations (TGT). This method computes counterfactual summaries to assess the utility of the low-dimensional representations being used in an initial model. So, as with AReS, the stakeholder appears to be a model developer and the method is again attempting to solve a different problem to the current one; it is _explaining subgroups of data-points from a single AI model using a counterfactual summary_, rather than a group-counterfactual. ### Psychological: User Studies Although many criticisms have been made of the paucity of user studies in counterfactual XAI (Sundhi et al., 2017), more well-designed studies have begun to appear in response (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017). These studies address the traditional counterfactual XAI use case where an AI system's prediction for a single instance is explained using a single counterfactual (e.g., a loan application that is refused, explained with a counterfactual such as "if you had earned more income you would have been granted the loan"). As such, none of this work has considered the use case examined here, where a user is presented with many counterfactual explanations for different predictive-instances in which the same feature-differences are being used over and over counterfactual pairs (see Figure 1). Indeed, in the wider psychological literature, we have not been able to find any work that specifically examines repeated explanations of this sort. However, there is a strand of explanation research in cognitive science that has argued that explanations that are broad, simple and cover more phenomena are to be preferred (Lombrozo, 2004; Gopal et al., 2006; Gopal et al., 2006; Gopal et al., 2006). Lombrozo (Lombrozo, 2004) has argued that a good explanation accounts for a broad range of observations; for example, explanations in the form of disease diagnoses that account for three observed symptoms are judged to be better than those that account for just one (Krause et al., 2017; Krause et al., 2017). 
Providing a repeated counterfactual explanation based on the same feature-differences, one that covers multiple predictive-instances, allows people to form a common explanation that can be applied across similar examples they may encounter. Similarly, group-counterfactuals are simpler by virtue of this repetition too. As such, this literature provides background support for why people might find group counterfactuals useful in explaining the predictions of an AI system, allowing them to learn more about how the domain or system works (the main finding of the current user study).
## 6. Conclusions, Caveats & Future Work
Based on user feedback, we have advanced a new use case for counterfactual XAI that groups predictive-instances using common counterfactual explanations. A novel algorithm has been implemented for this use case and applied to generate explanations that were tested in a carefully-controlled user study. This evaluation shows that group-counterfactuals elicit modest but definite improvements in people's understanding of an AI system, as evidenced by people being more (i) accurate in their own predictions, (ii) confident about their answers, and (iii) satisfied with and trusting in the system's performance. Given these results, we outline several conclusions, caveats, and proposals for future work. First, this work opens up a new frontier in the use of counterfactual explanations in XAI. Interestingly, it highlights the importance of recognising that sometimes users may be given multiple explanations and that it makes sense to group these explanations in some way. In current counterfactual XAI research, the standard scenario adopted is a one-shot, one-way interaction between the AI system and a user; the user receives a prediction and an explanation of it, without accounting for similar instances and their explanations. Beyond this restrictive one-shot paradigm, we have uncovered one real-world scenario in which an AI system is making repeated predictions for queries that can be meaningfully grouped together to provide additional insights to users. In order to provide useful explanations and recourse, we believe that XAI research needs to move beyond traditional, simplified scenarios to more complex real-world, user-driven solutions (Krause et al., 2017). Second, the current implementation, **Group-CF**, is clearly just the first of many potential algorithmic solutions to this multi-explanation use case. The present solution consciously adopted an instance-based approach, in which users are presented with explanations that specifically modify query instances in the same way, indicating their grouping by using the same counterfactual transformation repeatedly (i.e., substituting the same target-value for multiple predictive-instances). Many other options are possible; for example, the group-counterfactual could be based on showing ranges of values or generalisations of feature-differences computed over sets of related instances. While the core focus of this work was to establish the utility of group counterfactual explanations through a user study and the contribution of a novel technique, additional algorithmic testing (e.g., robustness experiments (Krause et al., 2017)) could further motivate the utility of group explanations beyond single counterfactuals. We would encourage the community to explore these other options to advance the area.
Third, although the present user study provides firm empirical support for the use of group counterfactuals, it remains to be seen whether stronger effects can be found for other datasets and task contexts. We posit that the current findings were inhibited by ceiling effects; that is, from the outset people were already quite accurate in predicting outcomes (during the training phase), leaving them very little headroom for improvement. In other domains, where users have less expertise or where the interaction between features is more complex, we would predict stronger effects from supporting group-counterfactual explanations. Furthermore, these group counterfactuals could be presented in different ways; for example, in our tests, these common grouped-explanations typically were not in a sequence together, but if the context were to allow predictive-instances to be grouped in advance to appear together (e.g., sick cows of a similar age or breed), increased benefits of group counterfactuals for users may emerge. Finally, we would like to join recent calls for a more user-centred XAI (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017). To date, XAI research has been dominated by a "mad dash" to implement various explanation strategies; witness the \(\sim\)120 distinct methods in the counterfactual XAI literature. It is entirely possible that many of these implementations are indistinguishable from a psychological perspective; that is, many of these algorithmic variants would not change people's responses in any way. As explanations must be ultimately understood by their intended users, we emphasise the importance of appropriate user evaluation when designing new explanation methods (see e.g., (Krause et al., 2017; Krause et al., 2017)). One of the take-home messages from the current work is that even a small amount of work on soliciting user views can lead to significant advances in use-case definition and algorithmic development.
## 7. Acknowledgements
This publication has emanated from research conducted with the financial support of (i) the UCD Foundation, (ii) Science Foundation Ireland (SFI) to the Insight Centre for Data Analytics under Grant Number 12/RC/2289 P2 and (iii) SFI and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland under Grant Number 16/RC/3835 (VistaMilk).
2305.16039
Optimizing Experimental Parameters for Orbital Mapping
A new material characterization technique is emerging for the transmission electron microscope (TEM). Using electron energy-loss spectroscopy, real space mappings of the underlying electronic transitions in the sample, so-called orbital maps, can be produced. Thus, unprecedented insight into the electronic orbitals responsible for most of the electrical, magnetic and optical properties of bulk materials can be gained. However, the incredibly demanding requirements on spatial as well as spectral resolution paired with the low signal-to-noise ratio severely limit the day-to-day use of this new technique. With the use of simulations, we strive to alleviate these challenges as much as possible by identifying optimal experimental parameters. In this manner, we investigate representative examples of a transition metal oxide, a material consisting entirely of light elements, and an interface between two different materials to find and compare acceptable ranges for sample thickness, acceleration voltage and electron dose for a scanning probe as well as for parallel illumination.
Manuel Ederer, Stefan Löffler
2023-05-25T13:18:32Z
http://arxiv.org/abs/2305.16039v1
# Optimizing Experimental Parameters for Orbital Mapping ###### Abstract A new material characterization technique is emerging for the transmission electron microscope (TEM). Using electron energy-loss spectroscopy, real space mappings of the underlying electronic transitions in the sample, so-called orbital maps, can be produced. Thus, unprecedented insight into the electronic orbitals responsible for most of the electrical, magnetic and optical properties of bulk materials can be gained. However, the incredibly demanding requirements on spatial as well as spectral resolution paired with the low signal-to-noise ratio severely limit the day-to-day use of this new technique. With the use of simulations, we strive to alleviate these challenges as much as possible by identifying optimal experimental parameters. In this manner, we investigate representative examples of a transition metal oxide, a material consisting entirely of light elements, and an interface between two different materials to find and compare acceptable ranges for sample thickness, acceleration voltage and electron dose for a scanning probe as well as for parallel illumination. keywords: transmission electron microscopy, electron energy-loss spectroscopy, orbital mapping, rutile, graphite, SrTiO\({}_{3}\)-LaMnO\({}_{3}\) ## 1 Introduction The wheel of technological progress is ever turning. Improvements to aberration correctors, electron detectors and spectrometers allow us to push spatial and energy resolution to continually smaller boundaries. Atomic resolution has now been routinely achieved for both fixed-beam and scanning transmission electron microscopy (STEM) for many years [1]. Even coupled with electron energy-loss spectroscopy (EELS) [2; 3; 4] or energy dispersive x-ray (EDX) analysis [5; 6; 7], atomic resolution is now possible in the resulting energy-filtered maps. With orbital mapping [8; 9; 10], however, we strive to fly even closer to the sun. Contrary to elemental mapping, a tiny energy window of often less than 2 eV is necessary. This ensures that not the whole ionization edge is used for mapping, but only a single peak or feature of the energy loss near edge structure (ELNES), thus allowing us to map an electronic transition between specific orbitals in the sample material. In many cases, this procedure is equivalent to imaging the spatial distribution of originally unoccupied electronic orbitals near the Fermi level. Thus, using this method allows us to get a peek at the individual electronic orbitals directly responsible for basically all electronic and magnetic properties and chemical binding of a given material. Until recently, this was only possible for states located at the material surface through scanning tunneling microscopy [11; 12; 13]. Using a TEM, however, orbital mapping of bulk materials becomes possible. In principle, this allows the investigation of any region of interest in the material such as defects or internal boundaries between different phases or materials. However, the demanding needs for high spatial and spectral resolution coupled with the extremely low signal-to-noise ratio due to the tiny energy windows limit the broad applicability of this promising new method for now. Thus, it is of paramount importance to overcome these limitations by optimizing all other, more freely choosable experimental imaging parameters such as sample thickness, acceleration voltage and electron incidence dose. 
Aided by density functional theory and electron channeling simulations, our goal in this work is to find the perfect set of experimental parameters for three material classes: transition metal oxides (by the example of rutile), materials made up of light elements (by the example of graphite), and interfaces (by the example of the SrTiO\({}_{3}\)-LaMnO\({}_{3}\) heterostructure). If there exists no perfect set, we will settle for parameter ranges that still produce acceptable orbital maps. Acceptability is gauged as objectively as possible by the use of an image difference metric and the resulting difference relative to a perfect reference image [14; 15; 16]. We have chosen rutile and graphite as their electronic structure makes these materials particularly suitable for orbital mapping while their crystal structure is still relatively simple. Further, they provide the opportunity to study the effects of elastic scattering on the resulting orbital maps, as graphite consists only of light atoms while rutile consists of both light and relatively heavy atoms. The SrTiO\({}_{3}\)-LaMnO\({}_{3}\) heterostructure, in particular the region at/around the interface, has been chosen due to its comparatively complex structure and as a potential application. Mapping core loss transitions in the vicinity of material defects and interfaces allows for potentially unrivaled insight into the local electronic and magnetic properties of the material [9]. While experimental orbital maps have so far only been reported using scanning TEM, using EFTEM should be equally possible [17]. Thus, we will simulate images both for a scanning probe and for parallel illumination. This allows us to compare acceptable parameter ranges between STEM and energy-filtered TEM (EFTEM) for orbital mapping whenever a direct comparison is possible. ## 2 Methods We calculate theoretical orbital maps by employing the multislice algorithm [18; 19] for the elastic propagation of the probe electron through the sample material. The inelastic interaction between probe electrons and sample electrons is calculated with the mixed dynamic form factor [17; 20; 21] based on density functional theory data obtained with WIEN2k [22]. Energy filtering and subsequent mapping allows us to image the corresponding transition in real space [8; 9]. The energy window will be chosen infinitely small for this work in order to decrease the numerical workload and will consist only of a single energy point. Further, we focus only on core loss transitions, in particular on K and L edges, and interpret the resulting image as the spatial representation of the originally unoccupied orbital in the conduction band. EFTEM images are simulated by assuming an incident plane wave. STEM spectrum images, however, require an entire multislice calculation for each pixel with the convergent electron beam centered at the pixel's position. Thus, even when taking parallel execution into account, the STEM simulations result in often restrictively long computation times. For each material, we perform simulations for a representative range of sample thicknesses. The thinnest samples are only a single unit cell thick and result, for infinite electron dose, in the qualitatively best image. For relatively thick samples (60 - 100 unit cells), we are unfortunately quickly limited by the aforementioned STEM computation times, which is also the reason why the maximum simulated thickness varies between materials. 
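To make the elastic part of this procedure concrete, the following minimal sketch illustrates the conventional multislice transmit-and-propagate loop referred to above. It is only an illustration of the general algorithm, not the code used for this work: the inelastic (mixed dynamic form factor) part, relativistic corrections, bandwidth limiting and the WIEN2k-derived potentials are all omitted, and the function and parameter names are placeholders.

```python
import numpy as np

def multislice(psi0, potential_slices, wavelength, dz, sampling, sigma):
    """Minimal elastic multislice loop: transmit through each slice,
    then propagate a distance dz with the Fresnel propagator.

    psi0             : complex 2D array, incident wave (plane wave or focused probe)
    potential_slices : sequence of 2D arrays, projected potential of each slice
    wavelength       : electron wavelength (same length unit as `sampling`)
    dz               : slice thickness
    sampling         : real-space pixel size
    sigma            : interaction constant (depends on the acceleration voltage)
    """
    ny, nx = psi0.shape
    kx = np.fft.fftfreq(nx, d=sampling)
    ky = np.fft.fftfreq(ny, d=sampling)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    propagator = np.exp(-1j * np.pi * wavelength * dz * k2)  # Fresnel propagator

    psi = psi0.astype(complex)
    for v in potential_slices:
        psi = psi * np.exp(1j * sigma * v)                   # transmission function
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)    # free-space propagation
    return psi
```

In this picture, a plane wave would be used as `psi0` for EFTEM, whereas a STEM spectrum image would require repeating the loop for a focused probe at every scan position, which is why the STEM simulations are so much more expensive.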
Nonetheless, for the thickness-dose investigations, we have successfully managed to include selected simulations of samples thicker than 20 nm for all materials. We perform these thickness-dose investigations by including shot noise in the resulting images. By keeping a constant ratio between the electron counts in the image and the thickness dependent signal intensity, we effectively simulate a certain electron incident dose. Thus, without knowing the relation between the absolute electron incident dose and the resulting image dose, we can still give the relative incident dose and calculate the specific electron counts from a given reference case. This allows us to provide a more realistic comparison between different sample thicknesses than would be the case for infinite dose. In order to automate and objectify the evaluation of the image for a specific parameter set, we employ an image difference metric suitable for TEM [23] based on the scale invariant feature transform (SIFT) algorithm [24]. This method detects features in a reference image and compares their orientation and image gradient to the features at the same positions in any given image of interest. Based on these differences, a total image difference is assigned to the image pair. An image difference of 0 indicates identical images, and 1 the maximal achievable difference from the reference image. The parameters for the respective reference images are given in Tab. 2. The method is especially well suited for orbital mapping, as the optimal image can easily be calculated for infinite electron dose and a projected sample thickness of a single unit cell. For STEM, we have chosen \(\alpha=30\) mrad and \(\beta=50\) mrad, as these values result in highly detailed orbital maps while still being reasonably achievable in the experiment. Similarly, step sizes have been chosen to allow for enough pixels per feature (orbital), so that image difference detection still performs reliably, while not being too far from typical experimental values. Additionally, step sizes in x and y direction are different (except for rutile). We have chosen them with the restriction that the resulting number of pixels per unit cell is an integer in order to avoid artifacts. For all simulated images, spherical aberration and incoherent source size broadening were neglected. We have chosen two thresholds to further categorize the resulting image difference number, based on investigations of nanoparticle detectability using the SIFT metric [23]. Below a value of 0.18, the compared images can for all practical purposes be considered identical for current state-of-the-art TEMs, and the parameter set resulting in the image can be considered optimal. Images with difference values up to 0.3, while exhibiting already significant deviations from the reference image, still contain enough of the underlying orbital information for the parameter set to be considered acceptable here. Accordingly, we will mark these thresholds in the relevant figures throughout this work with dark green and olive coloured contour lines or backgrounds. We begin with an investigation of the relevant parameters for STEM-EELS and EFTEM separately. Afterwards, we compare the methods if possible and evaluate their respective theoretical capability of achieving orbital mapping. 
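As an illustration of the two ingredients just described — emulating a finite relative incident dose with shot noise, and bucketing an image-difference value with the 0.18 and 0.3 thresholds — the following minimal sketch shows one possible implementation. The function and parameter names are ours; the SIFT-based metric of [23] itself is not reimplemented here, and the reference electron count is a placeholder.

```python
import numpy as np

def apply_relative_dose(image, relative_dose, ref_image, ref_counts, rng=None):
    """Add Poisson shot noise corresponding to a relative incident dose.

    The total electron count is kept proportional to the (thickness dependent)
    signal intensity of the infinite-dose image, with `ref_counts` electrons
    assigned to the reference image at relative_dose = 1.
    """
    rng = rng if rng is not None else np.random.default_rng()
    counts = relative_dose * ref_counts * image.sum() / ref_image.sum()
    expected = image / image.sum() * counts       # expected electrons per pixel
    return rng.poisson(expected).astype(float)

def classify(image_difference):
    """Categorize a SIFT image-difference value with the thresholds adopted here."""
    if image_difference < 0.18:
        return "optimal"        # practically identical to the reference
    if image_difference < 0.30:
        return "acceptable"     # visible deviations, orbital information retained
    return "not acceptable"
```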
## 3 Materials and Results ### Rutile The rutile phase of TiO\({}_{2}\) is a prime target for orbital mapping due to the crystal-field-induced splitting of its unoccupied states near the Fermi level into \(t_{2g}\)-like and \(e_{g}\)-like bands [25; 26; 27; 28]. A large enough energetic separation of final states in the conduction band is crucial for orbital mapping as otherwise individual transitions to final states with different angular momentum character overlap in the resulting image. Subsequently, all directional information of the orbital map would be lost according to Unsöld's theorem [29]. In the past, the t\({}_{2g}\) transitions of Ti have already been successfully mapped in a proof-of-principle work by Löffler et al. [8] and more recently reproduced by Oberaigner et al. [30]. The O K-edge yields two different relevant orbital maps, both with a characteristic dumbbell shape for the chosen projection along the \([0\,0\,1]\) zone axis, due to the underlying electronic transition to a final state with p angular momentum character [17]. The resulting mapped orbitals are either aligned in the direction towards the nearest Ti neighbour atom or perpendicular to that direction and will be defined as p\({}_{x}\) and p\({}_{y}\), respectively. The specific energy losses of the two cases, 535.9 eV for p\({}_{x}\) and 539 eV for p\({}_{y}\), have been chosen based on the projected density of states (pDOS), ensuring high intensity of the map as well as a relatively large energy distance between the two points in order to exclude potential overlaps of experimental energy windows (see supplementary information). Orbitals with p\({}_{z}\) character, in our definition parallel to the \([0\,0\,1]\) zone axis and parallel to the electron beam, have little influence on the p\({}_{x}\) and p\({}_{y}\) orbital maps due to their symmetry for this particular geometry and will not be mapped individually. Compared to the O K-edge, the Ti L\({}_{2,3}\)-edge is harder to interpret and has more possible final wave functions. However, many of the cases can be disregarded for various reasons. Due to the chosen crystallographic direction of \([0\,0\,1]\), final orbitals with d\({}_{z^{2}}\) character can be excluded for symmetry reasons similar to the p\({}_{z}\) orbitals before. d\({}_{x^{2}-y^{2}}\) and d\({}_{xy}\) exhibit a four-lobed shape in the projection only for a one unit cell thick sample but lose any resemblance to a d-orbital for samples only a few nm thick (see Fig. S5 in the SI). Further, the general structure of the Ti DOS above the Fermi level makes it hard to single out transitions to the d\({}_{x^{2}-y^{2}}\) or d\({}_{xy}\) orbitals, which would result in orbital maps with either a strong contribution of other final states or an unfeasibly low signal-to-noise ratio. Thus, only d\({}_{xz}\) and d\({}_{yz}\) remain, both with dumbbell shapes resembling p orbitals for this specific projection. The orbital at an energy loss of 458.7 eV, projection oriented towards the closest O atoms, is defined as d\({}_{xz}\), while the orbital at 462.4 eV with a perpendicular projected orientation is defined as d\({}_{yz}\) in this work. #### 3.1.1 STEM In the following, we present the simulated STEM spectral images of the four cases discussed in the previous section. We focus on the influence of sample thickness, incident electron dose and convergence (collection) semi-angle \(\alpha\) (\(\beta\)) on the image quality, i.e. the recognizability of the original projected orbital shapes. The two upper rows in Fig. 
1 show the two relevant cases of the O-K edge. They exhibit a drastically different sample thickness dependence, due to the preferred inelastic scattering direction parallel to the orbital orientation. In the case of the p\({}_{x}\) orbital, when the sample thickness is increased, the perfect orbital shape deteriorates in the image due to the channeling effect [18, 31, 32] and due to partial waves created at neighbouring atomic columns in the bulk sample where the incident beam is already less focused. Thus, with increasing sample thickness, increasingly more intensity from the orbital shape is transferred to the closest Ti atom in the image, with the result that the two p\({}_{x}\) lobes appear as a single, long ellipse. The channeling effect is significantly less pronounced for the p\({}_{y}\) orbital due to the preferred scattering direction in between the (distorted) octahedra of the rutile crystal structure. Even for a projected sample thickness of 20 nm, the only perceivable change in the spectral image is a slight background intensity increase between the atoms. The dumbbell shape is mostly preserved. This is reflected in the comparatively low image difference indicated by the SIFT algorithm. The third row in Fig. 1 shows the transitions to final states with d\({}_{xz}\) character of the Ti-L\({}_{3}\) edge. Transitions to d\({}_{yz}\) states are not explicitly shown, as they are basically equivalent to Fig. 1(g)-(i) except for the 90\({}^{\circ}\) rotation of the orbital shapes. Unlike for the O orbitals, here, the comparatively heavy Ti atom is at the center of the orbitals. This leads to a focusing effect of the scattered electrons and with increasing thickness, the dumbbell shape shifts to a concentrated ellipse. We want to stress at this point that this is not a shortcoming that can be overcome by technical improvements of, e.g., the spatial/spectral resolution or the spot size of the electron beam, but solely by reducing the sample thickness, sometimes to unrealistic sizes. Nevertheless, while for this energy, the dumbbell shape is lost, the directional dependence is preserved for any thickness; thus, the two relevant cases with final orbital d\({}_{xz}\) and d\({}_{yz}\) can still be distinguished. The last row of Fig. 1 shows in detail the thickness dependence of the image difference of the O and Ti energy-filtered STEM images. For an acceleration voltage of 300 kV, all energy losses display a drastic difference increase in the first few nm followed by an only slight increase over the next tens of nm. However, for a voltage of 80 kV, the difference to the 300 kV reference image either fluctuates around 0.3 or increases only slightly with sample thickness. Interestingly enough, for nearly all cases, the choice of acceleration voltage and sample thickness (above 5 nm) seems to have little to no influence on the quality of the resulting orbital map. The only exception is the O p\({}_{y}\) orbital for an acceleration voltage of 300 kV, displaying a considerably lower image difference compared to the other cases due to the aforementioned reasons. While the simulated thickness scans as in Fig. 1 for an infinite electron dose are very useful for indicating general trends of the orbital maps with regard to the sample thickness, a more holistic view is gained when shot noise and the resulting signal-to-noise ratio are considered. 
We effectively simulate a certain electron incident dose by keeping a constant ratio between the electron counts in the image and the thickness dependent signal intensity. Thus, without knowing the relation between the absolute electron incident dose and the resulting image dose in a given microscope, we can still give the relative incident dose and calculate the specific electron counts from a given reference case.

| parameters | rutile | graphite | STO-LMO |
|---|---|---|---|
| thickness (1 unit cell) [Å] | 3 | 4.3 | 7.8 |
| pixel size x [Å] | 0.14 | 0.15 | 0.36 |
| pixel size y [Å] | 0.14 | 0.21 | 0.24 |
| acceleration voltage [kV] | 300 | 60 | 300 |
| STEM convergence semi-angle \(\alpha\) [mrad] | 30 | 30 | 30 |
| STEM collection semi-angle \(\beta\) [mrad] | 50 | 50 | 50 |

Table 1: Parameters of the simulated reference images.

We choose a reference image dose of \(10^{6}\) e\({}^{-}\)/nm\({}^{2}\) for the 14.8 nm thick sample, as this should result in a representative range of image doses. A broad range of relative incident doses over several orders of magnitude is investigated. The image doses range from a few electrons for the whole image to many thousands, effectively approaching an infinite dose image. The resulting noisy images of the p\({}_{x}\) orbitals are shown in Fig. 2. In order to be able to properly gauge the effect the diminished signal-to-noise ratio has on the image quality, we again apply the image difference metric and use for each energy loss the respective infinite dose reference image with parameters according to Table 2. However, as all images are normalised for the comparison, changes in the total image intensity are not factored into the image difference. The results for the previously discussed four relevant energy losses are presented in Fig. 3. All cases exhibit the same general behaviour. Basically independent of sample thickness, an incident dose not lower than 1 % of the reference incident dose is required for the orbitals to be distinguishable from the noise. Contrary to the previous cases with infinite electron dose, the minimum for a given incident electron dose is no longer found at the thickness of a single unit cell. For very low doses, the minimum actually is at the thickest investigated sample, although we conjecture that the image difference would still drop for a higher sample thickness. For very high doses, on the other hand, an actual minimum is found at about 2 nm thickness, converging toward the value for an infinite electron dose. The map of the O p\({}_{y}\) orbital (539 eV) shows a slightly different thickness dependence than the other cases, in agreement with the findings from Fig. 1. Above a thickness of about 5 nm, the image difference stays at a relatively low level and increases only negligibly with increasing thickness. This behaviour makes it the preferred case for experimental orbital mapping as long as high enough incident electron rates can be achieved without damaging the sample. Finally, for rutile STEM, we investigate the influence of the electron beam convergence semi-angle \(\alpha\) on the quality of the orbital maps (Fig. 4). For a single unit cell thick sample, we find that a higher convergence angle always results in a qualitatively better orbital map. This result is not surprising for simulations without aberrations. 
As such, we have chosen the spectral images for the highest investigated \(\alpha\) as our reference images. It appears, however, that for increasing sample thickness, the \(\alpha\) dependence of the difference almost completely disappears. Only for the smallest convergence semi-angle of 10 mrad a (marginally) higher image difference can be observed. We can conclude from this data that for realistic sample thicknesses, the choice of convergence semi-angle has little influence on the final orbital map. When, in addition, various collection semi-angles \(\beta\) are considered, it becomes apparent that, with few exceptions, \(\beta<\alpha\) leads to higher image differences. Further, with \(\beta>\alpha\) the change of image difference with increasing \(\beta\) is negligible. Considering, however, that an increase of \(\beta\) leads to a higher signal intensity and, thus, better signal-to-noise ratio, a larger \(\beta\) is advantageous in general, as long as large angle scattering effects like thermal diffuse scattering do not contribute to the signal.
Figure 1: **(a)** - **(l)** Simulated STEM spectrum images of rutile in the [0 0 1] zone axis for the indicated sample thickness and energy loss. All scale bars indicate 1 nm. Green and orange dots mark the positions of Ti and O atoms, respectively. **(m)** SIFT image difference as function of sample thickness. All image differences are relative to the respective energy loss spectral image for 0.3 nm sample thickness and 300 kV acceleration voltage.
Figure 2: Simulated STEM spectrum images of a single rutile unit cell with added shot noise for varying sample thickness. The image dose values, chosen equidistant on a logarithmic scale, are given for the 14.8 nm thick sample. For the other cases, the image dose is scaled with the norm of the specific infinite dose image in order to simulate a constant incident dose. Shot noise is applied to a \(4\times 4\) unit cell image but only a single unit cell is shown. The images are individually normalized to the respective image intensity for better clarity.
Figure 3: Image difference maps of STEM spectrum images of rutile. The incident dose is measured relative to the image dose of \(10^{6}\) e\({}^{-}\)/nm\({}^{2}\) for the image belonging to a 14.8 nm thick sample, marked by the red x. The reference image in all cases is calculated with infinite incident electron dose. The dark green and the olive contour lines indicate an image difference of 0.18 and 0.3, respectively.
#### 3.1.2 TEM In the following, we present the simulated energy-filtered TEM (EFTEM) images for the four energy losses already discussed previously. For simulation parameters where comparison to STEM is possible, such as sample thickness and incident electron dose, we will proceed fully analogously as in the previous section and compare the methods from a theoretical point of view. Instead of convergence angles, however, we will focus on the effect of (a small) defocus on the quality of the EFTEM orbital map. The upper two rows in Fig. 5 show very similar thickness dependence of the O EFTEM maps compared to the STEM spectrum images. However, for both energy losses, the degradation of the orbital shape due to elastic channeling seems even less pronounced. Nevertheless, the transition to the p\({}_{y}\) orbital, situated at 539 eV energy loss, still appears to be the most favourable of the four discussed transitions for orbital mapping. 
While the O orbitals display less elastic channeling compared to STEM, the Ti orbitals display the opposite behaviour. Both cases show a strong transfer of intensity from the orbital to the O atoms, fluctuating with the sample thickness according to the Pendellösung. Further, it appears that the specific orientation of the orbital does not influence the amount of elastic channeling that occurs. The thickness dependent image difference further highlights the difference in thickness dependence between EFTEM [Fig. 5(m)] and STEM [Fig. 1(m)] images. Firstly, with the exception of the O p\({}_{y}\) orbital, image differences are generally higher for EFTEM. At a thickness of one unit cell, however, the differences between 80 kV and 300 kV acceleration voltage are almost non-existent for EFTEM (under ideal conditions), in line with the reciprocity theorem [33; 34]. Secondly, the image difference fluctuates with sample thickness, especially noticeable for the Ti orbitals. While for STEM, a higher sample thickness generally means a higher image difference, this is not the case for EFTEM. On the contrary, in multiple cases, the image quality actually improves with thickness in some regions due to Pendellösung effects. This shows that finding the optimal sample thickness for EFTEM orbital mapping is entirely dependent on the particular energy loss, acceleration voltage, sample, and orientation. Similar to the previous section, the dose dependent image difference maps (Fig. 6) further support the findings from the images with infinite electron dose. In general, the image difference maps for EFTEM are very similar to the STEM image difference maps (Fig. 3). Notable exceptions are the even less pronounced thickness dependent changes for O p\({}_{y}\) images and the image difference fluctuations with sample thickness for Ti d\({}_{xz}\) and d\({}_{yz}\). Finally, we investigate the influence of defocus on the EFTEM maps for the largest rutile sample thickness investigated in this work, i.e., a sample 100 unit cells thick, corresponding to 29.6 nm (Fig. 7). We use the following sign convention: a negative defocus shifts the objective focus point from the bottom of the sample higher, i.e., inside the sample. For 80 kV, it becomes apparent that the image difference introduced by the sample thickness is too large for the defocus to be able to positively influence it to an acceptable range, with the singular exception of \(-3\) nm defocus for the p\({}_{y}\) orbital. Further, no clear trend can be discerned concerning the influence a defocus has on the image difference. For 300 kV, however, a clear trend is visible. A slightly negative defocus actually improves the similarity between the EFTEM image and the reference image, with the image difference minimum between \(-3\) nm and \(-5\) nm for all investigated energy losses. A positive defocus, however, has the opposite influence and very quickly (\(>2\) nm) worsens the energy-filtered image to the point of unrecognizability of any orbitals.
Figure 4: Rutile SIFT image difference as a function of sample thickness for an acceleration voltage of 300 kV and several convergence and collection semi-angles \(\alpha\) and \(\beta\). All image differences are relative to the orbital map for \(\alpha\) = 30 mrad and \(\beta\) = 50 mrad.
Figure 5: (a) - (l) Simulated energy-filtered TEM images of rutile in the \([0\:0\:1]\) zone axis for the indicated sample thickness and energy loss. All scale bars indicate 1 nm. Green and orange dots mark the positions of Ti and O atoms, respectively. (m) SIFT image difference as function of sample thickness. All image differences are relative to the respective energy-filtered image for 0.3 nm sample thickness and 300 kV acceleration voltage.
Figure 6: Image difference maps of EFTEM images of rutile. The incident dose is measured relative to the image dose of \(10^{6}\) e\({}^{-}\)/nm\({}^{2}\) for the image belonging to a 14.8 nm thick sample, marked by the red x. The reference image in all cases is calculated with infinite incident electron dose. The dark green and the olive contour lines indicate an image difference of 0.18 and 0.3, respectively.
Figure 7: SIFT image difference of EFTEM images as a function of defocus for acceleration voltages of 80 kV and 300 kV and a sample thickness of 29.6 nm. All image differences are relative to the orbital map for 0 defocus.
### Graphite For our second material, we have chosen graphite, as it consists entirely of light elements, in contrast to the material of the previous section, rutile.
Graphite, especially its two-dimensional single-layer form graphene, has received considerable attention in the last \(\sim\)20 years [35, 36, 37], in particular, but not exclusively, due to its potential manifold applications in electronics. Graphite consists of carbon layers in a hexagonal structure weakly bound by van der Waals forces. The high rotational symmetry of the individual graphene layers dictates the crystallographic direction necessary for orbital mapping. Conventionally, for graphene and similar two-dimensional materials, a top-down view is employed. In our case, however, this direction would result only in rotationally symmetric maps with negligible differences between different energy loss windows for pristine graphite [9, 38]. Thus, a side view is required and we opt to choose the \([1\,0\,\overline{1}\,0]\) zone axis, the same as in [10]. Two peaks dominate the DOS above the Fermi energy and, likewise, the energy loss near edge structure (ELNES) of the graphite K-edge. The first peak is a result of the \(\pi^{*}\) orbital, having mainly p\({}_{z}\) angular momentum character. The second peak is a result of the \(\sigma^{*}\) orbital with a mixture of p\({}_{x}\), p\({}_{y}\) and s atomic orbitals that combine to an sp\({}^{2}\) orbital. We simulate energy-filtered maps at 285.6 eV and 292.5 eV energy loss for the \(\pi^{*}\) and the \(\sigma^{*}\) orbitals, respectively. Contrary to the previous case of rutile, here, the slowly changing DOS would allow for relatively large energy windows where the resulting orbital maps are basically the same. For graphite, we only consider a high tension of 60 kV as a higher acceleration voltage would already induce significant knock-on damage in the sample [39]. #### 3.2.1 STEM The resulting STEM spectrum images are shown in Fig. 8. In the upper row, spectrum images for an energy loss of 285.6 eV are displayed. For a single unit cell thick sample, the characteristic dumbbell shape of the p\({}_{z}\) orbitals is clearly visible. However, even for this thinnest possible sample, there is a small local intensity maximum at the positions of the C atoms due to the elastic scattering caused by the atomic columns. This effect is more pronounced with increasing sample thickness, resulting in the complete deterioration of the imaged dumbbell shapes to smeared out ellipses. 
As the onset of this reshaping already occurs at thicknesses smaller than 5 nm, in practice, there is effectively no way to prevent it experimentally. Nevertheless, due to the channeling effect also being present in the reference image, the resulting image differences are relatively small. The electron dose dependent image difference map in Fig. 8(g) further confirms this finding. A similar trend can be observed for the spectrum images for an energy loss of 292.5 eV. Due to the projection, the \(\sigma^{*}\) orbitals appear as circular shapes significantly more confined to the atomic rows compared to the \(\pi^{*}\) orbitals. Horizontally, however, the intensity maxima are localized between the carbon positions. Thus, the elastic channeling effect of a thicker sample results primarily in a horizontal blurring and a comparatively low image difference from the reference image. In the dose-thickness difference map [Fig. 8(h)], a similar trend to Fig. 8(g), albeit more pronounced, can be observed. There is only little further increase of the image difference above 7 nm for all electron doses above 1 % relative dose. In contrast to the difference map of the \(\pi^{*}\) orbital, however, for medium to high electron doses, the image difference converges to the low value of 0.1. In general, the graphite spectrum images appear to be a little more robust with respect to thickness variations than the rutile spectrum images of the previous section. Possible explanations for this robustness are the low high tension of 60 kV (the reference image itself appears already less detailed) and that carbon is the only element in the unit cell (less chance for noticeable dechanneling). For changes in the electron beam convergence semi-angle \(\alpha\), however, there are significant differences to before. In the case of rutile, we arrived at the more or less general conclusion that for realistic sample thicknesses, the convergence angle played no significant role at all. For the \(\pi^{*}\) orbital of graphite, however, the choice of \(\alpha\) is crucial [Fig. 9(a)]. The larger beam crossover for \(\alpha=20\) mrad results in the loss of the ability to resolve the individual dumbbells in the image. Further, \(\alpha=10\) mrad results in an effect similar to a contrast inversion. The beam crossover is large enough that the probe can excite inelastic transitions in two layers when the beam is centered between layers. Centered on the atomic layers, however, only transitions in the respective layer can be triggered, resulting in less intensity in the STEM-EELS map and a contrast inversion. For a sample thickness between 9 nm and 15 nm, however, we theorize that channeling effects again reverse this effect. The result is a large decrease in image difference for this thickness range. The thickness and \(\alpha\) dependent STEM-EELS maps can be seen in Fig. S13 in the supplementary information. In the case of the \(\sigma^{*}\) orbital, however, the convergence angle again has very little influence on the quality of the orbital map [Fig. 9(b)]. #### 3.2.2 TEM In the following, we present the EFTEM orbital maps of the graphite \(\pi^{*}\) and \(\sigma^{*}\) orbitals simulated with parameters corresponding to those of the STEM spectrum images in the previous subsection. While the basic shape of the orbitals in the reference images for a one unit cell thick sample is in principle the same as in the STEM images, there are some distinctions that influence the SIFT image differences. 
The orbitals in EFTEM for both energy losses appear sharper and with clearer edges. Further, for the \(\pi^{*}\) orbitals, there is no intensity at all directly at the positions of the carbon atoms; the two lobes of the p orbital are distinctly separated. This was also the case for all investigated sample thicknesses. Further, an increase in sample thickness results in a large intensity increase between the atomic columns. This effect continues up to the point where the background intensity is nearly as high as the intensity of the orbitals themselves. For finite incident electron dose, this effect is especially pronounced with the result of a very large image difference. The image difference map in Fig. 10(g) shows a steep jump in image difference around 14 nm, beyond which the features of the orbitals are no longer discernible from the background. For the \(\sigma^{*}\) orbital the preferred inelastic scattering direction is mainly in the plane of the carbon sheets, leading to the opposite result compared to the \(\pi^{*}\) case. The circular shapes at the positions of the carbon atoms are fully smeared out in the horizontal direction for samples exceeding 5 nm. We want to emphasize here that this effect is a fundamental result of the electron propagation through the material, i.e., it cannot be overcome with an increase in spatial or spectral resolution, only with a thinner sample. The thickness-dose difference map, however, reveals an oscillatory thickness dependence, similar to the rutile EFTEM images. Due to this oscillation, a sample thickness between 18 nm and 22 nm again nearly allows individual orbitals to be distinguished. As for rutile, we have investigated the dependence of the image difference on the defocus (Fig. 11). For the energy loss of 285.6 eV, no image improvement is achieved by either under- or overfocusing. On the contrary, for relatively thick samples of 25.6 nm, already a small defocus of 1 nm in either direction is detrimental for the orbital image. This again highlights the tremendous precision required for orbital mapping. For the energy loss of 292.5 eV, we see, similar to the rutile EFTEM images, a significant improvement of image quality for a small (between \(-5\) and \(-2\) nm) defocus. In this case, however, the sample thickness has little influence on the effect of the defocus on the image.
Figure 8: (a) - (f) Simulated STEM spectrum images of graphite in the \([1\,0\,\overline{1}\,0]\) zone axis for the indicated sample thickness and energy loss with an infinite electron dose. All scale bars indicate 0.5 nm. Green dots mark the positions of C atoms. (g), (h) Image difference maps of the graphite spectrum images for the indicated energy losses. The incident dose is measured relative to the image dose of \(10^{6}\) e\({}^{-}\)/nm\({}^{2}\) for the image of a 12.8 nm thick sample, marked by the red x. The reference image in all cases is calculated with infinite electron dose. The dark green and olive contour lines indicate an image difference of 0.18 and 0.3, respectively.
Figure 9: Graphite SIFT image difference as a function of sample thickness for an acceleration voltage of 60 kV and several convergence and collection semi-angles \(\alpha\) and \(\beta\). All image differences are relative to the orbital map for \(\alpha\) = 30 mrad and \(\beta\) = 50 mrad.
Figure 10: Exactly the same as Fig. 8 but with simulated EFTEM images.
Figure 11: SIFT image difference of EFTEM images as a function of defocus for graphite sample thicknesses of 9.8 nm and 25.6 nm. All image differences are relative to the orbital map for 0 defocus.
### STO-LMO The last structure of our parameter investigation is a heterostructure consisting of the two perovskites SrTiO\({}_{3}\) (STO) and LaMnO\({}_{3}\) (LMO). 
Perovskites are a structure model of ever-increasing public interest [40; 41] due to their extreme range in synthesizable properties like superconductivity, ferroelectricity and ferromagnetism [42]. The combination of thin layers of different perovskites into a heterostructure allows the already widespread application of perovskites as lasers, LEDs, photodetectors and solar cells [43; 44; 45; 46; 47; 48]. The cubic perovskite STO has the space-group \(P\,m\,\overline{3}\,m\) with lattice parameter \(a_{\mathrm{STO}}=3.905\) Å [49]. In contrast, LMO displays rhombohedral symmetry according to the \(R\,\overline{3}\,c\) space group with lattice parameters \(a_{\mathrm{LMO}}=5.508\) Å and \(c_{\mathrm{LMO}}=13.310\) Å [50]. While the structure is very similar to the cubic perovskite structure, the MnO\({}_{6}\) octahedra are rotated a few degrees relative to each other, reducing the crystal symmetry [51]. For epitaxial deposition on a STO substrate, however, LMO is formed in a pseudo-cubic perovskite structure with an effective lattice parameter \(a_{\mathrm{effLMO}}=3.914\) Å [52; 53]. Due to the almost perfect match between the structure parameters (0.2 % mismatch), a stable heterostructure can be constructed. Computational constraints limit us to a periodic structure of 4 unit cells of each of the materials. Both a Ti-La terminated interface and a Mn-Sr terminated interface are present in the structure (presented in the supplementary information). However, we will exclusively focus on the Ti-La interface including the first adjacent unit cells; thus, the supercell size is more than sufficient. Due to the presence of oxygen in both perovskites, we have selected the O K-edge for orbital mapping. In particular, two energy losses are of interest for the \([1\,0\,0]\) zone axis. At 533.4 eV, the imaged orbitals of the specific O atoms located between Sr atoms display an orientation perpendicular to the interface, which we will define as p\({}_{x}\) orbitals in this work. The corresponding LMO O atoms located between La atoms, however, display an orthogonal direction, which we will define as p\({}_{y}\) orbitals. These specific orbitals are of special interest as they, contrary to most of the remaining O orbitals, display a consistent orientation over all unit cells of the corresponding material and over energy windows as large as 1 eV (Fig. S4 in the SI). At the second chosen energy loss, however, at 536.7 eV, the orbitals swap their orientation with each other. The previous p\({}_{x}\) orbitals of STO have changed to p\({}_{y}\) orbitals and vice versa for LMO. Even though the mentioned orbitals are of particular interest to us, we will present EFTEM and STEM spectrum images of all the orbitals in close proximity to the Ti-La terminated interface. Likewise, the SIFT image difference will take into account all the orbitals. #### 3.3.1 STEM The resulting STEM spectrum images are shown in Fig. 12. The upper row in Fig. 12 displays the result for an energy loss of 533.4 eV. There is a clear distinction between the two parts of the heterostructure. In the region of interest, marked by the red rectangle, the p-like orbitals are oriented perpendicular to each other. 
Another noticeable difference is that in the LMO region, all observed orbitals are blurry and less distinct. This is a direct consequence of the rotation of the MnO\({}_{6}\) octahedra. The slightly varying projected positions of the O atoms lead to an overlay of orbital images and, thus, less distinct shapes. Increasing the sample thickness results in a relative intensity increase at the positions of the heavier Ti and Mn atoms due to channeling effects. Further, at these positions, the originally p-like shapes deteriorate to a circularly symmetric shape. Thus, all potentially extractable directional orbital information is lost. On the other hand, this relative intensity increase leads to an intensity decrease in our region of interest and the STO p\({}_{x}\) orbitals have nearly vanished for a sample thickness larger than 10 nm. The situation is very similar for the energy loss of 536.7 eV. While the effects of channeling along Ti and Mn columns are just as present, there are a few differences. As mentioned before, most of the orbitals have switched their orientation, in particular all of the orbitals in the marked region. However, of the STO p\({}_{y}\) orbitals, only the one closest to the interface is visible. Nevertheless, for both energy losses, the majority of details and orbital information vanishes for increasing sample thickness. The thickness-dose plots [Fig. 12(g) and (h)] further confirm this. Although the image difference appears to reach a plateau above 8-10 nm for all incident doses, the difference values are in both cases above the acceptable threshold and, in general, significantly higher than for the previous materials. Increasing or decreasing the electron beam convergence semi-angle \(\alpha\) appears to have no significant influence on the orbital map quality for a sample thickness above 10 nm (Fig. 13). This is in line with the \(\alpha\) dependence of STEM orbital maps for rutile (Fig. 4) and the \(\sigma^{*}\) orbital of graphite [Fig. 9(b)], although for STO-LMO, the effect is even more pronounced. #### 3.3.2 TEM The EFTEM images for the STO-LMO heterostructure are presented in Fig. 14 and are remarkably similar to the STEM spectrum images presented in the previous section. However, while an increasing sample thickness still shows a relative intensity shift to the Ti and Mn sites, it is less severe than in the STEM case. The resulting image difference numbers are relatively low, even for realistically thick samples. The thickness-dose plots [Fig. 14(g) and (h)] show acceptable differences up to 17 nm for 533.4 eV and up to 20 nm for 536.7 eV, as long as a high enough electron dose is used. In the latter case, the effect of the Pendellösung can be clearly observed, in the same way as for the rutile and graphite EFTEM images before. There is a periodicity in the image difference dependent on the sample thickness which leads to an "island" of low image difference from 11 nm to 13 nm. Taking these effects into account, it seems that for this particular structure, EFTEM is more suited to the task from a purely image-forming point of view. A small defocus of \(-1\) nm to \(-3\) nm can further improve the image quality, following the same apparent pattern as the other investigated materials but with less ambivalence in this case (Fig. 15). ## 4 Discussion and Conclusion In this work, we have successfully applied an image difference metric to energy-filtered (S)TEM images with the goal of identifying optimal sets of parameters for imaging. 
The specific materials, i.e., rutile, graphite and a heterostructure of SrTiO\({}_{3}\)-LaMnO\({}_{3}\), and the energy windows have been selected such that underlying electronic transitions can be mapped in real space. Thus, in principle, these orbital maps allow us to directly measure the spatial dependence of initially unoccupied orbitals above the Fermi energy. We have accomplished the task of finding suitable sets of parameters by calculating and minimizing the image difference between a perfect, albeit unachievable, image and the image for the specific set of parameters. In the process of finding the optimal, or at least acceptable, sample thickness of the chosen materials, we have encountered a fundamental difference between EFTEM images and STEM spectral images concerning the dependence of the image difference on sample thickness. While the image difference of the STEM orbital maps increases fairly monotonically with the thickness, the EFTEM orbital maps display a periodicity which leads in many cases to comparably thicker samples still being acceptable. Further, in almost all the investigated cases, a slight defocus of \(-2\) nm to \(-5\) nm further decreased the EFTEM image difference. It appears that, in principle, EFTEM has the advantage over STEM
Figure 12: (a) – (f) Simulated STEM spectrum images of the interface between STO and LMO in the \([1\,0\,0]\) zone axis for both materials, the indicated sample thickness and energy loss with an infinite electron dose. All scale bars indicate 1 nm. (g), (h) Image difference maps of the STO-LMO spectrum images for the indicated energy losses. The incident dose is measured relative to the image dose of \(10^{6}\) e\({}^{-}\)/nm\({}^{2}\) for the image of a 15.6 nm thick sample, marked by the red x. The reference image in all cases is calculated with infinite electron dose. The dark green and olive contour lines indicate an image difference of 0.18 and 0.3, respectively.
Figure 13: STO-LMO SIFT image difference as a function of sample thickness for an acceleration voltage of 300 kV and several convergence and collection semi-angles \(\alpha\) and \(\beta\). All image differences are relative to the orbital map for \(\alpha\) = 30 mrad and \(\beta\) = 50 mrad.
2302.05784
On the number of edges of cyclic subgroup graphs of finite groups
In this note, we show that among finite nilpotent groups of a given order or finite groups of a given odd order, the cyclic group of that order has the minimum number of edges in its cyclic subgroup graph. We also conjecture that this holds for arbitrary finite groups.
Marius Tărnăuceanu
2023-02-11T21:13:10Z
http://arxiv.org/abs/2302.05784v1
# On the number of edges of cyclic subgroup graphs of finite groups ###### Abstract In this note, we show that among finite nilpotent groups of a given order or finite groups of a given odd order, the cyclic group of that order has the minimum number of edges in its cyclic subgroup graph. We also conjecture that this holds for arbitrary finite groups. **MSC2000 :** Primary 20D60; Secondary 20D15, 05C07. **Key words :** cyclic subgroup graphs, finite groups, element orders. ## 1 Introduction Recently, mathematicians constructed many graphs which are assigned to groups by different methods (see e.g. [4]). One of them is the _subgroup graph_\(L(G)^{*}\) of a group \(G\), that is the graph whose vertices are the subgroups of \(G\) and two vertices, \(H_{1}\) and \(H_{2}\), are connected by an edge if and only if \(H_{1}\leq H_{2}\) and there is no subgroup \(K\leq G\) such that \(H_{1}<K<H_{2}\). Finite groups with planar subgroup graph have been classified by Starr and Turner [14], Bohanon and Reid [3], and Schmidt [13]. Also, the genus of \(L(G)^{*}\) has been studied by Lucchini [10]. In the current note, we will focus on a remarkable subgraph of \(L(G)^{*}\), namely the _cyclic subgroup graph_\(C(G)^{*}\) of \(G\). Its vertex set is the poset \(C(G)\) of cyclic subgroups of \(G\) and, similarly, two vertices \(H_{1}\) and \(H_{2}\) are connected by an edge if and only if \(H_{1}\leq H_{2}\) and \(H_{1}<K<H_{2}\) for no cyclic subgroup \(K\) of \(G\). Note that the (cyclic) subgroup graph of a group \(G\) is the Hasse diagram of the poset of (cyclic) subgroups of \(G\), viewed as a simple, undirected graph. We will prove that if \(G\) is a nilpotent group of order \(n\) or a group of odd order \(n\), then the minimum number of edges of \(C(G)^{*}\) is attained for the cyclic group \(\mathbb{Z}_{n}\). This fact was somewhat expected, since finite group theory abounds with functions attaining their minimum/maximum at cyclic groups, such as the function sum of element orders [1], the function number of (cyclic) subgroups [12, 6, 7] or the function number of edges of (directed) power graph [5]. Our main result is as follows. **Theorem 1.1**.: _Let \(G\) be a finite group of order \(n\). If \(G\) is nilpotent or \(n\) is odd, then_ \[|E(C(G)^{*})|\geq|E(C(\mathbb{Z}_{n})^{*})|.\] _Moreover, we have equality if and only if \(G\cong\mathbb{Z}_{n}\)._ For the proof of Theorem 1.1, we need the following theorem. **Theorem**.: _Let \(G\) be a finite group of order \(n\). Then there is a bijection \(f:G\longrightarrow\mathbb{Z}_{n}\) such that \(o(a)\) divides \(o(f(a))\), for all \(a\in G\)._ This has been formulated as a question by Isaacs (see Problem 18.1 in [11]) and proved for solvable groups by Ladisch [9]. A proof for arbitrary groups has been recently proposed by Amiri [2]. We point out that we need this bijection only for groups of odd order, which are solvable. We conjecture that the inequality \(|E(C(G)^{*})|\geq|E(C(\mathbb{Z}_{n})^{*})|\) also holds for any group \(G\) of even order \(n\). Note that in this case there are examples of non-cyclic groups \(G\) of order \(n\) with \(|E(C(G)^{*})|=|E(C(\mathbb{Z}_{n})^{*})|\), such as \[|E(C(A_{4})^{*})|=|E(C(\mathrm{Dic}_{3})^{*})|=|E(C(\mathbb{Z}_{12})^{*}|=7.\] Most of our notation is standard and will usually not be repeated here. For basic notions and results on groups we refer the reader to [8]. ## 2 Proof of the main result We start by proving some auxiliary results. 
**Lemma 2.1**.: _Let \(G_{i}\), \(i=1,...,k\), be finite groups of coprime orders and \(G=G_{1}\times\cdots\times G_{k}\). Then_ \[|E(C(G)^{*})|=\left(\sum_{i=1}^{k}\frac{|E(C(G_{i})^{*})|}{|C(G_{i})|}\right) \prod_{i=1}^{k}|C(G_{i})|. \tag{1}\] _In particular, if \(n=p_{1}^{n_{1}}\cdots p_{k}^{n_{k}}\) is the decomposition of \(n\in\mathbb{N}^{*}\) as a product of prime factors, then_ \[|E(C(\mathbb{Z}_{n})^{*})|=\left(\sum_{i=1}^{k}\frac{n_{i}}{n_{i}+1}\right)\prod _{i=1}^{k}(n_{i}+1). \tag{2}\] Proof.: The equality (1) follows immediately by induction on \(k\). For the equality (2), it suffices to observe that \(|C(\mathbb{Z}_{p_{i}^{n_{i}}})|=n_{i}+1\) and \(|E(C(\mathbb{Z}_{p_{i}^{n_{i}}})^{*})|=n_{i}\), for all \(i=1,...,k\). A simple way to count the number of edges in the cyclic subgroup graph of a finite group is given by the following lemma. **Lemma 2.2**.: _Let \(G\) be a finite group. Then_ \[|E(C(G)^{*})|=\sum_{a\in G}\frac{\omega(o(a))}{\varphi(o(a))}\,, \tag{3}\] _where \(\omega(d)\) is the number of distinct primes dividing \(d\in\mathbb{N}^{*}\) and \(\varphi\) is the Euler's totient function._ Proof.: We have \[|E(C(G)^{*})|=\sum_{H\in C(G)}|\mathrm{Max}(H)|,\] where \(\mathrm{Max}(H)\) denotes the set of maximal subgroups of \(H\in C(G)\). Clearly, this leads to (3) because a cyclic subgroup \(H=\langle a\rangle\) of \(G\) has \(\omega(o(a))\) maximal subgroups and \(\varphi(o(a))\) generators. The next lemma shows that the function in the right side of (3) is decreasing with respect to divisibility on the set of odd positive integers. **Lemma 2.3**.: _Let \(d\) and \(d^{\prime}\) be two odd positive integers such that \(d\mid d^{\prime}\) and \(d\geq 3\). Then_ \[\frac{\omega(d)}{\varphi(d)}\geq\frac{\omega(d^{\prime})}{\varphi(d^{\prime}) }\,. \tag{4}\] _Moreover, we have equality if and only if either \(d=d^{\prime}\) or \(d\) is a prime power \(p^{\alpha}\) with \(p\geq 5\) and \(d^{\prime}=3d\)._ Proof.: Let \(d=p_{1}^{\alpha_{1}}\cdots p_{r}^{\alpha_{r}}\) and \(d^{\prime}=p_{1}^{\beta_{1}}\cdots p_{r}^{\beta_{r}}p_{r+1}^{\beta_{r+1}}\cdots p _{s}^{\beta_{s}}\), where \(1\leq\alpha_{i}\leq\beta_{i}\) for all \(i=1,...,r\) and \(r\leq s\). Then (4) becomes \[p_{1}^{\beta_{1}-\alpha_{1}}\cdots p_{r}^{\beta_{r}-\alpha_{r}}p_{r+1}^{\beta_ {r+1}-1}\cdots p_{s}^{\beta_{s}-1}(p_{r+1}-1)\cdots(p_{s}-1)\geq\frac{s}{r}\,,\] which is true for \(s=r\). So, we can assume that \(s\geq r+1\). We remark that it suffices to show \[(p_{r+1}-1)\cdots(p_{s}-1)\geq\frac{s}{r}\,.\] Since each \(p_{i}\) is odd, we get \[(p_{r+1}-1)\cdots(p_{s}-1)\geq 2^{s-r}.\] On the other hand, we can easily see that the function \(f(x)=2^{x-r}-\frac{x}{r}\) is increasing for \(x\geq r+1\). Thus \[2^{s-r}-\frac{s}{r}\geq 2-\frac{r+1}{r}\geq 0,\] as desired. Note that the equality occurs in (4) if and only if either \[r=s\text{ and }\alpha_{i}=\beta_{i},\forall\,i=1,...,r\] or \[r=1,s=2,\alpha_{1}=\beta_{1},\beta_{2}=1\text{ and }p_{2}=3,\] that is if and only if either \(d=d^{\prime}\) or \(d=p_{1}^{\alpha_{1}}\) with \(p_{1}\geq 5\) and \(d^{\prime}=3d\). This completes the proof. We mention that the inequality (4) is not true when \(d^{\prime}\) is even. For example, for \(d\) odd and \(d^{\prime}=2d\) we have \[\frac{\omega(d)}{\varphi(d)}<\frac{\omega(d)+1}{\varphi(d)}=\frac{\omega(d^{ \prime})}{\varphi(d^{\prime})}\,.\] We are now able to prove our main result. 
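Before turning to the proof, the counting formulas above are easy to check numerically for cyclic groups. The following minimal sketch (an illustration added here, assuming a Python environment with sympy; it is not part of the original paper) evaluates equations (2) and (3) and reproduces, for instance, the value \(|E(C(\mathbb{Z}_{12})^{*})|=7\) quoted in the Introduction.

```python
from sympy import factorint, gcd, primefactors, totient

def edges_cyclic_formula(n):
    """|E(C(Z_n)^*)| via equation (2): (sum_i n_i/(n_i+1)) * prod_i (n_i+1)."""
    exponents = factorint(n).values()
    prod = 1
    for e in exponents:
        prod *= e + 1
    return sum(e / (e + 1) for e in exponents) * prod

def edges_element_sum(n):
    """|E(C(Z_n)^*)| via equation (3): sum over a in Z_n of omega(o(a)) / phi(o(a))."""
    total = 0.0
    for a in range(n):
        order = n // int(gcd(a, n))          # order of the element a in Z_n
        total += len(primefactors(order)) / int(totient(order))
    return total

# The two formulas agree and reproduce |E(C(Z_12)^*)| = 7 from the Introduction.
assert round(edges_cyclic_formula(12)) == round(edges_element_sum(12)) == 7
```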
**Proof of Theorem 1.1.** Assume first that \(G\) is nilpotent and let \(G=G_{1}\times\cdots\times G_{k}\) be its decomposition as a direct product of Sylow subgroups. Denote \(|G_{i}|=p_{i}^{n_{i}}\), \(i=1,...,k\). By Theorem 2.5 of [7], we have \[|C(G_{i})|\geq|C(\mathbb{Z}_{p_{i}^{n_{i}}})|=n_{i}+1,\] with equality if and only if \(G_{i}\cong\mathbb{Z}_{p_{i}^{n_{i}}}\). Using Lemma 2.2, this leads to \[|E(C(G_{i})^{*})| =\sum_{H\in C(G_{i})}|\text{Max}(H)|=\sum_{H\in C(G_{i})\backslash 1}1 =|C(G_{i})|-1\] \[\geq|C(\mathbb{Z}_{p_{i}^{n_{i}}})|-1=|E(C(\mathbb{Z}_{p_{i}^{n_{ i}}})^{*})|=n_{i}.\] Then equalities (1) and (2) in Lemma 2.1 imply that \[|E(C(G)^{*})|\geq|E(C(\mathbb{Z}_{n})^{*})|,\] with equality if and only if \(G\cong\mathbb{Z}_{n}\). Assume next that \(n\) is odd and let \(f:G\longrightarrow\mathbb{Z}_{n}\) be a bijection such that \(o(a)\) divides \(o(f(a))\), for all \(a\in G\). Then \[\frac{\omega(o(a))}{\varphi(o(a))}\geq\frac{\omega(o(f(a)))}{\varphi(o(f(a))) },\forall\,a\in G\] by Lemma 2.3 and therefore we have \[|E(C(G)^{*})|=\sum_{a\in G}\frac{\omega(o(a))}{\varphi(o(a))}\geq\sum_{a\in G }\frac{\omega(o(f(a)))}{\varphi(o(f(a)))}=|E(C(\mathbb{Z}_{n})^{*})|.\] Finally, assume that \(|E(C(G)^{*})|=|E(C(\mathbb{Z}_{n})^{*})|\), but \(G\) is not cyclic. Then \[\frac{\omega(o(a))}{\varphi(o(a))}=\frac{\omega(o(f(a)))}{\varphi(o(f(a)))} \,,\forall\,a\in G.\] Let \(a_{1},...,a_{\varphi(n)}\in G\) with \(o(f(a_{i}))=n\) for all \(i\). By Lemma 2.3, it follows that there is a prime \(p\) and a positive integer \(\alpha\) such that \[n=3p^{\alpha}\text{ and }o(a_{i})=p^{\alpha},\forall\,i=1,...,\varphi(n).\] Then, from Sylow's theorems, we infer that \(G\) has a unique (normal) Sylow \(p\)-subgroup. On the other hand, since \(G\) contains at least \(\varphi(n)=2\varphi(p^{\alpha})\) elements of order \(p^{\alpha}\), it will contain at least two cyclic subgroups of order \(p^{\alpha}\), a contradiction. The proof of Theorem 1.1 is now complete. **Acknowledgments.** The author is grateful to the reviewer for remarks which improve the previous version of the paper.
2309.01848
A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey
The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and can interact and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system operating at the speed of thought. BCI technologies can enable various innovative applications for the Metaverse through this neural pathway, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communications. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that can potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called Human Digital Twin, in which digital twins can create an intelligent and interactable avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.
Howe Yuan Zhu, Nguyen Quang Hieu, Dinh Thai Hoang, Diep N. Nguyen, Chin-Teng Lin
2023-09-04T22:43:25Z
http://arxiv.org/abs/2309.01848v1
# A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey ###### Abstract The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and can interact and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system operating at the speed of thought. BCI technologies can enable various innovative applications for the Metaverse through this neural pathway, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communications. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that can potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called Human Digital Twin, in which digital twins can create an intelligent and interactable avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse. Finally, we highlight the challenges, open issues, and future research directions for BCI-enabled Metaverse systems. Metaverse, brain-computer interface, human digital twin, non-invasive BCI, computer vision, AI, IoT, sensors, VR, machine learning. ## I Introduction and Motivations ### _The Development of a Human-Centric Metaverse_ The term "Metaverse" was first coined by Neal Stephenson in his science fiction novel "Snow Crash" in 1992. In this book, Stephenson described the Metaverse as a parallel existence of the physical and virtual worlds where users can interact with each other through their avatars. Although the idea of the Metaverse emerged 30 years ago, the technology was not yet ready for it until recent years. Recently, a large number of research and innovation efforts have been put into developing virtual reality (VR), extended reality (XR), and computing services to bring the Metaverse closer to reality [1]. Previously, VR and XR technologies have developed the first building blocks for the Metaverse, such as 3D video games, VR games, and later mobile XR games like Pokemon Go. The development of the Metaverse beyond games and social media platforms has begun to realize it as the next generation of the Internet. Recent works and products have explored Metaverse applications, such as healthcare [2], e-commerce [3], entertainment, and education [4]. These ventures have shown great potential for revenue growth in the Metaverse, even though the Metaverse has not yet been fully realized. 
As the Metaverse is still in its early stages, many ongoing parallel multifaceted research areas and challenges, including VR/XR, human-machine interfaces, and computing services, require substantial investment and development. Recent works and surveys have attempted to define architectures for the Metaverse based on these components. One particular area is exploring a human-centric design to enhance users' experience in the Metaverse [5]. A human-centric design is a method of utilizing a user's behavioral, psychological, physiological, and observation information to improve the performance and usability of a system [6]. In principle, a human-centric approach is to design an intuitive system by incorporating the user's potential state and modes of interaction. However, current human interaction methods, such as computer mouse clicks and keyboards, may not be intuitive within the new Metaverse experience. Specifically, in Metaverse, the users can explore their surrounding environments using the control and sensory feedback from their hands, eyes, or thoughts in an immersive virtual environment [5, 7]. Although a large body of work has been proposed recently, the human-machine interfaces are lagging compared to other aspects of the Metaverse. However, human-machine interaction is the primary channel that links the human body, the center of the Metaverse, to machines, i.e., any supporting devices and infrastructures. Conventional sensing techniques such as radio sensing, cameras, and wireless sensors can be utilized to develop a human-machine interface in the Metaverse. For example, face tracking, eye tracking, photogrammetry, computer vision, and motion capture can be used to construct fully immersive avatars in the Metaverse [8, 9, 10]. In Fig. 1, we illustrate a human-centric design for the Metaverse in which we focus on the interaction aspect of the users between the physical world and the virtual world. For this, the virtual world can be enriched by using the cognitive interactions of the users. Such cognitive interactions can be collected through sensing techniques such as eye tracking, voice detection, and computer vision. Using cognitive interactions of the users, or biological signals of the users in general, has shown its potential in designing and developing human-centric VR applications. For example, eye movement and heart rate data have been widely used in several applications, ranging from healthcare to robotics and virtual reality control [11]. Early findings showed that brain signals are encoded with a higher fidelity of sensory information than conventional sensing techniques, such as heart rate measurement and eye tracking. Toward this vision, Brain-Computer Interfaces (BCIs) have been considered in such applications as a neural interface between users and applications. As an attractive research area for exploiting human cognition in enabling technologies, BCI is inevitably becoming a part of the Metaverse. ### _Towards a Human-Centric Metaverse using BCI_ This paper aims to provide a comprehensive survey about using BCI to enable a human-centric design for the Metaverse and discuss its associated communications challenges and future research directions/opportunities. BCI offers the ability to monitor the state of a Metaverse user and facilitate intuitive modes of user interaction within the Metaverse. BCI research began in 1875, Richard Canton, a British physicist, discovered the existence of electrical signals in the brains of animals. 
Only four decades later, Hans Berger, a psychiatrist, invented the first device that allowed humans to measure the brain's electrical activity [12]. After decades of research, BCI technologies exceeded their original scope in clinical trials and started attracting attention from the industry. Since the re-emergence of the Metaverse, researchers and industry have been early adopters of BCI as an enabling technology for the Metaverse [1]. BCI can provide rich information from brain activity for building virtual environments and digital avatars beyond conventional sensing approaches such as eye movement tracking, touch, and audio sensors. We believe that BCI technology enables the following unique opportunities in the Metaverse that would be unachievable with conventional sensing: * **Direct communication to the brain:** BCI devices offer the unique ability to bypass the peripheral motor-sensory nervous/bodily systems to communicate directly with the brain. The brain is the complex motor-sensory control center of the body. The capability to interface with the brain enables the real-time reading of motor actions before execution and the response to sensory information, as it is processed within the brain [13]. This also enables the Metaverse users to send voluntary and directed commands for communication and control, adding additional channels that convey highly relevant information about the users' intent [14]. BCI can enable Metaverse users to relay complex commands, such as locomotion, limb movement, speech, and planned actions, to the Metaverse. * **Multimodal information encoded onto a singular sensor:** The brain serves as the body's central nexus for motor-sensory information. A distinctive aspect of BCI sensors is the ability to capture the brain's complex neurological activity. The data acquired from these BCI sensors offer rich and multimodal motor-sensory information condensed into a singular sensor signal [13]. These signals can encompass various cognitive processes, sensory stimuli, emotions, and motor functions. Within a Metaverse BCI system, this information can be harnessed and translated across various brain activities, including perception, attention, memory, language processing, motor control, and emotional states. * **Higher degrees of encoded information within the signal:** The degree of encoded information is a crucial drawback of conventional peripheral biomonitor sensors, such as cardiovascular, electrodermal activity, and motion tracking [15]. These sensors can reliably detect discrete changes in cognitive, mental and emotional states. However, they are less sensitive to transient or subtle shifts [16]. Fig. 1: An illustration of a human-centric design for the Metaverse. Users' information can be collected using human body sensing techniques, e.g., heart rate sensors, VR headsets, inertial measurement units (IMUs), and motion capture systems. Human characteristics such as body shapes, facial expressions, behaviours, and health can be revealed by analyzing the collected data. Unlike conventional human body sensing approaches, BCI can provide an alternative method to create intelligent avatars from a singular data source, i.e., human digital twins. Additional information from human users, e.g., emotional state and motion sickness, can be transferred to their avatars through BCI.
In contrast, the neurological signals measured on BCI sensors possess a higher degree of encoded information that can be used to accurately measure subtle changes in cognitive, mental, and emotional states [17]. Additionally, BCI signals can enhance our understanding of complex neurological states that cannot be measured through conventional sensing methods [18]. This capability can significantly enrich the understanding of the cognitive, mental, and emotional state of Metataverse users and enhance their personal experiences. * **Intuitive control and natural interaction:** BCI can tap into the user's intentions, thoughts, and cognitive states. Instead of relying on overt physical actions, such as pressing buttons on a controller or moving a mouse, BCI can interpret the user's internal mental states. BCI can be a supportive mechanism to convey user intentions and desires without needing explicit external movements, resulting in a more intuitive and seamless control experience [19, 20]. This BCI application can benefit individuals with limited or impaired motor function [21]. BCI can aid these individuals to regain control and interact with their environment using intact brain activity. In the Metaverse, an intuitive/natural user interface can enable a Metaverse avatar that acts as the virtual prosthesis/extension of a Metaverse user. * **Universal Access:** A valuable aspect of BCI technology is the ability to restore/replace motor functions for individuals with motor impairments, such as paralysis or limb loss [21]. BCI technologies can bypass traditional motor pathways and allow individuals to directly interface with various assistive technologies to enable interaction with their environments using their brain activities. BCI has the potential to significantly enhance the quality of life and enable individuals to perform tasks that would otherwise be challenging or impossible. BCI can also be integrated with other assistive technologies or input modalities to create multimodal interfaces [22]. If actualized, BCI can enable universal access for Metaverse users with disabilities. Within the Metaverse, users can experience all the features of the virtual environment without restrictions on physical or mental capabilities. While there are several advantages to BCI technology, there are also several challenges hindering the integration of BCI for Metaverse users. The first challenge in the Metaverse is the construction of virtual embodiments, which involve multi-sensory data from the Metaverse users. Conventional user embodiment and user interaction schemes require many wearable and external sensing devices, such as handheld controllers for user input, wearable trackers for body kinematics, wearable cameras on head-mounted devices as facial sensors, heart rate trackers for workload and emotion measurement, and pupil cameras for eye movement tracking. Each sensing technique requires specific hardware devices, sensors, and customized software to serve a particular application, limiting the scalability and synchronization of multi-sensory data sources. In this case, BCI can act as a neural interface that integrates multiple sensing modalities, such as limb movement, intentions, emotion, and eye movement, into a single wearable signal source [23, 24, 25]. The second challenge is a lack of individualization for the Metaverse technology. 
The multiple sensing modalities of conventional technologies also raise significant concerns about utilizing multi-sensory data sources to tailor applications to individual needs. As the sensing data come from different sources, it is challenging to fully utilize, synchronize, and distil information from different modalities, such as emotion recognition and eye movement [26]. Unlike traditional approaches, BCI offers an information source that can be individualized and tailored to the experience of individuals. The third challenge is the real-time or near real-time processing, and communications of BCI signals in Metaverse [27]. A critical factor is the scalability of the computation load and power for the large user base within the Metaverse. The current communications and computing capabilities will be insufficient to actualize the full-scale of the Metaverse. The fourth challenge is our limited knowledge of virtual embodiment, as we have not been able to fully actualize a virtual being/avatar in the Metaverse world. To address this challenge, we discuss the potential of a human digital twin (HDT) solution to provide a viable solution and open emerging applications for the Metaverse using BCI. ### _Related Surveys and Our Contributions_ Various surveys have been conducted on the Metaverse, covering architecture, applications, technologies, security, and privacy concerns. In [5], the authors discussed the potential architecture and applications for the Metaverse. In [1], the authors examined the enabling technologies for the Metaverse from a cloud/edge computing perspective. In [28], the authors analyzed the potential applications of machine learning and deep learning for the Metaverse. In [7], privacy and security issues of the Metaverse were discussed in detail. These existing surveys about the Metaverse focus on general issues and technologies, often overlooking the incorporation of human factors within the Metaverse. Although the authors in [28] and [7] discussed the idea of using BCI as a neural interface between users and the Metaverse, none of them examined the BCI techniques involved in decoding human behavior. In [29], the authors considered potential BCI applications for the Metaverse. However, the context of the work was limited to the surface of BCI and the Metaverse. For example, the authors did not address the applications, limitations, and challenges, such as synchronizing and communicating between entities, let alone applying them in the Metaverse. Overall, BCI was not discussed in detail in the aforementioned surveys. To the best of our knowledge, our survey is the first in the literature to comprehensively discuss BCI's potential in the Metaverse. In particular, in this survey, we aim to provide a comprehensive survey about BCI technologies and their potential for future development of the Metaverse. Furthermore, we propose and discuss a new concept of Human Digital Twin (HDT), a new approach to constructing human embodiment within the Metaverse using BCI. In summary, our main contributions are as follows: * We provide the fundamental background and discuss the current challenges of the Metaverse that conventional sensing approaches could not effectively address. * We provide the background of BCI, focusing mainly on non-invasive BCI, which is more suitable for commercial applications. We then describe how BCI technologies can enhance user embodiment within the Metaverse. 
We also review BCI-enabled interaction schemes for the Metaverse users and describe the differences between the BCI sensing technologies and the conventional sensing techniques. * We introduce the new concept of HDT. With HDT, we can develop individualized Metaverse applications and enhance our knowledge of virtual embodiment in the Metaverse. We further discuss potential challenges for integrating HDT in the Metaverse, such as real-time communications, synchronization, and interactions between HDTs. * We highlight the challenges, open issues, and future research directions of BCI technologies for the Metaverse. The ethics and security of using BCI for the Metaverse are also discussed. Alternatively, the potential applications, such as brain communications, are also discussed. ### _Paper Organization_ As illustrated in Fig. 2, our paper is organized as follows. Section II provides the background of the Metaverse and BCI technologies. Section III provides more details about BCI technologies, including emotional and cognitive state recognition for the Metaverse. This section further discusses the potential approaches to prevent error-related behaviors, such as VR motion sickness, stress, and fatigue, in the Metaverse. Section IV describes the potential interactions between the users and the Metaverse through BCI. This section also discusses VR-BCI user interface design paradigms for the Metaverse applications. In Section V, we propose a new concept of the HDT in which the digital twin is utilized to create twin entities for human users from their brain signals. We also discuss challenges in synchronizing and real-time communication between virtual and physical entities. In Section VI, we highlight the current challenges, open issues, and future research directions toward BCI-enabled Metaverse systems. This section also discusses ethics, security, privacy issues, and emerging applications, including hardware, software, and algorithm designs. Finally, Section VII concludes the paper. In addition, we provide a list of abbreviations and descriptions used in this paper in Table I. ## II Background ### _The Metaverse_ The term "Metaverse" is a combination of the prefix "Meta" (meaning beyond) and the suffix "verse" (meaning universe). As its name suggests, the Metaverse is a universe of the next-generation Internet that allows the parallel existence of the physical world and shares 3D virtual worlds. The earliest concepts of the Metaverse can be found in classical MMO (Massively multiplayer online) games [30]. In these games, users are given a uniquely singular persistent virtual world as a medium for social and worldly interactions. The Metaverse shifts this paradigm by incorporating modern technology to generate a seamlessly immersive experience transitioning between the physical and virtual worlds. The envisioned Metaverse is where users can naturally (touch the environment with their hand or walk with their feet) move and interact with the virtual environment as though they are in the physical world [5]. Unlike conventional interactions in the current Internet, where we use devices such as a mouse, cursor, and keyboard, the Metaverse enables users to immerse applications and services through their digital avatars with supporting VR and XR technologies. As a result, users in the Metaverse can potentially eliminate Spatio-temporal barriers in how they work, live, and entertain. 
To this end, the Metaverse can be developed from the convergence of multiple supporting engines such as VR/XR, digital twin (DT), tactile Internet, artificial intelligence (AI), and blockchain-based economy [1]. To create an immersive Metaverse experience, various technologies must be integrated and coordinated. In the following, we highlight the essential technologies for human-centric Metaverse design, but the relevant technologies are not limited to this discussion, as the Metaverse is continually evolving. Considering the left side of Fig. 1 as an example, multiple technologies are used to create a virtual avatar, including wireless sensors, sensor fusion, interpretation with machine learning, 3D projection from data, and 3D view synthesis. Machine learning/deep learning algorithms can be utilized to fuse the data collected from the sensors effectively [8, 9]. Fig. 2: The organization of the survey. These learning algorithms can further capture user data patterns, e.g., body shape, behaviours, poses, and expressions, and then prepare the data to be projected into the virtual environment. Finally, the virtual avatar is placed into the virtual scene (with VR) or mixed environment (with XR). The user's experience in the VR/XR environment can be enhanced by optimizing the view, angle, resolution, and interaction within the scene [5]. Extra haptic feedback can also be utilized to generate realistic feelings about the virtual environment [25, 31]. Other necessary technologies include intelligent sensing, data compression, edge computing, and wireless multiple access to reduce latency and improve reliability [1]. Beyond gaming, governments and tech companies seek a presence in the Metaverse. Decentraland lets users create, explore, and interact with 3D virtual worlds owned and controlled by themselves [32]. Users buy virtual land as NFTs via the MANA cryptocurrency, which uses the Ethereum blockchain. NVIDIA's Omniverse introduces a computing platform for creating Metaverse applications such as 3D scene generation, art creation, and robotic control with supportive generative AI and physics-based simulation engines [33]. Microsoft's Mesh brings a new toolset for users to create custom workplaces and tools harmonized with other applications in Microsoft's ecosystem, such as Teams [34]. Other Metaverse apps focus on healthcare and education, such as Xirang [35] and Telemedicine [36]. Besides using conventional sensing and data collection techniques, integrating human physiological and psychological information is crucial for developing human-centric Metaverse applications. Motivated by the fact that brain signals are encoded with rich information about human activities, in the following, we discuss the details of BCI technology and how BCI can offer multimodal, low-latency, and high-fidelity metrics for Metaverse user behavior [37]. ### _Brain-Computer Interface: Sensor Technology_ The human brain is the most complex and adaptive organ within the human body [38]. It is the control center of human intelligence, sensory perception, and motor functions. Herculano-Houzel [39] estimated that the central brain might contain around 86 billion neurons. Each neuron is a node along trillions of neural pathways within the brain. Each neural pathway passes neuroelectric signals (neurotransmission) around the brain, forming a system that enables the brain to function by communicating with the nervous system.
A broad definition of Brain-Computer Interface (BCI) is any device that measures, analyzes, and interprets the brain signal (neural pathways) and then relays information to a machine to respond. The story of BCI begins with a discovery made by the British physicist Richard Canton [12]. In 1875, Canton discovered the existence of electrical signals in the brain of animals. This discovery paved the way for the pursuit of electrically mapping brain signals and a better understanding of human neurophysiology. Four decades later, a psychiatrist named Hans Berger invented the first measurement device, allowing humans to measure the brain's electrical activity for the first time [12]. With this tool, Berger discovered the first neural oscillation, the 8-12 Hz Berger (Alpha) wave. In modern times, researchers furthered these discoveries by developing various ways to measure the brain's neural signals, learning new neurophysiological behaviors, and building autonomous systems. BCI refers to technologies that can create communication pathways from the brain's activity to external devices, such as a computer or a machine [37]. BCI devices/sensors are delivered in one of three forms: invasive, semi-invasive, and non-invasive, as illustrated in Fig. 3. **Invasive BCI** devices are characterised by electrodes implanted (through surgery) beneath the skull and within the cortex (direct signal acquisition from the brain). Invasive BCI systems are not at a mature enough stage of development to be safely used as consumer devices. Fig. 3: Three types of BCI devices/sensors: invasive BCI, semi-invasive BCI, and non-invasive BCI. Invasive BCIs require longer dedicated research with animal populations before engaging in low-sample-sized human research studies [40]. Invasive BCI is often employed in extreme cases (e.g. severe motor disability) where the benefit to the patient's quality of life outweighs the risks [41]. Companies, such as Neuralink, are pursuing the goal of implantable BCI devices [42]. **Semi-Invasive BCI** devices are sensors that are implanted (through surgery) between the cortex (on the surface) and the skull [22]. Semi-invasive BCI systems commonly use an array of Electrocorticography (ECoG) electrodes to map a specific brain region. The surgical procedures and implant durability for semi-invasive BCI typically carry lower short- to long-term risks when compared to invasive BCIs. Semi-invasive electrodes offer a higher-quality signal; however, similar to invasive BCI, the surgical risks outweigh the benefits of the device. Due to the risks of invasive and semi-invasive devices, non-invasive systems are more popular as a low-risk and more viable product for researchers and consumers. With further research, reduction in surgical risks, and improved robustness of implants, invasive and semi-invasive BCI devices will have great potential to dramatically enhance the user experience within the Metaverse. **Non-Invasive BCI** devices encompass multiple technologies that can detect neurophysiological behaviours without any implanted electrodes; this includes technologies such as Magnetoencephalography (MEG), Functional Near-Infrared Spectroscopy (fNIRS), Functional Magnetic Resonance Imaging (fMRI), and Electroencephalography (EEG). Certain non-invasive BCI systems, such as fMRI and MEG, lack portability due to equipment size or complexity. These types of systems are not feasible as wearable devices for Metaverse users.
EEG and fNIRS-based BCI systems are the primary feasible solution for a portable, wearable, and accurate device that can be used in a general consumer capacity [43]. Typically, EEG devices use wearable scalp electrodes (see Fig. 4) with a highly conductive material to measure the voltage (\(\mu\)V) fluctuations on the wearer's scalp [44]. On the other hand, fNIRS electrodes utilize near-infrared spectroscopy to discern cortex neural activity [45]. Certain systems offer paired EEG-fNIR electrodes within one system [46]. These types of electrodes will measure neural signals, which are then amplified and digitized for analysis. The resulting signal may contain multiple components (depending on electrode placement), such as eye blinking, muscle movement, movement artifacts (from displaced channels), and other electrical activity. Most importantly, the signal will contain information on the brain's electrical activity [47]. Through extensive repeated measure research and machine learning, common neurophysiological behaviors can be classified and used for various research applications, e.g., military, rehabilitation gaming, medicine, mental health, robotics and automation, and public services [48]. Therefore, wearable non-invasive EEG/fNIRs BCI devices are most suitable for researchers and consumers exploring the Metaverse. EEG devices typically consist of two types, wet and dry (see Fig. 4) of electrode systems [49]. Wet electrodes refer to any electrode system that requires conductive gel or saline fluid to improve the contact connectivity between the electrode and the wearer's scalp. In contrast, dry electrodes leverage optimized electrode shapes (often hair comb-like) to contact the scalp without needing gel/fluids. When comparing the two types, wet electrodes offer a better signal quality (less noise from impedance and external sources) but require preparation (applying gel/fluid) and a limited operation time due to the drying of the gel/fluid. Dry electrodes are generally larger than wet electrodes, limiting the total possible electrodes placed on the scalp, the overall signal quality (large electrodes are more susceptible to movement), and the user comfort when wearing the device [50]. Both types of EEG systems are limited by movement noise, device usability (user comfort and real-world practicality), and neurodiversity [47]. Unlike EEG, fNIRs do not require conductive gel as the sensors primarily use light [51]. The key drawback to consumer fNIRS devices is the requirement of paired sensors (a source and detector) to measure neural signals. A 64-channel system would require 128 fNIRS sensors compared to the 64 EEG electrodes. The current size of the fNIRS sensors makes the system unideal for consumer use, as dry EEG electrodes could achieve a similar result with fewer sensors. Therefore, given the real-world feasibility factors, EEG dry electrode BCI systems are currently ideal for Metaverse users. It is likely that with further improvement to signal processing techniques, machine learning algorithms, and hardware design, we will find that dry electrode EEG with fNIRS BCI devices will become the next popular consumer device. Fig. 4: This figure presents the two types of non-invasive EEG-based BCI devices available on the market: (a) the Brain Products’ antiCAP active 64-channel wet EEG electrodes system; (b) the Cognitions Quick-20 dry EEG electrodes system; and (c) the layout for both the 10/20 EEG channel layouts used in both systems. 
Pictures were taken from the Computational Intelligence and Brain-Computer Interface (CIBCI) lab at the University of Technology Sydney (UTS), Australia. Fig. 5: An overview of the pipeline of BCI systems. The figure illustrates the acquisition of a signal through various types of BCI systems. Once a signal has been acquired, information can be extracted for observational purposes (monitoring or measuring a state) or classified into a specific behaviour (detecting intentions or tasks). Signal processing is another important aspect of BCI devices. Fig. 5 outlines the typical workflow of BCI-related research [52]. There are two principal methodologies. The first is to further our knowledge of the neurophysiology of the brain and enhance BCI systems through signal feature recognition (built through observational information). The other is to develop real-time systems through classifiers (machine learning or AI). BCI research has expanded to many interdisciplinary research fields, studying behaviors such as cognitive states, emotional responses, pathology (neurology), mental health, pedagogy, ergonomics, and many other fields [53]. The current challenge for BCI systems is to develop a real-time "plug and play" system for consumer use [54]. ### _Deployment of BCI within the Metaverse_ In Fig. 6, we illustrate a BCI-enabled Metaverse in which BCI plays a vital role as an interface to create adaptive virtual environments and intelligent avatars, supported by other technologies such as a digital twin and real-time communications. As illustrated in Fig. 6, a BCI-enabled Metaverse is a human-centered approach in which BCI and VR technologies can coexist and cooperate in a closed loop. Within the conventional BCI research field, there are many examples of VR-BCI integration [55, 56, 57, 15] that use a traditional wet electrode EEG cap (see Fig. 3(a) and Fig. 7) under an XR/VR device. This method of VR-BCI integration is viable in a research context because of the importance of signal quality and spatial resolution from the BCI operating in a controlled environment. This VR-BCI set-up would not be feasible for a real-world consumer because the wet sensor would only provide limited usage as the gel would rapidly dry out. A commercially available option to enable VR-BCI is to use dry electrodes integrated into the VR/XR device, such as the Galea VR HMD [58]. A dry electrode system will enable a portable system with lower signal quality and spatial resolution. The basic operation of the BCI-enabled Metaverse may include the following steps. The VR-BCI interface extracts the users' brain signals and processes the signals locally or remotely at a computing unit, e.g., a remote server. Brain signal extraction, processing, and classification are enabling processes for creating human-like digital avatars with unique characteristics of the users, e.g., emotional state, visual stimulus, and behaviors, from the human brain signals. The communication channels, such as wired and wireless channels, further enhance the scalability of the Metaverse system. The Internet-connected computing unit will update and synchronize the information into the Metaverse. By monitoring the brain activities of the users, the Metaverse platform can analyze or predict the users' behaviors, attention, or emotional states and adjust VR settings to be transmitted back to the users. As such, the service provider can actively provide customized and personalized Metaverse applications for the users.
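To make these steps concrete, the sketch below shows one way the acquire, extract, classify, and adapt cycle could be wired together. It is a minimal illustration only: the sampling rate, window length, threshold, rendering settings, and the synthetic "EEG" are assumptions made for this example rather than parameters of any cited system, and a deployed pipeline would replace the placeholder classifier with a model trained on labelled brain signals.

```python
import numpy as np

# Hypothetical settings -- illustrative only.
FS = 250              # sampling rate of the (simulated) EEG stream, in Hz
WINDOW_SEC = 1.0      # length of each processing window

def read_window(rng):
    """Stand-in for the VR-BCI headset driver: returns one window of EEG
    samples (channels x samples). Here it is just synthetic noise."""
    return rng.standard_normal((8, int(FS * WINDOW_SEC)))

def extract_features(window):
    """Very small feature vector: per-channel signal power (variance)."""
    return window.var(axis=1)

def classify_state(features, threshold=1.2):
    """Placeholder 'cognitive state' classifier. A deployed system would use
    a model trained on labelled EEG, e.g. for workload or motion sickness."""
    return "high_load" if features.mean() > threshold else "normal"

def adjust_environment(state):
    """Map the predicted state to a rendering adjustment that would be sent
    back to the Metaverse client (dimming lights, narrowing FOV, etc.)."""
    return {"high_load": {"fov_deg": 70, "brightness": 0.6},
            "normal":    {"fov_deg": 100, "brightness": 1.0}}[state]

rng = np.random.default_rng(0)
for step in range(3):                      # three iterations of the closed loop
    eeg = read_window(rng)                 # 1) acquire a window of brain signals
    feats = extract_features(eeg)          # 2) process / extract features
    state = classify_state(feats)          # 3) infer the user's state
    settings = adjust_environment(state)   # 4) adapt the VR settings
    print(step, state, settings)
```

In practice, the classification step would also need to run within the latency budget of the rendering loop, which is one reason edge computing is often paired with BCI processing.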
Note that other users in the Metaverse also contribute to the dynamics of the above process. In addition, other supporting technologies, such as digital twins, integrated VR-BCI devices, and real-time communications, can facilitate and improve the completeness of the system. BCI can be directly used to measure the emotional and cognitive states of the Metaverse user and facilitate social, active, and passive interaction within the Metaverse. The virtual environment can then be adapted based on the user's emotional and cognitive state. Previous works explored this concept [59] using passive BCI techniques to adaptively change the game environment's lighting and Field of View (FOV), thus enhancing the user's immersion. Passive BCI refers to using BCI to detect and measure changes in a user's unintentional cognitive and emotional state [60]. In this section, we explore two potential methods of using BCI to enhance the user's immersion in the Metaverse. Specifically, in Section III-A, we explore the user's emotional state (including happiness, sadness, stress, calmness, anxiety, and uneasiness) and cognitive state (mental workload, fatigue, and attention) as shown in Fig. 8. After that, in Section III-B, we present the potential of using anomalous and error-related neurological behaviors to enhance immersion by correcting anomalies within the Metaverse. ### _BCI Emotional and Cognitive State Recognition Applied to the Metaverse_ A person's emotional state is commonly quantified by a scale of valence (pleasant to unpleasant) and arousal (alertness to drowsy). This mode of mapping a person's emotional spectrum is known as the Russell Circumplex model of affects [61]. Fig. 9(a) presents the Russell Circumplex model and shows the spectrum of emotional states quantified by the arousal and valence level. This understanding was furthered by the Yerkes-Dodson law [62] (see Fig. 9(b)), which found a direct correlation between human emotional arousal and cognitive performance. Therefore, the ability to quantify and measure human emotional states can play an essential role in understanding an individual's cognitive state. The BCI classification of emotional states involves extracting specific EEG features for arousal and valence from the EEG signal for a machine learning classifier to detect [63]. Arousal is detected through the changes in the brain's frontal region. Brainwaves are common oscillations in the brain's electrical activity that correlate to various neural activities. The brainwaves are broken down into the Gamma (\(>\)35 Hz), Beta (12-35 Hz), Alpha (8-12 Hz), Theta (4-8 Hz), and Delta (0.5-4 Hz) waves. Studies [64, 65, 66] found a strong correlation between an individual's arousal level and the frontal Beta, Alpha, and Theta power. The ratio between Beta and Alpha activity is commonly used to measure arousal level. Typically, the Beta brainwave would indicate an active mental state. Fig. 8: An illustration of the emotional and cognitive state measurements using BCI. These types of information can be utilised in the Metaverse to provide status indicators and improve the immersion of Metaverse users. Fig. 7: An illustration of a VR user using a VR headset (HTC Vive Pro) with a BCI sensor cap (64 channel EEG, Liveamp system) worn under the headset. The user is experiencing a mixed reality environment where they are physically (through the platform) and virtually (through VR) elevated. The picture was taken from the Computational Intelligence and Brain-Computer Interface (CIBCI) lab at the University of Technology Sydney (UTS), Australia.
Conversely, the Alpha brainwave suggests a relaxed and restful state. Therefore, heightened arousal can be measured by a frontal region increase in Beta power and a decrease in Alpha power. An individual's valence level is measured through the brain's hemispherical symmetry/asymmetry. Hemispherical symmetry refers to an equal/similar activation (neuron firing) state between the brain's left and right cortex. Hemispherical asymmetry occurs when one cortex has significantly more activity than the other. Studies [67, 68, 69] showed that valence correlates to the degree of hemispherical symmetry, with a strong hemispherical asymmetry exhibited when in a negative valence (unpleasant) state. Works by [70] and [71] used BCI and machine learning to classify the dimensions of an individual's arousal and valence levels, which indicate their emotional state. A person's cognitive or mental state refers to their mental well-being and the ability to think or process information. A Metaverse user's mental and cognitive state can significantly impact their experience within the Metaverse. A high workload (complex or challenging to navigate environment) or sensory-loaded (high noise or colour intensive) environment can trigger negative mental states, reducing the user's immersion in the Metaverse. Factors such as the current emotional state, experience of mental workload, fatigue level, and attention level can directly affect a person's cognitive state. Like emotional states, cognitive state features are extracted by evaluating the brainwaves of specific brain regions. Theta activity in the frontal cortex is often used to gauge mental workload. Studies by [72] and [73] asserted that as the difficulty of a task (higher workload) increases, the theta activity in the frontal cortex will increase. Interestingly, the inclusion of multimodal data sources such as cardiovascular (changes in heart rate) and pupillary activity (changes in pupil dilation) can improve the accuracy of the workload classification [17]. Mental fatigue results in an increase in theta and alpha activity and a decrease in beta activity in the frontal cortex [74]. Studies on attention discovered that a distracted (unfocused) individual would exhibit a decrease in beta power in the frontal region, increased theta and delta power in the central region, and a decrease in alpha power in the parietal region [75, 76]. Fig. 10 depicts using passive BCI to create an adaptive Metaverse display to enhance the user's immersion. Using passive BCI to gauge a user's emotional and cognitive state is a well-researched area with multiple reliable classifiers to enable the technology. When introduced to the Metaverse, passive BCI can dynamically adjust the user's surrounding environment and rendered display to improve the user's immersion. An example of this was the adaptive virtual reality environment by [77]. By measuring the VR user's emotions, the system created a feedback loop that used the virtual environment to modulate the user's emotional state. Similarly, the works by [78] and [59] explored limiting and adjusting an environment's complexity to improve the user's cognitive state. These works would use adjustable lighting and fog to moderate the amount of the visible virtual environment to reduce the user's workload and visual fatigue. These practices could be applied to the Metaverse through an integrated BCI VR device to create a feedback system that adaptively adjusts the rendered environment.
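As a rough illustration of the spectral features discussed above, the snippet below computes a beta/alpha power ratio (an arousal proxy) for a frontal channel and a frontal alpha-asymmetry score (a common valence proxy) from a pair of left/right frontal channels. The sampling rate, channel pairing (F3/F4), band edges, and the synthetic signals are assumptions made for this sketch; published systems differ in exact features, normalization, and sign conventions.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 35)}  # band edges as listed above

def band_power(signal, fs, lo, hi):
    """Average power spectral density of a single channel in [lo, hi) Hz, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def arousal_index(frontal):
    """Beta/alpha power ratio over a frontal channel: a higher ratio suggests higher arousal."""
    return band_power(frontal, FS, *BANDS["beta"]) / band_power(frontal, FS, *BANDS["alpha"])

def valence_index(left_frontal, right_frontal):
    """Frontal alpha asymmetry (log right minus log left alpha power), a common
    hemispheric-asymmetry proxy for valence; sign conventions vary across studies."""
    a_left = band_power(left_frontal, FS, *BANDS["alpha"])
    a_right = band_power(right_frontal, FS, *BANDS["alpha"])
    return np.log(a_right) - np.log(a_left)

# Synthetic 4-second segments standing in for F3/F4 electrodes.
rng = np.random.default_rng(1)
f3, f4 = rng.standard_normal(4 * FS), rng.standard_normal(4 * FS)
print("arousal ~", round(arousal_index(f4), 3), "| valence ~", round(valence_index(f3, f4), 3))
```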
This solution is very close to realization with recent innovations such as the workload measurement integration in the HP G2 Reverb VR HMD [79] and the dry electrode BCI integrated Galea VR HMD [58]. ### _Anomalous and Error-related Behaviours to Improve User Immersion_ Another unique functionality of passive BCI is the ability to detect potential adverse events before consciously recognising the event. Adverse and anomalous events can hinder user immersion by creating a disassociation between the expected real world and the Metaverse. Examples of this could be events such as the onset of VR sickness, loss of balance/falling, or environmental errors. The real-time detection of these events allows the implementation of safety and preventative measures to improve the user's longevity within the Metaverse. Through extensive studies, each adverse event's unique EEG signal features can be reliably extracted and classified. VR sickness refers to the sensation and experience of symptoms such as headaches, nausea, vomiting, drowsiness, and disorientation when using VR [80]. This is relevant for Metaverse applications, where users often spend prolonged periods in virtual environments. When VR sickness occurs, users often disengage from the virtual environment to alleviate the symptoms. In extreme cases, it may result in fainting, falling (due to loss of balance), or severe nausea. These types of adverse events will negatively affect the user's sense of immersion and reduce the time the Metaverse user is willing to stay within the Metaverse. Certain VR studies [81, 82, 83] have successfully used EEG signals to classify and detect when a person is experiencing VR/motion sickness. They found a significant correlation between VR/motion sickness and theta and delta band activity within the occipital lobe (attributed to the sensory perception of motion). They also observed a decrease in alpha activity in the parietal and motor regions. The loss of balance or falling is another significant risk for VR Metaverse users [56]. Studies by [84] and [85] show that the beta and theta band activity in the parietal/motor cortex is closely related to losing balance and falling. These VR sickness and falling indicators can be trained through an AI classifier to detect anomalous events in real time during a Metaverse experience. Fig. 9: This figure presents (a) the Russell Circumplex model of Affects, which is used to measure emotional states on a spectrum between arousal and valence; and (b) the Yerkes-Dodson Law, which dictates the relationship between emotional arousal/stress and cognitive performance. The ability to effectively detect VR sickness and falling can allow the implementation of preventative techniques such as reducing the motion of the virtual environment and turning on VR see-through mode [56]. In the continuity of the Metaverse, errors and visual bugs are inevitable factors that will appear within the virtual environment. Erroneous artifacts or system glitches can hinder a user's immersion. Therefore, it is essential to have a system in place to detect and correct these errors. One proposed method by which BCI could solve this issue is through error-related negativity (ERN) detection (see Fig. 11). ERN is a signal response that occurs when a person observes incongruent or erroneous stimuli within a task or environment [86]. ERN is characterized by a negative potential around 50-250 ms after the error [87].
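The 50-250 ms characterization above suggests a simple epoch-and-average check, sketched below on synthetic data. The channel choice, amplitude threshold, and injected deflection are illustrative assumptions; the cited studies use trained classifiers rather than a fixed threshold.

```python
import numpy as np

FS = 250                       # assumed sampling rate (Hz)
ERN_WINDOW = (0.050, 0.250)    # 50-250 ms after the suspected error, as described above

def epoch(signal, event_samples, pre=0.1, post=0.5, fs=FS):
    """Cut fixed-length epochs (pre..post seconds) around each event sample index."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    return np.stack([signal[s - n_pre: s + n_post] for s in event_samples])

def ern_score(epochs, fs=FS, pre=0.1):
    """Mean amplitude of the averaged epoch inside the 50-250 ms post-event window."""
    avg = epochs.mean(axis=0)                      # average over events
    t0 = int((pre + ERN_WINDOW[0]) * fs)
    t1 = int((pre + ERN_WINDOW[1]) * fs)
    return avg[t0:t1].mean()

def looks_like_error(epochs, threshold=-2.0):
    """Hypothetical decision rule: flag an error if the post-event window is clearly negative."""
    return ern_score(epochs) < threshold

# Synthetic fronto-central channel (arbitrary units) with a negative deflection
# injected after each 'error' event to stand in for an ERN.
rng = np.random.default_rng(2)
signal = rng.standard_normal(60 * FS)
events = np.array([10, 25, 40]) * FS
for s in events:
    signal[s + int(0.05 * FS): s + int(0.25 * FS)] -= 5.0
epochs = epoch(signal, events)
print("ERN score:", round(ern_score(epochs), 2), "| error flagged:", looks_like_error(epochs))
```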
Previous studies [57, 88] have successfully classified the ERN response as an error correction method within a VR environment. Due to the simplicity of ERN, it can be reliably implemented to detect potential errors that Metaverse users may observe. This solution would enable a more efficient (compared to manual user reporting) method of detecting and correcting potential errors within the Metaverse environment. These BCI solutions can improve the safety and longevity of a Metaverse user. By incorporating the outlined BCI techniques, the Metaverse system can become more reactive to adverse events and use an appropriate strategy to prevent or correct the problem. There are two critical challenges to the implementation of this system. The first challenge is ensuring the ability to accurately detect the onset of the adverse event before the occurrence or conscious recognition of the event. The system would not be valid if it cannot prevent the adverse event from occurring. The second challenge is to explore practical strategies for preventative and corrective methods. The current methods of prevention or correction involve removing the user from the virtual environment and breaking their immersion. Better methods are required that do not require the removal of the user from the Metaverse. ## IV BCI-Enhanced Metaverse User Interactions An important aspect of the Metaverse is to deliver a platform to facilitate meaningful user interaction. A Metaverse user must be able to interact with the environment (pick up objects and locomotion) and socially with other Metaverse users. Active BCI offers the potential to generate more intuitive modes of user interaction within the Metaverse. In contrast to passive BCI, active BCI refers to the use of BCI to perform specific tasks through intentional, conscious thought. This section will explore the ways BCI can be used for users to interact with their environment (Section IV-A) and the potential of using BCI for social interaction through brain-to-text (Section IV-B). ### _Decoding Thoughts and Intentions Using BCI to Improve Metaverse Interactions_ Understanding human thoughts and intentions is a commonly sought-after goal in the BCI research field [89]. Traditional systems rely on tactile manipulators, such as controllers, buttons, joysticks, levers, and keyboards, to allow users to convey their intentions to a system [90]. Other works explored voice recognition, gesture control, and AI to develop more intuitive methods of understanding user intention [91]. BCI offers the potential for direct translation between human thought and intention, which could result in an intuitive and responsive system. The underlying challenge in understanding human intention is the complex multilevel nature of the human mind [92]. Fig. 11: This picture illustrates the use of BCI to detect anomalous states and error-related behaviors. Using the BCI signal (e.g. ERN), the system can detect and correct adverse events, such as when the user is about to fall due to VR sickness. Fig. 10: This figure shows using passive BCI to detect the Metaverse user's emotional and cognitive state. Then, the measured information is used to modulate different factors of the Metaverse display to enhance the user's immersion. The picture was taken from the Computational Intelligence and Brain-Computer Interface (CIBCI) lab at the University of Technology Sydney (UTS), Australia.
Human intention ranges from low-level cognitive decisions based on sensory perception (reacting to events, bodily movements, or simple choices) to complex high-level decisions requiring observation, planning, mental simulation, and multistep execution. Based on this challenge, researchers have designed reliable active BCI paradigms to capture specific behaviors that are exhibited across the human population. When designing an active BCI system, one typically selects a reliable BCI paradigm to translate intentional thought into a classifiable EEG signal. We will highlight three of the most common paradigms used for active BCI control (as shown in Fig. 12); these are P300, Motor Imagery (MI), and Steady-State Visual Evoked Potential (SSVEP). **P300:** The P300 wave is the oldest and potentially the most well-known EEG response out of the three paradigms. As described by [93], the P300 wave is a positive peak human event-related potential that occurs around 300 ms after a 'target' stimulus is perceived. This P300 peak can be observed across the brain's frontal, central, and parietal regions. The stimuli used for P300 paradigms can be both visual and auditory. Typically, P300 paradigms feature an oddball design where the user has a target and several non-target stimuli. A signal classifier can discern whether the user observes a target stimulus by detecting the positive peak amplitude. The P300 speller [94] is a successful example of using a P300 wave to create a user interface. In this paradigm, the user will decide their target or intended letter to type; then, multiple visual letter stimuli will sequentially appear to the user. The classifier will detect the P300 response within the EEG signal and input the letter that appeared 300 ms before the peak as the user's choice. Generally, P300 paradigms provide a reliable signal response for detection; the primary drawback is the speed of stimuli presentation (slow rate of input) and the dependency on user focus on the target stimuli (difficult to use in complex environments). **MI:** An MI paradigm utilizes the thought of left and right motor action to create a simple control paradigm [95]. MI involves the extensive training of a classifier that detects left and right motor actions (participants will clench their left or right hand) within the motor cortex. The classifier model relies on the hemispherical activations between the left and right-hand actions. Once sufficient training is complete, the user can operate the classifier using only the thought of left and right motor actions. Previous studies [95, 96, 97] found that the mere thought or representation of a motor action can trigger a response in neurological pathways for actual motor action. This results in a reliable and detectable signal for left and right control without the need for 'real' motor action (not even tensing). MI's primary benefit is the lack of need for visual or auditory stimuli. However, MI requires extensive training individualized for each user and is more susceptible to noise if the user is mobile. **SSVEP**: The SSVEP paradigm is a popular approach that presents multiple flashing visual stimuli, each flickering at a specific frequency. SSVEP BCI studies [98, 99] show that when a person observes flickering/flashing visual stimuli, synchronized frequency activity can be observed in the occipital region of the brain.
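As the next paragraph elaborates, this frequency tagging can be turned into a control scheme by checking which candidate flicker rate dominates the occipital spectrum. The sketch below illustrates the idea with simple peak-picking on a Welch power spectrum; the flicker frequencies, sampling rate, and synthetic signal are assumptions for this example, and practical SSVEP decoders typically use stronger methods such as canonical correlation analysis.

```python
import numpy as np
from scipy.signal import welch

FS = 250                              # assumed sampling rate (Hz)
FLICKER_HZ = [8.0, 10.0, 12.0, 15.0]  # hypothetical flicker rates of four on-screen options

def ssvep_choice(occipital, fs=FS, candidates=FLICKER_HZ):
    """Pick the candidate flicker frequency with the strongest occipital response.
    Peak-picking on the PSD keeps the idea visible; real decoders are more robust."""
    freqs, psd = welch(occipital, fs=fs, nperseg=fs * 4)
    scores = []
    for f0 in candidates:
        band = (freqs >= f0 - 0.5) & (freqs <= f0 + 0.5)   # narrow band around each flicker rate
        scores.append(psd[band].mean())
    return candidates[int(np.argmax(scores))]

# Synthetic occipital channel: noise plus a 12 Hz component (the 'attended' stimulus).
rng = np.random.default_rng(4)
t = np.arange(4 * FS) / FS
occipital = rng.standard_normal(t.size) + 1.5 * np.sin(2 * np.pi * 12.0 * t)
print("attended option flickers at", ssvep_choice(occipital), "Hz")
```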
In essence, if multiple flickering input control options are presented to the user, the frequency of the occipital region's activity can be used to determine which control option the user is focused on or targeting. SSVEP offers the advantage of not requiring extensive training while being a reliable paradigm for detection. The main drawback of the SSVEP paradigm is the argument that the stimuli require too much attention, as flickering visual stimuli will likely distract or block out the surrounding environment. Basic interactions and low-level intentions can be translated through active BCI paradigms such as P300, MI, and SSVEP. High-level decisions are far more challenging to decode. Many works explored various areas of the hierarchy of executive decision-making [100]. One example of a higher-level decision-making process is active spatial navigation [18]. The work investigated the use of BCI during active spatial navigation to understand the neurophysiological behaviors of a user processing spatial information while navigating complex environments. The authors observed that theta activity in the retrosplenial complex (RSC, a cortical region of the brain) correlates with a user engaging in active spatial navigation (identifying spatial locations around the user). Fig. 12: This figure presents the common BCI paradigms used for designing user interfaces and the potential of using brain-to-text for social interactions. These biomarkers can provide an indication of whether a user is lost within a complex Metaverse environment. Other works explored the use of AI to build training models which observe specific behaviours/interactions of the user and attempt to decode the neurological behaviours (such as robotic systems interaction [101, 102, 103]). In conjunction with other multimodal information and more advanced AI modelling, BCIs can be used to infer and decode many complex thoughts or intentions. These mechanisms can be applied to the Metaverse to understand the user better and facilitate a more intuitive user experience. ### _Social Interaction Using Imagined Speech_ The ability to input semantic information is an essential form of communication in human society. In the digital era, the keyboard has become a ubiquitous tool that enables a human to translate written language into a digital form that can be communicated between humans and computers. A key challenge in the Metaverse is to provide effective communication methods for social interaction. The straightforward approaches are to use a virtual keyboard [31] or use a microphone for direct speech [104]. In the traditional line of thought, BCI can offer keyboard and letter selection solutions through P300 [94], MI [105], and SSVEP [106]. However, it could be argued that these methods would be inherently less efficient than traditional ones. We believe that the more significant application of BCI for communication is the potential of creating a new communication medium called 'imagined speech'. By decoding human thoughts, semantic comprehension, and emotions, BCI could allow individuals to communicate through their thoughts alone [107, 108]. A recent innovation is the research by [109], which explores decoding brain activity into text. This offers the ability for the Metaverse user to interact socially by thought alone. Various works achieved this feat [109, 110, 111] in which ECoG, an implanted electrode grid, provides spinal cord injury patients with the ability to generate text through thought.
This technique decodes the brain's motor cortex region to interpret the thought or representation of the handwriting motor action (similar to MI). Then, the handwritten text is translated via deep learning into digitized signals. As suggested by [112], this technology enables imagined speech communication between the Metaverse and real-world users. In principle, the finding from the ECoG results can be applied to EEG BCI devices. Fig. 13 illustrates an example of using EEG signals to classify imagined speech for social interactions. The critical ongoing challenge is translating this work from ECoG to EEG. The semi-invasive BCI system (ECoG) is a direct form of brain activity sensing with minimal noise. In contrast, a non-invasive EEG BCI system may produce significantly worse signal quality. Another challenge is that neurodiversity and the requirement of extensive participant training will hinder the realization of imagining speech technology. The outlined BCI paradigms offer the potential to build a more intuitive user interface and social interaction for Metaverse use. The ongoing challenge is to develop a reliably detected paradigm in an EEG signal, which requires minimal training, does not unnecessarily distract the user, and can be used by various users. These ongoing challenges indicate the need for further research and exploration into BCI to create a technology used for the Metaverse user. ### _Key Technologies to Enable the Human Digital within the Metaverse_ Fig. 14 outlines the core components of creating the human digital twin. In this survey, we identify several technologies enabling HDT within the Metaverse. Technologies such as BCI, wearable biosensors (heart rate, muscle, IMU, and temperature), smartphones, and AI, can be integrated into the Metaverse to formulate the HDT to replicate life-like representation of the user's cognition, emotions, thoughts, and movements. #### V-A1 Wearable Brain-Computer Interface One clear advantage of using human brain signals to construct HDT avatars in the Metaverse is the potential reduction of the number of sensors and wearable devices required for users, leading to lower production costs and increased mobility and creativity. The brain signals contain a wealth of information about physical and mental health, such as the ability of EEG to complement electrocardiograms in predicting and diagnosing indicators of pathological perturbations, as demonstrated in brain-heart interaction studies [117]. As a result, the number of electrocardiogram sensors may be reduced or eliminated. Using BCI to control prosthetic devices, as described in [118], opens the door to potential BCI applications such as performing activities of daily living. Other BCI approaches can translate motor imagery and prosthetic limb movements into control of virtual avatars, eliminating the need for external sensors on the body [119]. With the advancement of technology, we can expect future BCI-enabled Metaverse systems to feature lightweight, highly mobile BCI devices for interaction with Metaverse applications. One of the central challenges in deploying HDT in the Metaverse is ensuring high accuracy in the data measured by integrated VR-BCI headsets, compared with data collected from conventional sensors. To address this challenge, future research may need to investigate the correlations and connections between brain signals and other human biological signals, such as the electrocardiogram (ECG) and electromyogram (EMG) [117]. 
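As a toy illustration of that kind of cross-signal analysis, the sketch below compares windowed EEG alpha-band power against a simultaneously recorded heart-rate series. Everything here (channel choice, band, window length, and the use of a simple Pearson correlation) is an assumption for illustration, not a method proposed by the cited works.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def bandpower(segment, fs, band=(8.0, 12.0)):
    """Average power of one EEG segment inside a frequency band (alpha by default)."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), int(fs * 2)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def eeg_heart_rate_correlation(eeg, heart_rate, fs_eeg=250.0, window_s=10.0):
    """Correlate per-window EEG alpha power with a per-window heart-rate value.

    eeg:        1-D EEG samples from one channel (assumed synchronized with heart_rate).
    heart_rate: one heart-rate reading per window, same window grid as the EEG.
    """
    win = int(fs_eeg * window_s)
    n_windows = min(len(eeg) // win, len(heart_rate))
    alpha = [bandpower(eeg[i * win:(i + 1) * win], fs_eeg) for i in range(n_windows)]
    r, p = pearsonr(alpha, heart_rate[:n_windows])
    return r, p

# Synthetic example: independent random signals, so the correlation should be near zero.
rng = np.random.default_rng(1)
eeg = rng.normal(size=250 * 600)      # 10 minutes of EEG at 250 Hz
hr = rng.normal(70, 5, size=60)       # one heart-rate value per 10 s window
print(eeg_heart_rate_correlation(eeg, hr))
```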
Another challenge is real-time synchronization and communication between HDT avatars and the Metaverse users, which is essential for maintaining high-quality data within the Metaverse. These challenges are discussed further in Section V-C. Once the correlations and relationships between signals among different brain lobes and human biological signals, e.g., ECG and EMG, are recognized, a wide range of applications for the Metaverse can emerge. Multimodal machine learning techniques, which have recently advanced in processing large amounts of data from various sources or distributions [120, 121], can be a central component in a range of Metaverse applications. In [122], the authors proposed an MID BCI control framework that can work with multimodal signals, i.e., EEG and fNIRS. In particular, the proposed multimodal classifier based on a Convolutional Neural Network (CNN) can extract spatial and temporal features of both EEG signals and fNIRS images, thus resulting in higher classification accuracy. In [123], the authors proposed an interactive social platform that integrates eye gaze, EEG signals and peripheral psychophysiological signals of children with Autism Spectrum Disorders (ASD) in a VR setting. The aim of the study in [123] is to understand the underlying factors that affect ASD children through emotion recognition tasks in a VR environment. As a result, potential future works can improve the emotion recognition abilities and eventual social functioning of children with ASD. Although the works mentioned earlier achieved adequate performance with multimodal data in VR environments, incorporating such approaches into the Metaverse is still a significant research gap. The main difference between conventional VR settings and the Metaverse is that multiple HDT avatars are involved in the Metaverse, yielding more complicated interactive social systems between the HDT avatars. In other words, multiple 'brains' of different individuals would be synchronized in the Metaverse. To fully understand the social connections and interactions between such HDT avatars, conventional approaches considering a single deep learning/machine learning classifier for an individual may fail to apply in a social HDT avatar setting [26]. To address these challenges, transfer learning [124] and meta-learning [125] can be applied. The transfer learning and meta-learning techniques share the same interest in utilizing underlying transferable features of the input signals, e.g., EEG, among different individuals. Once the learning models are trained, they can be transferred or directly applied to different HDT avatars with minimal fine-tuning. As a result, the Metaverse applications can maintain only a small number of learning models while guaranteeing high prediction accuracy on different tasks, e.g., emotion recognition and seizure prediction, compared with conventional machine learning approaches. This can reduce the maintenance, deployment, and scalability costs for the Metaverse applications.

Fig. 14: An outline of the key technologies and information involved in mapping the Human digital twin from human body sensing (with BCI and other supporting technologies). The graph model illustrates the human digital states containing real-time information of the human user.

#### V-A2 Wearable Sensor Technologies Wearable and portable technology has become ubiquitous in the current digital era.
BCI technology can unlock a wide array of human sensing capabilities for a life-like HDT and can be further enhanced through additional wearable sensors. There are many platforms for wearable sensors such as smartphones, smartwatches, and smart clothing/jewellery [126]. These platforms utilise multiple types of sensors, such as pulse oximetry, electrodermal activity, global positioning system (GPS), inertial measurement unit (IMU), microphones, cameras, and temperature sensors (thermometers or thermistors). AI personalization and IoT data sharing can enhance the interpretation of these sensor measurements. These multimodal measurements and advanced technology culminate into a sensory information mapping system for emotions, cognitions, thoughts, and body movements. **Smartphones** are carried as an essential technology for everyday life. Over 80\(\%\) of the world's population is estimated to own a smartphone [127]. Smartphones offer a portable platform for high-performing processing, network connectivity, and various onboard (or attached) sensors [128]. Additionally, smartphones can be the gateway device for users to enter and interact with the Metaverse [129]. For the HDT-based Metaverse interaction, the smartphone can measure physical (steps and calendar/schedule logging), locational (GPS location and IoT geographical data), and social activity (AI personalisation and IoT social data) [130]. This information can be used to formulate the activities and location of the HDT as a representation of the user in the Metaverse. **Smartwatches** is an emerging technology that is growing in popularity in daily life. A smartwatch is a wrist-worn computing device that is capable of communicating with other smartphones and computer devices [131]. Smartwatches are often used as a monitor fitness, an extension of a smartphone (phone calls, messaging, payment, and music), and as an assistive technology. Typically, smartwatches (such as Fitbit, Garmin, Apple, or Samsung watches) will carry a range of integrated sensors such as pulse oximetry, ECG, EMG, temperature, and IMU [132]. These sensors can actively measure physical exercise, basic emotional state (meditation and stress), health metrics (cardiovascular), and longitudinal activity (sleep, steps, and location) [133]. Within the Metaverse, the HDT can utilise metrics to accurately represent real-world users' current bodily/mental state and physical activity. ### _Human Digital Twin Interaction within the Metaverse_ Fig. 15 represents the different interaction scenarios we envision the HDT will engage within the Metaverse. The lifecycle of the HDT begins with the user leaving the Metaverse (Scenario 1: User-HDT synchronization). The HDT will replace the user's avatar and become of representation of the user when the user is present in the real-world. The HDT will continuously synchronize with the user state through various wearable technologies outlined in Fig. 14. Within the Metaverse, the HDT can interact with the avatars of other Metaverse (Scenario 2: HDT-Avatar Interaction). The HDT will use technology, such as natural language models, to simulate a life-like interaction between an HDT and a Metaverse user. Alternatively, if two real-world users interact, their HDT will also interact (Scenario 3: HDT-HDT Interaction). This interaction will centre around the transference/sharing of information and ensuring that real-world users can inform each other with up-to-date information, similar to using social media. 
HDT's lifecycle ends when the real-world user enters the Metaverse (Scenario 4: User-HDT replacement). The Metaverse user's avatar would replace the HDT, and all of the HDT's activities would be synchronized with the Metaverse user. #### Iv-B1 User-HDT Synchronization The HDT addresses the key problem of discontinuity of a Metaverse user once they exit the Metaverse. In this scenario, when a user exits the Metaverse, the HDT will be activated to replace the user. As outlined in Section V-A, the HDT can leverage several technologies, such as BCI, smartphones, smartwatches, and AI, to create a life-like extension and representation of the user within the Metaverse [115]. Through AI personalization and behaviour recognition, the HDT can interact with other avatars of users and HDTs within the Metaverse. The HDT can represent the current status of the real-world user through health and biometric tracking technology [113, 116, 134]. The HDT can accurately reflect the real-world user's current status, activities, and interactions within the Metaverse. #### Iv-B2 HDT-Avatar Interaction Social life-likeness is important when an HDT interacts with other Metaverse users' controlled avatars. While the HDT will likely have a unique status/identity within the Metaverse, the HDT must be life-like to supplement the user base of the Metaverse (similar to non-playable characters in video games) [135]. An HDT-to-avatar interaction can provide a naturalistic interface for Metaverse users to be informed on the status, activities, and locations of other users in the real-world [136]. By utilising natural language models, such as ChatGPT, the HDT can engage in authentic social conversations with Metaverse users [137, 138]. This technology can further personalized natural language models by analysing imagined speech [112] and prior social engagements (social media or messaging) [139]. Overall, the HDT will act as a representation of the real-world user that is capable of engaging in meaningful social interactions with other Metaverse users. #### Iv-B3 HDT-HDT Interaction HDT-to-HDT interaction is another distinctive form of interaction within the Metaverse. This type of interaction occurs when two real-world users interact within the real-world at the same location/area. In this situation, the HDTs will communicate in an esoteric manner to share information and update the real-world user on various life events, similar to social media information sharing [140]. This type of interaction shares similarities to the spatial location functions of the Snapchat social platform [141]. Within the Snapchat app, the spatial location of the users is visualised on a map with Bitmoji avatar representation. This spatial map is used to relay information, news, and other social events to groups of users occupying the same spatial area [141, 142]. BCI and other wearable technologies can further enrich the behavior of HDT and shared information by measuring the thoughts and well-being of the users. With the growing usage of social media, HDT-to-HDT interaction can be a useful tool for the expedient social transference of information between two people. #### V-B4 User-HDT Replacement The end of the HDT's lifecycle is when the real-world user re-enters the Metaverse (through smartphone, VR/AR/XR, or other devices). In this scenario, the user's avatar will replace the HDT and the Metaverse user can return to interacting within the Metaverse. 
During this scenario, the HDT can provide highlights and narratives akin to social media stories [143]. Creating a narrative can enhance the acceptance of the HDT, as narratives are essential to informing and continuing the user's interaction/connection to the Metaverse [135]. Furthermore, the process of updating the user on the HDT's activities provides an incentive for other Metaverse users to interact with the HDT. The importance of the avatar's replacement of the HDT (over the HDT co-existing with the user) is to maintain the HDT's identity as an extension of the user. If the user co-exists with their own HDT in the Metaverse, it may create a sense of disembodiment or detachment between the HDT and the user [144]. Therefore, the process of a user-controlled avatar replacing the HDT is paramount in facilitating a sense of embodiment and continuity for the user. ### _Potential Challenges for the Development of the Human Digital Twin in the Metaverse_ Another critical challenge in enabling Metaverse applications with BCI is real-time synchronization and communication between the Metaverse users and their HDTs. In the following, we focus on the communication aspect from two main perspectives: (i) communications between BCI headsets and other infrastructures in the physical world, and (ii) communications between human avatars and other avatars or technologies/virtual services in the Metaverse. The first type of real-time communication is in the physical world, while the latter is in the Metaverse. Real-time communications in the physical world aim to provide robust and reliable connectivity for users equipped with VR/BCI headsets. The transmission of brain signals over the network systems should meet low latency and error requirements. On the other hand, real-time communications in the Metaverse mainly occur between the users or their avatars and the environment, objects, or other avatars within the Metaverse. As a result, the requirement of real-time communications in the Metaverse is to achieve continuity of user experience, where there is a parallel presence between the Metaverse and the real world. #### V-C1 Real-time Communications in the Physical World To achieve reliable and robust real-time communication between BCI headsets and other devices, the infrastructures that support wired/wireless communications play an essential role. In conventional BCI systems, e.g., BCI2000 [145], real-time communications usually refer to scenarios where user brain signals are acquired with wired connections. Thanks to hardware development, wired connections are being replaced by wireless connections with increasing mobility and reliability. Early research works in wireless BCI utilize Bluetooth for short-range communication between the BCI headset and the processing unit, i.e., a computer [146, 147, 148, 149]. Although Bluetooth shows its capability in real-time communication, its communication range is relatively short, i.e., from a few meters up to a range of ten meters. Further efforts to increase the communication range of wireless BCI systems are reported in [146, 150, 151, 152, 153]. With significant increases in communication range [146, 150, 151] and joint computing-resource allocation [153], wireless BCI shows its potential for enabling real-time communications between BCI headsets and other infrastructures at large scale. As mentioned earlier, several works successfully investigated wireless BCI systems' connectivity and reliability.
Fig. 15: An illustration of the different scenarios where the Human Digital Twin (HDT) interacts with the Metaverse user, other avatars of Metaverse users, and other HDTs. Each scenario depicts the types of interactions that the HDT can engage in and the supporting technology to enable the interaction.

However, when large-scale systems include heterogeneous wireless devices and medium access schemes, the radio resources should be efficiently managed [154]. Specifically, Bluetooth and fiber connections may not always be available for BCI users because of coverage problems of such technologies. Such problems require new wireless access methods, radio resource allocation schemes, and broader bandwidth. In the following, we discuss the potential solutions for the problems mentioned above in wireless BCI systems. To handle multiple requests and the transmission of not only brain signals but also VR content over the wireless environment, Time Division Multiple Access (TDMA) can be an efficient solution. With TDMA, the time horizon is divided into multiple time slots, and the BCI users can communicate with the service providers or with each other in reserved time slots [155]. However, using time-domain division also brings scheduling and data packet collision problems, thus making TDMA-based systems hard to scale. Recent advances in antenna design can enable many BCI users to use Multiple Input Single Output (MISO) and Multiple Input Multiple Output (MIMO) communications. For example, in MISO systems, the service provider can be a multi-antenna transmitter that can serve multiple users or multiple groups of users via Spatial Division Multiple Access (SDMA) [156]. Advanced multiple-access methods can utilize the power domain to transmit VR applications and brain signals. For example, Non-orthogonal Multiple Access (NOMA) [157] and Rate-splitting Multiple Access (RSMA) [158, 159] can be used to enhance the data transmission rate, thus increasing the quality of service for the users. Besides advanced multiple access methods, 6G systems can utilize broadband communication techniques such as millimeter wave (mmWave) and Terahertz to enhance the data transmission rate further. In such 6G systems, the data transmission rate is envisioned to be ten times faster than that of 5G systems, enabling seamless experiences for data-demanding applications such as the BCI-enabled Metaverse. The techniques and methods above in wireless communications are promising for BCI-enabled Metaverse applications. However, the underlying theory behind such techniques and methods is based on Shannon's theory of wireless channel capacity. In other words, the data transmission rate of such methods cannot exceed the Shannon bound. Recent advances in machine learning and wireless communication techniques enable transmission beyond the Shannon bound with semantic compression and semantic communication [160]. Unlike conventional data compression techniques, such as Shannon-Fano and Huffman coding, semantic compression is designed especially for machine-based communications in which intelligent machines only need specific semantic information to encode/decode the data successfully. On the other hand, semantic communication refers to using language and other symbolic systems to convey meaning between individuals or groups. Semantic communication involves transmitting words or signals and interpreting those words or signals within specific contexts for their intended meaning.
Considering the brain signals as information that needs to be transmitted, we need further investigations on the semantic meaning of the brain signals, e.g., semantic reasoning of EEG signals [161], to design effective semantic communication frameworks for BCI-enabled Metaverse applications. #### V-A2 Real-time Communications for the Human Digital Twin in The Metaverse Unlike real-time communications in the BCI/VR systems, real-time communications in the Metaverse refer to the scenario where users can interact with the Metaverse environment and their digital avatars in real-time. For this, the Metaverse applications should be able to provide highly user-driven embedded facilities such as real-time recommendations and individual support for the users through the virtual environment and digital avatars. For example, digital avatars can give valuable suggestions to users based on the analyzed brain signals. On the other hand, the quality of immersion of users in the Metaverse through VR/XR should also be highly considered. Our discussion about imagined speech communications [162], adaptive VR/XR environment rendering [59], error-related behaviors detection [60], and HDT in the previous section can also be applied in this context. The embodiment of the human digital avatars, i.e., HDT, can be further enhanced by using a digital twin. With a digital twin, the human digital avatars can actively mirror their real-world counterparts through sensory data [163]. In addition, digital twin avatars can simulate or predict potential user anomalous behaviors with reinforcement learning and deep learning in real-time [164, 165]. This functionality can also support other Metaverse-related interactions such as user collaboration, conferencing, presentations, and educational training/demonstrations [166]. ## VI Challenges, Emerging Applications, and Future Research Directions Despite the success in clinical trials and healthcare applications, there are still debates on using BCI technologies for commercial products. We discuss the open issues regarding the usability of BCI for the Metaverse, ethics, and security. We also review potential research directions of BCI toward a human-centric design for the Metaverse. ### _Current Challenges_ #### Vi-A1 Hardware Development The first challenge comes from brain signal extraction for BCI applications. Although the portability and mobility of non-invasive BCI technologies enable commercial products, the brain signals extracted with such non-invasive BCI technologies through either dry or wet sensors usually come with a low signal-to-noise ratio (SNR) compared with the invasive methods. The reason is that the external sensors in non-invasive methods are further away from neuronal sources, plus noise, muscle contraction artifacts, and other tissue-related interference, making signals extracted less effective. On the contrary, invasive methods with the implanted electrode grids under the skull can provide less noisy and more reliable readings. However, the usability of invasive methods still faces critics and ethics concerns. The hardware and software capabilities remain open challenges toward the two distinct directions of the BCI methods. The invasive BCI research and development may focus on designing microchips and grids that can be effectively implanted under the skull. The recently funded companies such as NeuralLink 1 and BrainGate 2 are operating toward this vision. 
However, such companies still develop their products based on clinical trials for patients with paralysis or on animal studies. Apart from this direction, other companies such as Neurable 3 and OpenBCI 4, focusing on non-invasive BCI methods, aim to develop portable, highly mobile BCI devices for daily use. Moreover, OpenBCI's Galea headset is a VR-BCI device that allows users to play games and interact with virtual environments through thoughts. We can expect potential Metaverse applications based on VR-BCI technology in the near future. However, the supporting software must be further developed due to the noisy nature of non-invasive BCI signals. The weight and wearing comfort of such devices should also be considered in design and development. Footnote 1: [https://neurallink.com/](https://neurallink.com/) Footnote 2: [https://www.braingate.org/](https://www.braingate.org/) Footnote 3: [https://neuralto.com/about](https://neuralto.com/about) Footnote 4: [https://openbci.com/about](https://openbci.com/about) #### VI-A2 Software Development The software development for BCI may heavily focus on extracting and monitoring brain signals. Understanding an individual's brain signals is still a challenging task, let alone the neurodiversity across the population in age, race, and health. For example, the EEG signals of one individual may differ markedly from those of others in terms of signal amplitude and delayed responses to external events. As a result, one piece of software or one algorithm does not always fit all. Dealing with the neurodiversity between populations is still an open research issue [26]. A few research works have attempted to address this problem, but the existing methods are still limited to a few research participants [153, 167, 168, 169]. The common approaches of the above studies are additional feature extraction techniques [167], feature representation [168], multimodal machine learning [169], and meta-learning [153]. The main goal of these works is to ensure that the trained machine-learning models can be applied to a new BCI subject without further user-specific training or calibration. Thus, this can significantly enhance the scalability and interoperability of the system. For large-scale BCI-enabled Metaverse systems, besides developing accurate signal processing schemes, resolving the neurodiversity of brain signals should be further investigated. #### VI-A3 Security and Privacy Recent studies showed that analyzing resting-state fMRI data [170] and local field potentials [171] of BCI users can reveal early indicators of diseases such as Parkinson's and Alzheimer's, respectively. In BCI-enabled Metaverse applications, BCI headsets connected to the Internet also expose the users to potential privacy threats, such as hackers, corporations, or government agencies that can track or even manipulate an individual's mental experience [172]. Specifically, advertisement-based applications in the Metaverse can gather BCI data from users to tailor ads that target suitable individuals. Similar to the privacy issues of existing social media platforms such as Facebook and Twitter, in which the user data is the product of the social media platform, the users' information can be sold to advertising companies. To address this challenge, a decentralized Metaverse with a transparent consensus mechanism, e.g., blockchain [173], can prevent the data manipulation problems of a firm/company in a centralized Metaverse.
#### VI-A4 Ethics Thanks to the rapid growth of machine learning and deep learning algorithms, much research has shown that using machine learning and deep learning enables highly accurate predictions and classifications in different BCI settings [26]. Although such algorithms successfully achieve high performance, they are usually difficult or impossible to interpret [174, 175]. As a result, this introduces an unknown and unaccountable process between the neural pathways within the brain and the external technologies within the Metaverse [176]. For example, the deep learning-aided auto-correction mechanism in imagined speech communication [112] may send unintended messages that the user would prefer to keep private. To avoid this issue, the implementation of a BCI-enabled Metaverse system needs to prioritize "how" and "when" to send and/or collect the imagined speech of the users. Toward an ethical solution for this, a collaborative project named BrainCom [177], funded by the European Union, is developing speech synthesizers with BCI technologies. Such technologies aim to vocalize the users' thoughts accurately while respecting ethical concerns. ### _Emerging Applications and Future Research Directions_ Although BCI has a long development history, integrating BCI into the Metaverse is still in its infancy. Thus, we expect that the interest of industry and academia in this topic will expand in the following years. In this section, we discuss the emerging applications of BCI toward the Metaverse. Furthermore, we present several potential research directions. #### VI-B1 Integrated VR-BCI Devices Recent developments in hardware and software for BCI applications in VR and XR have enabled reductions in the size and cost of integrated VR-BCI devices. There are start-ups developing headsets that allow users to control the VR environment by using their EEG signals. For example, Galea5 and Cognixion ONE6, headsets developed by OpenBCI and Cognixion, respectively, use 6 to 8 dry EEG electrodes in combination with a transmission module (e.g., Bluetooth and WiFi) and an eye-tracking module. Galea is a VR-based headset, while Cognixion ONE is an AR-based one, so these devices function very differently. Galea focuses on translating EEG signals into digital commands for VR games and applications. On the other hand, Cognixion ONE uses a combination of EEG signals, eye movement, and facial expressions to control digital devices and interact with augmented environments. Overall, both Galea and Cognixion ONE are innovative and exciting products that are pushing the boundaries of brain-computer interface technology. However, they have different strengths and applications, so the choice between them will depend on the user's needs and preferences. For example, Galea is more promising for gaming applications in the BCI-enabled Metaverse, where users can control their in-game characters by thought, and Cognixion ONE is more appropriate for healthcare Metaverse applications. Footnote 5: [https://galea.co/](https://galea.co/) Footnote 6: [https://one.cognixion.com/](https://one.cognixion.com/) Footnote 7: [https://www.cognixion.com/](https://www.cognixion.com/) #### VI-C2 Multitasking in the Metaverse Most of the current machine learning and deep learning approaches for BCI applications are task-specific, meaning that a machine learning model is associated with an individual, given a specific demand, e.g., emotion detection and seizure prediction [26].
This approach is suitable for the classifiers to be deployed at the user site, e.g., a pre-installed software in the headset. However, when it comes to multitasking applications, e.g., combined imagined speech and emotional recognition, the task-specific classifier/software may fail or downgrade the performance. For example, in some Metaverse applications such as virtual fighting, fitness, and dance gaming, the multisensory data from EEG, facial expression, and emotional states can be jointly exploited to enable secure imagined conversations between groups of users and reduce potential motion sickness induced by fast-moving scenes. To fully exploit the multisensory data in such scenarios, multimodal machine learning approaches, which we described in Section V-A, can be a suitable approach. With multimodal machine learning, we can expect only one machine learning model to assist the user in various tasks, from emotional recognition to imagined speech communication, without requiring changes or upgrades in software and hardware. #### Vi-C3 Potential Research Directions Regarding the technical challenges and usability of the BCI-enabled Metaverse concept, different aspects of this new concept must be further investigated and studied. The following discusses the potential research directions for the BCI-enabled Metaverse systems. **Machine Learning for processing heterogeneous datasets:** In recent years, the notable trend in BCI is applying advanced machine learning techniques for analyzing brain signals. Similarly, machine learning techniques will be a significant component in the BCI-enabled Metaverse system. Unlike conventional problems in BCI research, which extensively focus on designing highly accurate classifiers for specific tasks [26], the integration of BCI into Metaverse brings new challenges. One of the most notable is addressing the complexity of the system processing multiple sources of human brain signals. As shown in early findings [167, 153, 169], the participation of multiple users in a task yields a degraded performance for the classifier due to the neurodiversity among the users. The neurodiversity suggests that the brain signals such as EEG or fMRI are highly individual and different among users based on gender, age, and other factors [178]. As a result, a classifier trained with one dataset, e.g., EEG, of a person does not work well with one another. Conventional approaches with machine learning require different classifiers for different people, thus resulting in inefficient computing and poor usability. To address this problem, further analysis on the brain signals [167, 169] or advanced machine learning algorithm, e.g., meta-learning [153], can be applied. As a result, only one classifier can be used across the users without degrading the performance. Although a few early works address the neurodiversity problem, the large number of data generated from human activities, including human brain signals, poses new challenges. The virtual environments in the Metaverse also raise technical concerns about combining virtual environments' constraints, e.g., motion sickness and delay, into creating effective machine learning models. The multiple modalities of the data sources make it hard to understand and analyze the informative features. The multimodal machine learning techniques we discussed in the previous sections can be a potential solution. 
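To make the "one shared model, light per-user adaptation" idea discussed above more concrete, the sketch below pre-trains a small EEG classifier on pooled data from many users and then fine-tunes only its final layer on a handful of trials from a new user. It is a hedged illustration of the transfer-learning direction, not the method of any cited work; the feature dimension, label set, and training schedule are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class EEGClassifier(nn.Module):
    """Tiny classifier over pre-extracted EEG feature vectors (e.g., band powers)."""
    def __init__(self, n_features=64, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

def train(model, x, y, params, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-train on pooled data from many users (synthetic stand-in data here).
x_pool, y_pool = torch.randn(2000, 64), torch.randint(0, 3, (2000,))
model = EEGClassifier()
train(model, x_pool, y_pool, model.parameters())

# 2) Adapt to a new user: freeze the shared encoder and fine-tune only the head
#    on a small calibration set, keeping per-user training cost minimal.
x_new, y_new = torch.randn(30, 64), torch.randint(0, 3, (30,))
for p in model.encoder.parameters():
    p.requires_grad = False
train(model, x_new, y_new, model.head.parameters(), epochs=20)
```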
Apart from multimodal approaches, the success of attention-based machine learning models, e.g., Transformer [179], in understanding complex problems, ranging from visual scenes to language understanding, make it a potential candidate for tacking the multiple modalities of the evolved data in the Metaverse. The attention-based mechanisms with designed attention vectors make the machine learning models can pay attention to the most valuable parts of the data, thus making the data extraction process more effective. In addition, the applications across the virtual worlds in the Metaverse can share similar features, e.g., user behaviors in applications such as e-commerce. Therefore, we can better exploit the valuable features and optimize the machine learning models by learning transferable features of the virtual worlds. In such scenarios, transfer learning techniques are commonly used [124]. **Human Digital Twin for Maintaining Continuity in the Metaverse:** As discussed in Section V, HDT will play a key role in helping us better understand the virtual embodiment of the human body in the Metaverse. In future Metaverse applications, HDT should provide intelligent interfaces for digital avatars by using the brain signals and leveraging 3D visual effects of the human body, e.g., facial expression and body pose, to construct individual avatars. As a result, the continuity of the Metaverse can be maintained in which the HDT can auto-generate or self-operate in the virtual environment with less control from humans. The human users can join the Metaverse and replace the HDT by their avatars when wearing VR/XR headsets (see Fig. 15). To this end, the brain signals can be mixed with other human data such as body motion with physical engines [8, 9], and facial expression [10] to construct the most realistic and personalized HDT in the Metaverse. This can be a new research area that lies at the intersection of neuroscience, e.g., using human biological data, and 3D avatar construction, e.g., using facial expression, limb movement, and kinematics study of the human body. ## VII Conclusion This survey provides an in-depth overview of non-invasive BCI technologies and their potential applications in the Metaverse. With BCI-enabled applications, the Metaverse is expected to be highly personalized and customized to individual needs. Furthermore, the users can interact with the virtual environments with a limited number of sensors, such as kinematics sensors and handheld devices. We also discussed a novel concept of HDT, where the twin representations of the users in the virtual worlds can be developed using a digital twin. Lastly, we discussed the open issues, including security, privacy, hardware and software capability. The potential research directions were also covered. Alternatively, the survey outlines the initial steps for the potential research area evolved at the intersection of BCI and the Metaverse technologies, e.g., VR/XR, 3D environment construction, and real-time communication and synchronization.
2306.07464
Unlocking Sales Growth: Account Prioritization Engine with Explainable AI
B2B sales requires effective prediction of customer growth, identification of upsell potential, and mitigation of churn risks. LinkedIn sales representatives traditionally relied on intuition and fragmented data signals to assess customer performance. This resulted in significant time investment in data understanding as well as strategy formulation and under-investment in active selling. To overcome this challenge, we developed a data product called Account Prioritizer, an intelligent sales account prioritization engine. It uses machine learning recommendation models and integrated account-level explanation algorithms within the sales CRM to automate the manual process of sales book prioritization. A successful A/B test demonstrated that the Account Prioritizer generated a substantial +8.08% increase in renewal bookings for the LinkedIn Business.
Suvendu Jena, Jilei Yang, Fangfang Tan
2023-06-12T23:42:08Z
http://arxiv.org/abs/2306.07464v1
# Unlocking Sales Growth: Account Prioritization Engine with Explainable AI ###### Abstract. B2B sales requires effective prediction of customer growth, identification of upsell potential, and mitigation of churn risks. LinkedIn sales representatives traditionally relied on intuition and fragmented data signals to assess customer performance. This resulted in significant time investment in data understanding as well as strategy formulation and under-investment in active selling. To overcome this challenge, we developed a data product called Account Prioritizer, an intelligent sales account prioritization engine. It uses machine learning recommendation models and integrated account-level explanation algorithms within the sales CRM to automate the manual process of sales book prioritization. A successful A/B test demonstrated that the Account Prioritizer generated a substantial +8.08% increase in renewal bookings for the LinkedIn Business. sales-intelligence, upsell/churn prediction, explainable AI, causal measurement, sales experimentation
## 1. Introduction At LinkedIn, we have two broad categories of sales representatives (reps): Account Executives (AEs) who focus on acquiring new customers and converting them into first-time LinkedIn B2B buyers, and Account Directors (ADs) who serve existing customers and focus on their growth. Our focus is ADs and optimizing their books: At the beginning of every fiscal year, ADs are assigned a book consisting of a set of accounts.1 Their responsibility is to renew these accounts and drive growth by upselling and cross-selling LinkedIn's hiring and learning products, following the SaaS renewal cycle. Accounts are ordered by \((cs_{i}-ps_{i})\), the difference between an account's upcoming spend \(cs_{i}\) and its previous spend \(ps_{i}\), from highest to lowest. The larger positive values denote higher upsell potential, smaller negative values denote higher churn risk, and values near 0 denote accounts staying relatively flat. In addition to analyzing the overall spend at the account level, we offer sales reps valuable insights into upselling opportunities and potential risks associated with specific products. This becomes particularly relevant when customers engage with multiple LinkedIn products. To achieve this, we can formulate a similar approach with previous product quantities \(pq_{1},pq_{2},\ldots,pq_{n}\) and upcoming product quantities \(cq_{1},cq_{2},\ldots,cq_{n}\) to make an ordered list of \[(cq_{1}-pq_{1}),(cq_{2}-pq_{2}),\ldots,(cq_{n}-pq_{n})\] for each LinkedIn product. ### Label Creation #### 2.2.1. Label Granularity An account serviced by the sales reps could purchase multiple types of LinkedIn products. The customer's spending behavior varies based on the product type and quantity. Consequently, there are numerous upsell and churn events that occur for each account. Here is an example to illustrate this: 1. As shown in Figure 1, consider Customer 1 who has 3 ongoing contracts with LinkedIn. They increased the number of Recruiter Licenses2 by adding one additional Recruiter License to Contract 2. Simultaneously they also reduced their spend on Jobs3 by removing two job licenses from Contract 3.
Since the additional spend on Recruiter Licenses (upsell) is outweighed by the reduction in spend on Jobs (churn), the overall spending of Customer 1 with LinkedIn decreases at time \(T-1\). Footnote 3: The LinkedIn Jobs platform helps companies post their jobs on LinkedIn and easily target, prioritize, and manage qualified applicants. 2. As highlighted in Figure 2, this scenario leads to the creation of three distinct labels at time \(T-1\) (one overall & one for each of the two products). At the overall account level, we observe a churn label indicating a reduction in overall account spending (given the individual product spend added was less than the spend on products removed). At the individual product level, we have an upsell label for the Recruiter product and a churn label for the Jobs product. By collecting these upsell and churn labels gathered from the accounts (both overall spend, and products added/removed) over the past two years, we aim to predict the upsell and churn potential for all the accounts due for renewal in the upcoming time period \(T\). Through the prediction of both overall account-level spend and constituent product-level quantities, our approach enables the provision of granular and tailored recommendations to sales reps. This empowers them to strategically prioritize accounts based on their potential for higher overall spend, while also facilitating informed decisions regarding the optimal product mix for each individual customer.

Figure 1. Historical upsell/churn events.

Figure 2. Upsell/Churn labels & predictions.

#### 2.2.2. Capturing upsell/churn events throughout the SaaS renewal cycle As with other SaaS products, while churn happens during renewal, upsell can happen throughout the year - e.g., a customer increases spend mid-cycle by adding additional products to their contract. This leads to a waterfall trend of product bookings/quantity throughout the renewal cycle, so defining a label for churn/upsell becomes challenging. To illustrate this, let's consider Figure 3. 1. In Case 1, we have a straightforward scenario where a customer adds an extra Jobs License during renewal in time \(T-1\), resulting in a single upsell event. This case has already been covered in the previous section. 2. In Case 2, in addition to Case 1's single upsell event, there is an additional upsell event since the customer added an extra Jobs License mid-cycle before the renewal (through an add-on opportunity4), making a total of two recorded upsell events. Footnote 4: An add-on opportunity is a mid-renewal-cycle customer purchase event, wherein the customer adds more products to their portfolio and the opportunity renews as per the regular renewal cycle. 3. Finally, in Case 3, we encounter a non-renewal-cycle add-on opportunity called add-on non co-term5 (non co-term opportunities differ from regular add-ons because non co-term triggers its own renewal chain). On top of the two upsell events in Case 2, the customer adds one more Jobs License during the add-on non co-term opportunity and a 4th Jobs License during the non co-term renewal. In totality, this scenario results in 4 upsell events for this customer. Footnote 5: An add-on non co-term opportunity is similar to a regular add-on opportunity except that it triggers its own renewal cycle. In order to collect all these upsell events across the renewal cycle, we implement a system where labels are generated in overlapping time periods throughout the cycle, instead of assigning a single label to an entire renewal cycle, as sketched below.
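The sketch below illustrates, under an assumed toy bookings schema, how such labels might be computed: an account-level spend delta per labeling window plus per-product quantity deltas, with one sample emitted per overlapping window. It is an illustrative reconstruction of the label logic described in this section, not LinkedIn's production pipeline.

```python
import pandas as pd

def make_labels(bookings, windows):
    """Emit one labeled sample per account per labeling window.

    bookings: one row per (account_id, product, event_date) with signed columns
              `spend_delta` and `quantity_delta` (assumed schema for illustration).
    windows:  list of (start, end) date strings; overlapping windows capture
              mid-cycle add-ons as well as renewal-time changes.
    """
    samples = []
    for start, end in windows:
        in_window = bookings[(bookings.event_date >= start) & (bookings.event_date < end)]
        for account_id, grp in in_window.groupby("account_id"):
            row = {
                "account_id": account_id,
                "window_start": start,
                "window_end": end,
                # Overall account-level label: spend delta aggregated over the window.
                "spend_delta": grp.spend_delta.sum(),
            }
            # Product-level labels: quantity delta per product.
            for product, qty in grp.groupby("product").quantity_delta.sum().items():
                row[f"qty_delta_{product}"] = qty
            samples.append(row)
    return pd.DataFrame(samples)

# Toy example covering a renewal-time upsell, a mid-cycle add-on, and a later churn.
bookings = pd.DataFrame({
    "account_id": [1, 1, 1],
    "product": ["Jobs", "Jobs", "Recruiter"],
    "event_date": ["2022-07-01", "2023-01-15", "2023-07-01"],
    "spend_delta": [500.0, 500.0, -1200.0],
    "quantity_delta": [1, 1, -1],
})
print(make_labels(bookings, [("2022-07-01", "2023-01-31"), ("2023-01-31", "2023-07-31")]))
```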
In order to explain this, we take the example of Case 3 in Figure 3, and illustrate how labels are collected in this specific case, as illustrated in Figure 4: 1. Collect Sample 1: 1st Jobs upsell event is captured when we collect the label from Renewal \(T-1\) Year to Month of add-on, with the total upsell quantity = 1 Job. 2. Collect Sample 2: Between Month of add-on and Month of add-on non co-term we capture the 2nd Jobs upsell event, taking the total upsell quantity = 2 Jobs. 3. Collect Sample 3: Between Month of add-on non co-term and Renewal \(T\) Year, we capture the 3rd Jobs upsell event, with the total upsell quantity = 3 Jobs. 4. Collect Sample 4: Finally, between Renewal \(T\) Year to Renewal of the non co-term, we capture the 4th Jobs upsell event, taking the total upsell quantity = 4 Jobs. Hence for Case 3, we have 4 samples in the training dataset mapping to 4 Job upsell events. On the similar lines, for Case 1 we will have a single sample in the training dataset, and 2 samples for Case 2. Although this approach increases the complexity of the features used for labeling and generates multiple samples for each account, it provides more accurate labeling and a larger sample size across various customers. ### Feature Generation #### 2.3.1. Features used In the feature selection process, we curated a set of informative features to capture the key aspects influencing the upsell/churn prediction task. Some of the most important feature categories included past purchasing patterns, product usage, delivered ROI from LinkedIn to customers, spend in other LinkedIn business lines, online purchases of LinkedIn subscription products, talent trends such as hiring & attrition, customer traffic on LinkedIn.com as well as various macro features. We observe that product usage and ROI continue to be significant factors in a customer deciding to buy more. #### 2.3.2. Feature preprocessing In the data preprocessing phase, we employed several techniques to prepare the dataset for our machine learning model, aiming to enhance feature representation and address specific challenges in the data. Through an extensive exploratory data analysis (EDA), we identified highly correlated features and applied transformations to derive rate features. This process involved normalizing the features by dividing them with a reference variable for calculating ratios, effectively capturing the relative proportions or rates between the variables. For example we calculate the acceptance rate of recruiter messages by candidates rather than looking at only the volume of messages sent by recruiters. By incorporating these rate features, we mitigated the issue of multicollinearity, which can hinder model interpretability and potentially lead to overfitting. Figure 4. Label generation across overlapping time periods for Case 3. Figure 3. Different scenarios of upsell events. To handle categorical features, we employed label response encoding, a technique that maps categories to their corresponding target variable values. This approach not only converted categorical variables into numerical representations but also preserved the underlying relationship between the categories and the target variable. Label response encoding enables the model to effectively capture the impact of categorical features on the prediction task. Furthermore, to address outliers and extreme values in the Year-over-Year (YoY) booking features, we applied a capping mechanism. 
By setting upper limits on these variables, we restricted the influence of outliers on the model's training process, ensuring more robust and accurate predictions. These data preprocessing steps, driven by the insights gained from EDA, contributed to the optimization of feature representation, reduction of multicollinearity, and management of outliers, ultimately enhancing the performance and interpretability of our machine learning models. ### Modeling Framework The ordered list of \((cs_{1}-ps_{1})\), \((cs_{2}-ps_{2})\), \(\ldots\), \((cs_{n}-ps_{n})\) could be framed as a regression problem. In our work, we employed multiple XGBoost Regressors to tackle this task. One XGBoost model was trained to predict the delta in overall account-level spend, while multiple product-level XGBoost models were trained to predict the delta in product quantity across various product categories. These models were trained using two years of historical account and product bookings data and are currently retrained on a monthly basis. The magnitude of output of the models \((cs_{i}-ps_{i})\) is dependent on the size of the account and its baseline spend. To make accounts with different base sizes and having different ranges of account spend comparable among each other and generate a rank order for prioritization, we normalize the model outputs by passing them onto a log(base spend) based normalization step. Specifically, 1. Account spend model: Score range of \((cs_{i}-ps_{i})\) is normalized to \([-100,100]\). 2. Product quantity models: Score range of \((cq_{i}-pq_{i})\) is normalized to \([-10K,10K]\). The resulting normalized and rank-ordered list of accounts is then passed to the explanation layer, along with input features, to generate instance-level explanations. Post explanation generation we then feed the scores into the sales rep facing tools and CRM systems. The normalized score ordered list gives sales reps the ability to compare across their account book and decide whether they want to focus on upsell or churn. Many sales reps leverage these scores in a \(2\times 2\) table with the scores on \(x\) axis and renewal target on \(y\) axis to identify the segments of accounts such as high spend + high likelihood to grow, high spend + high churn risk, low spend + low churn risk and so on, and devise their sales strategies accordingly. Figure 5 shows the overall model architecture and deployment. ### Explanation Generation A key thing we learned from a focus group study with sales reps is that the scores alone may not be the most helpful. For sales reps to take action, they need to know the underlying reasons behind these scores, and they also want to double check these reasons with their domain knowledge. Even though some state-of-the-art model interpretation approaches (e.g., LIME(Beng et al., 2015), SHAP(Beng et al., 2015)) can help create an important feature list to interpret the ML-model provided scores, the feature names in these lists are often not very intuitive to a non-technical audience. The features also may not be well-organized (e.g., relevant features could be further grouped, redundant features could be removed). To deal with the above challenges, we have built and implemented a user-facing model explainer called CrystalCandle(Beng et al., 2015), which is a key part of developing transparent and explainable AI systems at LinkedIn. 
The output of CrystalCandle is a list of top narrative insights for each customer account (shown in Figure 6), which reflects the rationale behind the ML-model-provided scores. For example, the mocked insights in Figure 6 read:

This account is very likely to upsell. Its likelihood is driven by:
1. In the past 12 months, there are a total of 240 new hire(s) including 38 Director(s) and 7 VP(s), across 18 different functions, including 15 new hire(s) in the HR function.
2. In the past 12 months, 113 employees left the company, spanning 18 functions, including 29 Director(s) and 2 VP(s).
3. InMail response rate in the last month changed from 13% to 32% (+146%).
4. Monthly LCP viewers in the last 3 months changed from 88 to 116 (+32%).
5. Monthly LRI in the last 3 months changed from 28 to 35 (+25%).

Figure 6. Mocked top narrative insights generated by CrystalCandle for a specific customer in account-level upsell/churn prediction.

These narrative insights are much more user-friendly, giving sales teams more support to trust the prediction results and to extract meaningful insights. Figure 7 shows the pipeline of CrystalCandle. CrystalCandle serves as a bridge between the machine learning models (e.g., upsell/churn prediction models) and the end users (e.g., sales reps). The Model Importer extracts the model output from a set of major machine learning platforms (e.g., ProML [(7)]), and then in the Model Interpreter we apply model interpretation approaches to the extracted machine learning model output and generate the important-feature list for each sample. The Model Interpreter is compatible with state-of-the-art model interpretation approaches such as SHAP [(3)], LIME [(4)], K-LIME [(1)], and FastTreeSHAP [(5)]. We also feed some additional inputs into CrystalCandle at this stage, including additional feature information and narrative templates. We then conduct narrative template imputation in the Narrative Generator and produce top narrative insights for each sample. Finally, we surface these narrative insights onto a variety of end-user platforms via the Narrative Exporter.

Figure 7. CrystalCandle pipeline.

#### 2.5.1. Narrative Generator and Insights Design deep dive

The goal of the Narrative Generator is to produce the top narrative insights based on the model output and model interpretation results. Figure 8 shows how we build the feature clustering information file and narrative templates in the Narrative Generator. We build the four-layer feature hierarchy as the feature clustering information file. For each original feature, we identify its higher-level features, moving from super feature to category. The feature descriptions are naturally incorporated in the super feature names. We also construct a list of narrative templates, where each template is uniquely identified by its insight type. The rule for conducting narrative template imputation is provided by the last table, where one narrative is constructed for each super feature. The insight item determines the position at which to impute the feature values into the narrative templates. For example, to construct the narrative for the super feature "viewers per job," we find its narrative template "value change," replace the blanks "prev_value", "current_value", and "super_feature" with the feature values and the super feature name "viewers per job," and calculate "percent_change."
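The following sketch illustrates this "value change" template imputation; the template wording and field names are simplified stand-ins for the actual narrative templates.

```python
# Illustrative "value change" template of the kind described above; the exact
# template text and feature metadata are hypothetical.
TEMPLATES = {
    "value_change": (
        "{super_feature} in the last {window} changed from {prev_value} "
        "to {current_value} ({percent_change:+.0%})."
    ),
}

def impute_narrative(insight_type, super_feature, window, prev_value, current_value):
    # Compute the derived blank, then fill the template identified by the insight type.
    pct = (current_value - prev_value) / prev_value if prev_value else 0.0
    return TEMPLATES[insight_type].format(
        super_feature=super_feature, window=window,
        prev_value=prev_value, current_value=current_value, percent_change=pct,
    )

# impute_narrative("value_change", "Viewers per job", "3 months", 88, 116)
# -> "Viewers per job in the last 3 months changed from 88 to 116 (+32%)."
```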
We next show how we select top narratives in a scalable way in Figure 9. We first append the feature importance scores from the Model Interpreter to the feature information table presented in Figure 8.

Figure 8. Feature clustering and template imputation in Narrative Generator.

Figure 9. Narrative ranking and deduplication in Narrative Generator.

During the narrative imputation process, we also calculate the narrative importance score as the largest feature importance score among all the original features that appear in the narrative, and use the narrative importance score to rank all the narratives (heuristics-based reweighting can also be implemented to prioritize actionable narratives). In the meantime, we conduct narrative deduplication by keeping only the narrative with the largest narrative importance score within each ultra feature, since narratives under one ultra feature can be highly overlapping. Finally, we conduct narrative concatenation by concatenating narratives within each category; the concatenated top narratives are the final output of the Narrative Generator.

#### 2.5.2. CrystalCandle implementation details

LinkedIn sales teams use multiple internal sales intelligence platforms. One typical platform, MyBook (embedded in Microsoft Dynamics), aims to help sales reps close deals faster by providing well-organized sales insights and recommendations. Figure 10 shows one typical output of CrystalCandle on MyBook in Project Account Prioritizer. When a sales rep logs into MyBook, a list of accounts is displayed on the MyBook homepage. The column "Existing Customer Propensity Score (LTS) Justification" shows the upsell/churn propensity score for each account from the predictive models. To learn more about the underlying reasons behind each score, sales reps can hover over the "i" button and a small window with more account details will pop up. In this pop-up window, CrystalCandle provides the top narrative insights for each account. Feedback from the sales team has been highly positive: "The predictive models [are] a game changer! The time saved on researching accounts for growth opportunities has been cut down with the data provided in the report which has allowed me to focus on other areas across MyBook."

Figure 10. Mocked CrystalCandle output (Customer Propensity Score insights illustrative example) on MyBook.

### Measurement

#### 2.6.1. Launch with an A/B Test

The primary objective of the account prioritization engine is to enhance the efficiency of sales reps and drive growth in B2B sales revenue. To evaluate its effectiveness, we conducted an A/B test. However, conducting experiments in the sales domain presents unique challenges, including:

1. Low sample size: the sample size is inherently limited by the number of existing customers and the number of sales reps.
2. Lagged treatment effect: the sales cycle, from initial customer discussions to opportunity closure, typically spans 3-6 months. Consequently, measuring the impact of account prioritization on sales reps or individual accounts requires a significant amount of time.
3. Fairness: randomizing the treatment based on sales reps may result in some reps receiving the account scores while others do not. If these scores prove to be beneficial, it could influence the sales performance and compensation of the reps involved.

To address these challenges, we implemented a stratified randomization technique (Figure 11), wherein randomization occurs at the level of individual sales reps and their respective accounts, with accounts stratified by matching across variables such as account size, sales segment, and region. This approach ensures that each rep observes the model scores for a randomly selected 50% of their accounts, while the remaining 50% act as the control group with no scores displayed (Table 1). In addition, the account stratification ensures a similar distribution of accounts in treatment and control.

Figure 11. AB test randomization.
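A minimal sketch of this within-rep stratified randomization is shown below. The stratum columns (`size_band`, `segment`, `region`) and the account table schema are illustrative assumptions; the production assignment logic is not published.

```python
import pandas as pd

def assign_treatment(accounts: pd.DataFrame, seed: int = 7) -> pd.DataFrame:
    """Show scores for a random half of each rep's book, stratified so that
    treatment and control have similar account size / segment / region mixes."""
    out = accounts.copy()
    out["variant"] = "control"
    strata = ["rep_id", "size_band", "segment", "region"]
    for _, idx in out.groupby(strata).groups.items():
        # Within each (rep, stratum) cell, flip half of the accounts to treatment.
        treated = out.loc[idx].sample(frac=0.5, random_state=seed).index
        out.loc[treated, "variant"] = "treatment"
    return out
```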
To maintain transparency and fairness, we proactively communicated this change to the reps, emphasizing that even accounts without scores still provide opportunities for customer growth and that sales reps can identify these opportunities by leveraging their field knowledge. Although this randomization technique may not completely eliminate the placebo effect, it offers the following advantages:

1. It ensures that control is the status quo, i.e., no scores, with sales reps using their intuition and field knowledge to find opportunities.
2. It offers a fairer alternative to reps compared to displaying randomized and distorted scores as a placebo for the control accounts.

In the A/B test, we measured the difference in Renewal Incremental Growth (RIG, which measures the ratio between actual renewal bookings and expected bookings) between the treatment and control groups. Let \(t\) denote treatment and \(c\) denote control, let \(r\) represent the number of reps, and let \(n\) denote the number of accounts. The specific metric we measured was the following:

\[\Delta\text{ in Renewal Incremental Growth}=(\text{Renewal Incremental Growth})_{t}-(\text{Renewal Incremental Growth})_{c}=\frac{1}{\sum_{i=1}^{r}n_{i,t}}\left[\sum_{i=1}^{r}\sum_{j=1}^{n_{i,t}}\left(\frac{\text{Renewal Bookings}}{\text{Renewal Target}}\right)_{j,t}\right]-\frac{1}{\sum_{i=1}^{r}n_{i,c}}\left[\sum_{i=1}^{r}\sum_{j=1}^{n_{i,c}}\left(\frac{\text{Renewal Bookings}}{\text{Renewal Target}}\right)_{j,c}\right],\]

where \(n_{i,t}\) and \(n_{i,c}\) stand for the number of treatment accounts and the number of control accounts for sales rep \(i\), respectively. Furthermore, to calculate the Average Treatment Effect on the Treated (ATT), we tracked sales reps who viewed the scores at least once and calculated the difference in RIG between their treatment and control accounts. The experiment yielded positive results: we observed a +8.08% lift in RIG for the treatment accounts as compared to control (Table 1).

#### 2.6.2. Shifting to Consistent MAUs and Causal Measurement

In the context of sales optimization, where sales reps self-select whether and how frequently they access the scores, we face the challenge of rep adoption and its downstream effect on model impact. The 8.08% increase in Renewal Incremental Growth (RIG) identified through the A/B test applies only when the sales rep visits the data product where the scores are displayed. With constant messaging and initiatives driving adoption, we achieved an impressive 85% cumulative adoption rate (defined as a one-time view of the scores), but as the data product matured, we realized the need to measure the incremental impact of using the scores more frequently as compared to ad hoc usage. Additionally, we observed that the cohort of sales reps viewing the scores more frequently had higher RIG than the reps who were ad hoc users. Hence, there was a need to define a retention metric, examine the effect of differentiated usage, and understand whether the effect is causal. To do so, we introduced the following definitions:

1. Consistent Monthly Active User (Consistent MAU): sales reps who were MAUs for 4 or more months within a 12-month period.
2. Infrequent User: sales reps who were MAUs for 1 to 3 months within a 12-month period.
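These definitions could be operationalized as in the small sketch below, which assumes an illustrative activity table with one row per rep per month and a boolean `viewed_scores` flag.

```python
import pandas as pd

def classify_reps(monthly_activity: pd.DataFrame) -> pd.Series:
    """monthly_activity: one row per (rep_id, month) over a 12-month window,
    with a boolean `viewed_scores` flag; schema is illustrative."""
    months_active = monthly_activity.groupby("rep_id")["viewed_scores"].sum()
    # 0 months -> non_user, 1-3 -> infrequent_user, 4+ -> consistent_mau.
    return pd.cut(
        months_active,
        bins=[-0.5, 0.5, 3.5, 12],
        labels=["non_user", "infrequent_user", "consistent_mau"],
    )
```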
Given that the scores were already 100% ramped, ruling out the possibility of another A/B experiment, we conducted a causal measurement study to quantify the incremental impact of transitioning from an infrequent user to a Consistent MAU. Since the effect is directly linked to a sales rep's behavior and extends to all accounts in their portfolio, we needed to conduct the study at the sales rep level rather than the account level. Using a coarsened exact matching (CEM) [2] based method, we matched treated units (sales reps who became Consistent MAUs from infrequent users) with control units (sales reps who continued to be infrequent users) based on a set of confounders that affect the treatment and the likelihood of being treated (Table 2). The confounders we controlled for include account size, rep region, sales segment, account spend, and rep tenure, along with macro and other business changes. To make this comparison more robust, we:

1. Only took users who converted into a Consistent MAU from an infrequent user during the time the adoption activities were carried out.
2. Applied an A/A test validation, wherein during the pre-treatment period we checked that the treatment and control cohorts did not have any statistically significant difference in RIG.
3. Carried out detailed balance and coverage checks, ensuring that the coverage of matched accounts in control and treatment did not drop below 80% (for generalization of the treatment effect) while lowering the imbalance between the two groups.

Suppose the matched groups of reps are \(R=r_{1}\cup r_{2}\cup\cdots\cup r_{k}\). We then measure \(\Delta\) in Renewal Incremental Growth \(=\) (Renewal Incremental Growth)\(_{t}\) \(-\) (Renewal Incremental Growth)\(_{c}\), where each matched group \(r_{k}\) contributes to the treatment and control averages with CEM weight \(w_{r_{k}}\).

We observed a +20.4% lift in RIG for the sales reps who converted to Consistent MAUs during the treatment period as compared to the sales reps who continued to be infrequent users (Table 2). This further strengthened the case for using the scores more consistently in the sales process and boosted rep adoption.
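For intuition, the sketch below shows a CEM-style estimate of the RIG lift: confounders are coarsened into strata, unmatched strata are dropped, and per-stratum treatment-control differences are combined with weights proportional to the number of treated reps. The stratum columns and the weighting choice are illustrative assumptions, not the exact internal implementation.

```python
import pandas as pd

def cem_weighted_delta_rig(reps: pd.DataFrame) -> float:
    """reps: one row per sales rep with a boolean `treated` flag (became a Consistent MAU),
    `rig` (renewal bookings / renewal target over the response window), and coarsened
    confounders such as segment, region, tenure band, and spend band (illustrative)."""
    strata = ["segment", "region", "tenure_band", "spend_band"]
    deltas, weights = [], []
    for _, g in reps.groupby(strata):
        t, c = g[g["treated"]], g[~g["treated"]]
        if len(t) == 0 or len(c) == 0:      # drop strata without both groups, as in CEM
            continue
        deltas.append(t["rig"].mean() - c["rig"].mean())
        weights.append(len(t))              # weight by treated reps in the stratum (ATT-style)
    return float(sum(d * w for d, w in zip(deltas, weights)) / sum(weights))
```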
## 3. Conclusion and Future Directions

Scaling intelligent and value-driven customer outreach for the LinkedIn sales team is a crucial business challenge. The LinkedIn Data teams developed state-of-the-art machine learning models in Account Prioritizer to provide the sales team with account-level information on churn risk and upsell propensity. We further leveraged the user-facing model explainer CrystalCandle to create top narrative insights for each account-level recommendation. CrystalCandle helps sales teams trust modeling results and extract meaningful insights from them. A/B testing results demonstrate that the launch of Account Prioritizer with CrystalCandle interpretation has led to significant revenue improvements. Sales rep adoption (at 85%) showed strong user intent to use the data product.

Table 1. AB test randomization details and measurement results.

| Variant | Treatment | Control |
| --- | --- | --- |
| Definition | Treatment accounts of reps who visited the data product at least once | Control accounts of reps who visited the data product at least once |
| Product Experience | Scores with account-level explanations | No scores |
| Sample size | ~4.5K accounts | ~4.5K accounts |
| Test Duration | 6 Months | |
| Success Metric | Renewal Incremental Growth (RIG) | |
| Result | +8.08% lift in Average RIG of treatment vs control | |

Table 2. MAU based causal measurement details and results.

| Variant | Treatment | Control |
| --- | --- | --- |
| Definition | Infrequent users who turned into Consistent MAUs | Infrequent users who continued to be infrequent users |
| Sample size | 280 sales reps | 596 sales reps |
| Study Duration | 2 years lookback (6 months pre-treatment for A/A test, 12 months of treatment, 6 months of response collection) | |
| Success Metric | Renewal Incremental Growth (RIG) | |
| Result | +20.4% lift in Average RIG of treatment vs control | |

Although Account Prioritizer + CrystalCandle offers data-driven guidance on renewal and upsell opportunities, it has room to develop further into a much more agile, personalized, and effective recommendation engine with actionable guidance and detailed materials for facilitating customer engagement. For example, additional layers of optimization could be introduced to prioritize the weekly account-related tasks performed by sales reps, such as customer training, sharing of usage reports, conducting product demonstrations, and engaging in budget discussions. By leveraging multitask learning techniques, we can seamlessly integrate the optimization of customer acquisition and existing customer accounts, ensuring a well-balanced approach to optimizing the entire sales funnel. To provide proactive support to sales reps, the system can incorporate dynamic alerts to notify them about upcoming renewal activities, along with offering Next Best Actions (NBAs) to guide their actions. Once the next best action is identified, the recommender can leverage the potential of Generative AI features to generate sales outreach drafts and automate customer reports as part of the pitching material. In essence, this development aims to create a collaborative sales co-pilot that works alongside sales reps to optimize their performance.

## Acknowledgments

We would like to express our sincere appreciation to our colleagues at LinkedIn for their contributions in making this data product a reality, including Liyang Zhao, Adam Chard, Das Apparsamy, Jacqueline Rivas, Jessica Li, Diana Negoescu, Saad Eddin Al Orjany, Yu Liu, Wenrong Zeng, Samba Njie, Xia Hong, Katie Stohs, Rodrigo Aramayo, Kunal Chopra, Guy Berger, Brian Weller, Farhan Syed, Mark Lobosco, Parvez Ahammad, Faisal Farooq, Rehan Khan and Ya Xu. We particularly thank Jiang Zhu and Stephanie Sorenson for their helpful comments and feedback.
2310.15786
Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning
The global inducing point variational approximation for BNNs is based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution. Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint. This structure lends itself to amortised inference, in which the parameters of each approximate likelihood are obtained by passing each datapoint through a meta-model known as the inference network. By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs.
Matthew Ashman, Tommy Rochussen, Adrian Weller
2023-10-24T12:34:25Z
http://arxiv.org/abs/2310.15786v1
###### Abstract

In many machine learning applications, well-calibrated posterior predictive distributions are required for a number of closely-related datasets. Given similarity between datasets, it is natural to wish to develop meta-learning algorithms that utilise other datasets to reduce the computational complexity and / or improve predictive performance when deploying models on newly-seen datasets at test time. There have been a number of significant recent developments in meta-learning for predictive distributions, most notably that of the neural process (NP) family (Garnelo et al., 2018, 2018; Foong et al., 2020; Gordon et al., 2018, 2019). Despite the utility of these methods on large-scale meta-datasets, they perform poorly in settings where the number of datasets and the total number of datapoints is small. We argue that this is a result of the large number of shared model parameters overfitting to the meta-dataset. A natural solution is to remove these shared model parameters, and instead train a meta-model to learn to approximate fully Bayesian inference over task-specific model parameters. Recently, Ober and Aitchison (2021) developed a variational approximation for Bayesian neural networks (BNNs) based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution. Notably, the variational distribution consists of the prior multiplied by a set of approximate likelihoods for each inducing input. Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint. This structure lends itself to amortised inference, in which the parameters of each approximate likelihood are obtained by passing each datapoint through a meta-model known as the inference network. By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs, addressing the challenge above.

Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning

Matthew Ashman\({}^{*}\) ([email protected]), Tommy Rochussen\({}^{*}\) ([email protected]), Adrian Weller ([email protected])

## 1 Introduction

In many machine learning applications, well-calibrated posterior predictive distributions are required for a number of closely-related datasets. Given similarity between datasets, it is natural to wish to develop meta-learning algorithms that utilise other datasets to reduce the computational complexity and / or improve predictive performance when deploying models on newly-seen datasets at test time. There have been a number of significant recent developments in meta-learning for predictive distributions, most notably that of the neural process (NP) family (Garnelo et al., 2018, 2018; Foong et al., 2020; Gordon et al., 2018, 2019). Despite the utility of these methods on large-scale meta-datasets, they perform poorly in settings where the number of datasets and the total number of datapoints is small. We argue that this is a result of the large number of shared model parameters overfitting to the meta-dataset. A natural solution is to remove these shared model parameters, and instead train a meta-model to learn to approximate fully Bayesian inference over task-specific model parameters.
Recently, Ober and Aitchison (2021) developed a variational approximation for Bayesian neural networks (BNNs) based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution. Notably, the variational distribution consists of the prior multiplied by a set of approximate likelihoods for each inducing input. Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint. This structure lends itself to amortised inference, in which the parameters of each approximate likelihood are obtained by passing each datapoint through a meta-model known as the inference network. By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs, addressing the challenge above. ## 2 Related Work Neural processesOur work is most similar to the NP family (Garnelo et al., 2018, 2018), which seeks to meta-learn predictive distributions either through maximisation of the posterior predictive likelihood or variational inference (VI) (Foong et al., 2020). Similar to our method, NPs utilise an encoder to create embeddings for each datapoint. These embeddings are then aggregated to form a distribution over a latent variable which is then sampled and passed, together with a test datapoint, through a decoder. Volpp et al. (2020) propose the use of Bayesian aggregation, in which embeddings of individual datapoints take the form of approximate likelihoods which are multiplied together with the prior to form an approximate posterior distribution over the latent variable. Whilst these methods differ to ours in their use of shared model parameters, our method can be reinterpreted as a member of the NP family in which the latent variables are the parameters of the decoder. Through this perspective we can train our model in an identical way to NPs. Meta-learning neural networksMeta-learning for neural networks had received a significant amount of attention from the research community. Notable examples include MAML Finn et al. (2017) and its extensions (Yoon et al., 2018; Antoniou et al., 2018), which seek good parameter initialisation, and those which explicitly condition on the dataset to obtain task specific parameters (Requeira et al., 2019; Gordon et al., 2018). Whilst conceptually similar to our approach, these methods differ in their use of shared model parameters--the task specific parameters amount to a small subset of the overall model parameters. In addition to requiring a large meta-dataset, this limits these methods to meta-datasets in which the individual datasets are very similar. By contrast, our approach does not use any shared model parameters but rather meta-learns inference in a BNN. We discuss the relationship between our method and NPs in Appendix A, demonstrating that they conceptually differ only in a change of objective functions. ## 3 Background In this section we review the GI-BNN of Ober and Aitchison (2021) and NP family (Garnelo et al., 2018). Throughout, let \(\mathbf{\Xi}=\{\mathcal{D}\}\) denote a meta-dataset of \(|\Xi|\) datasets, and \(\mathcal{D}=\{\mathbf{X},\mathbf{y}\}\) denote a dataset consisting of inputs \(\mathbf{X}\in\mathbb{R}^{N\times D_{0}}\) and outputs \(\mathbf{y}\in\mathbb{R}^{N\times P}\). 
### Global Inducing Point Variational Posteriors for BNNs Let \(\mathbf{W}=\{\mathbf{W}^{\ell}\}_{l=1}^{L}\) denote the weights of an \(L\)-layer neural network, such that \(\mathbf{W}^{\ell}\in\mathbb{R}^{D^{\ell-1}\times D^{\ell}}\) where \(D^{\ell}\) denotes the dimensions of the \(\ell\)-th hidden layer, and \(\psi(\cdot)\) denote the element-wise activation function acting between layers. Ober and Aitchison (2021) introduce the global inducing point variational approximation for BNNs, in which the variational approximation to the posterior \(p(\mathbf{W}|\mathcal{D})\) is defined recursively as \(q_{\phi}(\mathbf{W})=\prod_{\ell=1}^{L}q_{\phi}(\mathbf{W}^{\ell}|\{\mathbf{W }^{\ell}\}_{\ell^{\prime}=1}^{\ell-1},\mathbf{U}^{0})\), where \[q_{\phi}\left(\mathbf{W}^{\ell}|\{\mathbf{W}^{\ell^{\prime}}\}_{\ell^{\prime }=1}^{\ell-1},\mathbf{U}_{0}\right)\propto\prod_{d=1}^{D^{\ell}}p\left( \mathbf{w}_{d}^{\ell}\right)\underbrace{\mathcal{N}\left(\mathbf{v}_{d}^{\ell };\psi(\mathbf{U}^{\ell})\mathbf{w}_{d}^{\ell},\left[\mathbf{\Lambda}_{d}^{ \ell}\right]^{-1}\right)}_{t_{d}^{\ell}(\mathbf{w}_{d}^{\ell})}. \tag{1}\] This mirrors the structure of the true posterior in the sense that it is equivalent to the product of the prior and an _approximate likelihood_, \(t_{d}^{\ell}(\mathbf{w}_{d}^{\ell})\). The variational parameters \(\phi\) of this approximation are the parameters of each approximate likelihood, \(\mathbf{v}_{d}^{\ell}\in\mathbb{R}^{M}\) and \(\mathbf{\Lambda}_{d}^{\ell}\in\mathbb{R}^{M\times M}\)--which themselves can be interpreted as _pseudo observations_--and the _global inducing locations_, \(\mathbf{U}_{0}\in\mathbb{R}^{M\times D_{0}}\), which are used to define \(\{\mathbf{U}^{\ell}\}_{\ell=1}^{L}\) according to \[\mathbf{U}^{1}=\mathbf{U}^{0}\mathbf{W}_{1},\quad\mathbf{U}^{\ell}=\psi( \mathbf{U}^{\ell-1})\mathbf{W}^{\ell}\quad\ell=2,\ldots,L. \tag{2}\] Optimisation of \(\phi\) is achieved through maximisation of the evidence lower bound (ELBO): \[\mathcal{L}_{\mathrm{ELBO}}(\phi;\mathcal{D})=\mathbb{E}_{q_{\phi}(\mathbf{W}) }\left[\log p(\mathbf{y}|\mathbf{W},\mathbf{X})\right]-\mathrm{KL}\left[q_{ \phi}(\mathbf{W})||p(\mathbf{W})\right]. \tag{3}\] We refer to this variational approximation as pseudo-observation variational inference for BNNs (POVI-BNN). Ober and Aitchison (2021) demonstrate the efficacy of POVI-BNNs relative to mean-field Gaussian variational approximations for BNNs, achieving state-of-the-art performance on a number of regression and classification experiments. Their effectiveness is further demonstrated by Bui (2022), who shows that the estimate of the marginal likelihood provided by the POVI-BNN approximation is close to the true value, indicating the approximation is close to the true posterior. ## 4 Amortising Inference in Bayesian Neural Networks In this section, we build upon the work of Ober and Aitchison (2021) to develop an effective method of performing amortised inference in BNNs. 
Consider the same variational approximation described in Section 3.1, except with diagonal precision matrices \(\boldsymbol{\Lambda}_{d}^{\ell}\) and inducing locations \(\mathbf{U}\) replaced by the training locations \(\mathbf{X}\), such that

\[q\left(\mathbf{W}^{\ell}|\{\mathbf{W}^{\ell^{\prime}}\}_{\ell^{\prime}=1}^{\ell-1},\mathcal{D}\right)\propto\prod_{d=1}^{D_{\ell}}p\left(\mathbf{w}_{d}^{\ell}\right)\prod_{n=1}^{N}\underbrace{\mathcal{N}\left(v_{d,n}^{\ell};x_{d,n}^{\ell},\sigma_{d,n}^{\ell}\right)}_{t_{d,n}^{\ell}(\mathbf{w}_{d}^{\ell})} \tag{4}\]

where

\[\mathbf{x}_{n}^{1}=\mathbf{W}^{1}\mathbf{x}_{n},\quad\mathbf{x}_{n}^{\ell}=\mathbf{W}^{\ell}\psi(\mathbf{x}_{n}^{\ell-1})\quad\forall\ell=2,\ldots,L. \tag{5}\]

This form of variational approximation enables the use of per-datapoint amortised inference. Specifically, rather than treating the variational parameters \(\{v_{d,n}^{\ell},\sigma_{d,n}^{\ell}\}\) of each approximate likelihood as free parameters, they are obtained by passing each datapoint \((\mathbf{x}_{n},\mathbf{y}_{n})\) through an inference network whose parameters are shared across datasets; training this inference network across the meta-dataset allows us to meta-learn Bayesian inference over the task-specific network weights.
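To illustrate the idea, the sketch below amortises inference for a single-output final layer: an inference network maps each datapoint to a pseudo-observation \(v_{n}\) and noise scale \(\sigma_{n}\), and the resulting Gaussian approximate likelihoods are combined with an isotropic Gaussian prior in closed form. This is a simplified, single-layer illustration of the construction above (the scheme in the paper is applied layer-wise through the whole network), and the architecture and prior variance are arbitrary choices.

```python
import torch
import torch.nn as nn

class AmortisedLastLayer(nn.Module):
    """Minimal sketch: per-datapoint approximate likelihoods N(v_n; w^T phi(x_n), sigma_n^2)
    over the weights w of a single output layer, with (v_n, log sigma_n) produced by an
    inference network shared across datasets."""

    def __init__(self, in_dim: int, hidden: int = 50, prior_var: float = 1.0):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.inference_net = nn.Sequential(
            nn.Linear(in_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )
        self.prior_var = prior_var

    def posterior(self, x: torch.Tensor, y: torch.Tensor):
        # x: (N, in_dim), y: (N, 1). One forward pass of the dataset gives q(w | D).
        phi = self.features(x)                                   # (N, H)
        v, log_sigma = self.inference_net(torch.cat([x, y], dim=-1)).unbind(-1)
        lam = torch.exp(-2.0 * log_sigma)                        # per-datapoint precisions
        # Gaussian posterior: prior N(0, prior_var I) times the N approximate likelihoods.
        precision = torch.eye(phi.shape[-1]) / self.prior_var + phi.T @ (lam[:, None] * phi)
        cov = torch.linalg.inv(precision)
        mean = cov @ (phi.T @ (lam * v))
        return mean, cov
```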
An important limitation of this approach is that performing stochastic optimisation through mini-batching datapoints is not possible, since computing \(q(\mathbf{W}|\mathcal{D})\) requires passing the entire dataset \(\mathcal{D}\) through the inference network. Nonetheless, this limitation is only significant for large datasets--provided the entire dataset can be passed through the network at once (i.e. in the small to medium-sized dataset regime which we consider here), this is not an issue. At test time, we can obtain an approximate posterior \(q(\mathbf{W}|\mathcal{D}_{*})\) with a single pass of the dataset through the inference networks.

## 5 Results and Discussion

In this section, we evaluate the performance of our model in the meta-learning setting. We consider a synthetic meta-dataset consisting of samples from a GP with a squared-exponential (SE) kernel. Each dataset consists of between 10 and 20 training datapoints. We compare A-POVI-BNNs to amortised mean-field variational inference for BNNs (A-MFVI-BNN), which we detail in Appendix B, and a ConvCNP (Gordon et al., 2019). All NNs used in the amortised BNN architectures (both the model and inference network) consist of two layers of 50 hidden units and ReLU activation functions.

Figure 1: Posterior predictive distributions for A-POVI-BNN (top), A-MFVI-BNN (middle), and ConvCNP (bottom) after training on a meta-dataset of size \(|\Xi|=1\) (left) and \(|\Xi|=100\) (right). Data points are shown as black dots, the predictive mean is shown as a dark blue line and the 95% confidence interval is shown as shaded blue.
The ConvCNP implementation and architecture are identical to those provided in [https://github.com/cambridge-mlg/convcnp/blob/master/convcnp_regression.ipynb](https://github.com/cambridge-mlg/convcnp/blob/master/convcnp_regression.ipynb). Figures 1(_a_) to 1(_f_) compare the posterior predictive distributions of each method on an unseen dataset drawn from a GP with the same hyperparameters as those in the meta-dataset. We evaluate the predictions of each method trained on a meta-dataset of size \(|\Xi|=1\) and \(|\Xi|=100\). We see that only A-POVI-BNN obtains a sensible predictive posterior on the unseen dataset in both cases. The ConvCNP performs poorly for \(|\Xi|=1\), which is unsurprising given that its large number of model parameters increases its susceptibility to overfitting. The A-MFVI-BNN performs significantly better for \(|\Xi|=100\), yet in both cases the quality of its predictive posterior is poor relative to both the A-POVI-BNN and the ConvCNP. Despite these results being very preliminary, they are encouraging and suggest that A-POVI-BNNs may provide a more effective alternative to NPs when the size of the meta-dataset is small. We intend to explore the effectiveness of A-POVI-BNNs in more diverse settings, such as image completion, in future work.